Multiview face detection based on position estimation over multicamera surveillance system
NASA Astrophysics Data System (ADS)
Huang, Ching-chun; Chou, Jay; Shiu, Jia-Hou; Wang, Sheng-Jyh
2012-02-01
In this paper, we propose a multi-view face detection system that locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods rely on face detection in 2-D images and project the face regions back to 3-D space for correspondence. However, the inevitable false detections and rejections usually degrade system performance. Instead, our system searches for heads and face directions over the 3-D space using a sliding cube. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. Moreover, a pre-processing step that estimates the locations of candidate targets is introduced to speed up the search over the 3-D space. In summary, our proposed method can efficiently fuse multi-camera information and suppress the ambiguity caused by detection errors. Our evaluation shows that the proposed approach can efficiently indicate the head position and face direction in real video sequences even under serious occlusion.
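The cube-projection step described above can be sketched in a few lines; this is a minimal illustration assuming an axis-aligned pinhole camera, and `project`, `fuse_cube_score`, and the `face_scorer` callback are hypothetical names, not the paper's implementation:

```python
def project(cam, pt):
    """Pinhole projection of a world point into pixel coordinates.

    cam: dict with focal length "f", principal point ("cx", "cy"), and
    camera position "pos"; for simplicity every camera looks down the
    +Z world axis (no rotation), which is an illustrative assumption.
    """
    x, y, z = (pt[i] - cam["pos"][i] for i in range(3))
    if z <= 0:
        return None  # point is behind the camera
    return (cam["f"] * x / z + cam["cx"], cam["f"] * y / z + cam["cy"])

def fuse_cube_score(cams, cube_center, face_scorer):
    """Score one sliding-cube position by averaging per-view face scores.

    face_scorer(cam, uv) stands in for a 2-D face classifier evaluated
    at the projected location; cameras that cannot see the cube are
    simply skipped, which is how multi-view evidence is fused here.
    """
    scores = []
    for cam in cams:
        uv = project(cam, cube_center)
        if uv is not None:
            scores.append(face_scorer(cam, uv))
    return sum(scores) / len(scores) if scores else 0.0
```

Sliding this cube over a 3-D grid and thresholding the fused score gives the occupancy search the abstract describes.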
A multi-camera system for real-time pose estimation
NASA Astrophysics Data System (ADS)
Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin
2007-04-01
This paper presents a multi-camera system that performs face detection and pose estimation in real-time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates about the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
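The geometric idea behind the eyes-mouth triangle can be illustrated with a much-simplified sketch. Under rotation about the vertical axis, the projected interocular distance forecloses as cos(yaw) while the triangle height is roughly unchanged; the paper derives fuller equations from the head-sphere model, and `ratio_frontal` below is an assumed subject-specific calibration value, not a quantity from the paper:

```python
import math

def estimate_yaw(eye_dist_px, triangle_height_px, ratio_frontal):
    """Estimate yaw (degrees) from foreshortening of the eye distance.

    r = eye_dist / triangle_height shrinks with yaw as r0 * cos(yaw),
    where r0 (ratio_frontal) is the ratio measured at frontal pose.
    """
    r = eye_dist_px / triangle_height_px
    c = max(-1.0, min(1.0, r / ratio_frontal))  # clamp against noise
    return math.degrees(math.acos(c))
```

For example, with a frontal ratio of 1.0, a measured ratio of 0.5 corresponds to a yaw of 60 degrees.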
Computer vision research with new imaging technology
NASA Astrophysics Data System (ADS)
Hou, Guangqi; Liu, Fei; Sun, Zhenan
2015-12-01
Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, which record both intensity values and directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods only perform well on strongly texture surfaces, but the depth map contains numerous holes and large ambiguities on textureless or low-textured regions. In this paper, we exploit the light field imaging technology on 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. Then the depth map is estimated through the epipolar plane images (EPIs) method. At last, the high quality 3D face model is exactly recovered via the fusing strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera with different poses.
User-assisted visual search and tracking across distributed multi-camera networks
NASA Astrophysics Data System (ADS)
Raja, Yogesh; Gong, Shaogang; Xiang, Tao
2011-11-01
Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.
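The first component above, matching by relative ranking rather than absolute scoring, can be sketched minimally. The feature weighting would in practice be learned (the paper learns the best features for matching); `rank_gallery` and the tuple layout are illustrative names, not the system's API:

```python
def rank_gallery(probe, gallery, weights):
    """Rank gallery entries by weighted absolute feature difference.

    probe: feature tuple for the person to re-identify.
    gallery: list of (person_id, feature_tuple) from other camera views.
    Re-identification is treated as a ranking problem: candidates are
    ordered by relative distance to the probe instead of being accepted
    or rejected by an absolute score threshold.
    """
    def dist(feat):
        return sum(w * abs(p - f) for w, p, f in zip(weights, probe, feat))
    return sorted(gallery, key=lambda item: dist(item[1]))
```

A human operator (component 3) would then inspect the top-ranked candidates rather than trust a hard threshold.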
Three-dimensional face model reproduction method using multiview images
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio
1991-11-01
This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images and synthesize images of the model as viewed virtually from different angles, with natural shadows to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from the front and side views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by a TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined using the face direction, namely the rotation angle, which is detected based on the extracted feature points. After the 3D model is established, new images are synthesized by mapping facial texture onto the model.
The Potential of Low-Cost Rpas for Multi-View Reconstruction of Sub-Vertical Rock Faces
NASA Astrophysics Data System (ADS)
Thoeni, K.; Guccione, D. E.; Santise, M.; Giacomini, A.; Roncella, R.; Forlani, G.
2016-06-01
The current work investigates the potential of two low-cost off-the-shelf quadcopters for multi-view reconstruction of sub-vertical rock faces. The two platforms used are a DJI Phantom 1 equipped with a Gopro Hero 3+ Black and a DJI Phantom 3 Professional with integrated camera. The study area is a small sub-vertical rock face. Several flights were performed with both cameras set in time-lapse mode. Hence, images were taken automatically but the flights were performed manually as the investigated rock face is very irregular which required manual adjustment of the yaw and roll for optimal coverage. The digital images were processed with commercial SfM software packages. Several processing settings were investigated in order to find out the one providing the most accurate 3D reconstruction of the rock face. To this aim, all 3D models produced with both platforms are compared to a point cloud obtained with a terrestrial laser scanner. Firstly, the difference between the use of coded ground control targets and the use of natural features was studied. Coded targets generally provide the best accuracy, but they need to be placed on the surface, which is not always possible, as sub-vertical rock faces are not easily accessible. Nevertheless, natural features can provide a good alternative if wisely chosen as shown in this work. Secondly, the influence of using fixed interior orientation parameters or self-calibration was investigated. The results show that, in the case of the used sensors and camera networks, self-calibration provides better results. To support such empirical finding, a numerical investigation using a Monte Carlo simulation was performed.
Interior view showing south entrance; camera facing south. Mare ...
Interior view showing south entrance; camera facing south. - Mare Island Naval Shipyard, Machine Shop, California Avenue, southwest corner of California Avenue & Thirteenth Street, Vallejo, Solano County, CA
VIEW OF EAST ELEVATION; CAMERA FACING WEST Mare Island ...
VIEW OF EAST ELEVATION; CAMERA FACING WEST - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA
VIEW OF SOUTH ELEVATION; CAMERA FACING NORTH Mare Island ...
VIEW OF SOUTH ELEVATION; CAMERA FACING NORTH - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA
VIEW OF WEST ELEVATION: CAMERA FACING NORTHEAST Mare Island ...
VIEW OF WEST ELEVATION: CAMERA FACING NORTHEAST - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA
VIEW OF NORTH ELEVATION; CAMERA FACING SOUTH Mare Island ...
VIEW OF NORTH ELEVATION; CAMERA FACING SOUTH - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA
View of south elevation; camera facing northeast. Mare Island ...
View of south elevation; camera facing northeast. - Mare Island Naval Shipyard, Hospital Headquarters, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
View of north elevation; camera facing southeast. Mare Island ...
View of north elevation; camera facing southeast. - Mare Island Naval Shipyard, Hospital Headquarters, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
Contextual view of building 733; camera facing southeast. Mare ...
Contextual view of building 733; camera facing southeast. - Mare Island Naval Shipyard, WAVES Officers Quarters, Cedar Avenue, west side between Tisdale Avenue & Eighth Street, Vallejo, Solano County, CA
Oblique view of southeast corner; camera facing northwest. Mare ...
Oblique view of southeast corner; camera facing northwest. - Mare Island Naval Shipyard, Defense Electronics Equipment Operating Center, I Street, terminus west of Cedar Avenue, Vallejo, Solano County, CA
Interior view of second floor sleeping area; camera facing south. ...
Interior view of second floor sleeping area; camera facing south. - Mare Island Naval Shipyard, Marine Barracks, Cedar Avenue, west side between Twelfth & Fourteenth Streets, Vallejo, Solano County, CA
View of camera station located northeast of Building 70022, facing ...
View of camera station located northeast of Building 70022, facing northwest - Naval Ordnance Test Station Inyokern, Randsburg Wash Facility Target Test Towers, Tower Road, China Lake, Kern County, CA
Interior view of second floor lobby; camera facing south. ...
Interior view of second floor lobby; camera facing south. - Mare Island Naval Shipyard, Hospital Headquarters, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
Interior view of second floor space; camera facing southwest. ...
Interior view of second floor space; camera facing southwest. - Mare Island Naval Shipyard, Hospital Ward, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
Interior view of north wing, south wall offices; camera facing ...
Interior view of north wing, south wall offices; camera facing south. - Mare Island Naval Shipyard, Smithery, California Avenue, west side at California Avenue & Eighth Street, Vallejo, Solano County, CA
Contextual view of building 926 west elevation; camera facing east. ...
Contextual view of building 926 west elevation; camera facing east. - Mare Island Naval Shipyard, Wilderman Hall, Johnson Lane, north side adjacent to (south of) Hospital Complex, Vallejo, Solano County, CA
Interior view of hallway on second floor; camera facing south. ...
Interior view of hallway on second floor; camera facing south. - Mare Island Naval Shipyard, WAVES Officers Quarters, Cedar Avenue, west side between Tisdale Avenue & Eighth Street, Vallejo, Solano County, CA
Contextual view of building 733 along Cedar Avenue; camera facing ...
Contextual view of building 733 along Cedar Avenue; camera facing southwest. - Mare Island Naval Shipyard, WAVES Officers Quarters, Cedar Avenue, west side between Tisdale Avenue & Eighth Street, Vallejo, Solano County, CA
View of main terrace with mature tree, camera facing southeast ...
View of main terrace with mature tree, camera facing southeast - Naval Training Station, Senior Officers' Quarters District, Naval Station Treasure Island, Yerba Buena Island, San Francisco, San Francisco County, CA
View of steel warehouses, building 710 north sidewalk; camera facing ...
View of steel warehouses, building 710 north sidewalk; camera facing east. - Naval Supply Annex Stockton, Steel Warehouse Type, Between James & Humphreys Drives south of Embarcadero, Stockton, San Joaquin County, CA
An attentive multi-camera system
NASA Astrophysics Data System (ADS)
Napoletano, Paolo; Tisato, Francesco
2014-03-01
Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be revised by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying videos from each camera of the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of one video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that tries to use a model of human visual attention for the dynamic selection of the camera view of a multi-camera system. The proposed method has been evaluated in a given scenario and has demonstrated its effectiveness with respect to other methods and a manually generated ground truth. The effectiveness has been evaluated in terms of the number of correct best views generated by the method with respect to the camera views manually selected by a human operator.
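The best-view selection can be sketched as a scoring loop; combining the top-down and bottom-up cues by a simple product is an assumption here (the paper's attention model is richer), and the names are illustrative:

```python
def select_best_view(views, top_down_weights):
    """Pick the camera whose combined attention score is highest.

    views: list of (camera_id, bottom_up_saliency), where saliency would
    come from image cues such as motion or contrast.
    top_down_weights: dict mapping camera_id to a task-driven prior
    (e.g. cameras covering an entrance may be weighted up).
    """
    def score(view):
        cam_id, saliency = view
        return saliency * top_down_weights.get(cam_id, 1.0)
    return max(views, key=score)[0]
```

The selected camera's feed would then replace the monitor-wall scanning that the operator otherwise performs by hand.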
Multi-color pyrometry imaging system and method of operating the same
Estevadeordal, Jordi; Nirmalan, Nirm Velumylum; Tralshawala, Nilesh; Bailey, Jeremy Clyde
2017-03-21
A multi-color pyrometry imaging system for a high-temperature asset includes at least one viewing port in optical communication with at least one high-temperature component of the high-temperature asset. The system also includes at least one camera device in optical communication with the at least one viewing port. The at least one camera device includes a camera enclosure and at least one camera aperture defined in the camera enclosure. The at least one camera aperture is in optical communication with the at least one viewing port. The at least one camera device also includes a multi-color filtering mechanism coupled to the enclosure. The multi-color filtering mechanism is configured to sequentially transmit photons within a first predetermined wavelength band and transmit photons within a second predetermined wavelength band that is different than the first predetermined wavelength band.
MTR STACK, TRA710, CONTEXTUAL VIEW, CAMERA FACING SOUTH. PERIMETER SECURITY ...
MTR STACK, TRA-710, CONTEXTUAL VIEW, CAMERA FACING SOUTH. PERIMETER SECURITY FENCE AND SECURITY LIGHTING IN VIEW AT LEFT. INL NEGATIVE NO. HD52-1-1. Mike Crane, Photographer, 5/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
2. View from same camera position facing 232 degrees southwest ...
2. View from same camera position facing 232 degrees southwest showing abandoned section of old grade - Oak Creek Administrative Center, One half mile east of Zion-Mount Carmel Highway at Oak Creek, Springdale, Washington County, UT
3. CONTEXTUAL VIEW OF WASTE CALCINING FACILITY, CAMERA FACING NORTHEAST. ...
3. CONTEXTUAL VIEW OF WASTE CALCINING FACILITY, CAMERA FACING NORTHEAST. SHOWS RELATIONSHIP BETWEEN DECONTAMINATION ROOM, ADSORBER REMOVAL HATCHES (FLAT ON GRADE), AND BRIDGE CRANE. INEEL PROOF NUMBER HD-17-2. - Idaho National Engineering Laboratory, Old Waste Calcining Facility, Scoville, Butte County, ID
A multi-view face recognition system based on cascade face detector and improved Dlib
NASA Astrophysics Data System (ADS)
Zhou, Hongjun; Chen, Pei; Shen, Wei
2018-03-01
In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and improved Dlib. This method aims to solve the problems of low efficiency and low accuracy in multi-view face recognition, to build a multi-view face recognition system, and to discover a suitable monitoring scheme. For face detection, the cascade face detector is used to extract Haar-like features from the training samples, and these features are used to train a cascade classifier with the AdaBoost algorithm. Next, for face recognition, we propose an improved distance model based on Dlib to improve the accuracy of multi-view face recognition. Furthermore, we apply the proposed method to recognizing face images taken from different viewing directions, including the horizontal, overlooking, and looking-up views, and investigate a suitable monitoring scheme. The method works well for multi-view face recognition; it has also been simulated and tested, showing satisfactory experimental results.
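The attentional-cascade structure that makes this detector efficient can be shown in miniature. This is the generic Viola-Jones-style cascade shape (AdaBoost-weighted weak classifiers per stage, early rejection), not the authors' trained classifier; all names are illustrative:

```python
def cascade_detect(window, stages):
    """Run one detection window through an attentional cascade.

    stages: list of (weak_classifiers, threshold), where weak_classifiers
    is a list of (feature_fn, weight) pairs. feature_fn stands in for a
    Haar-like feature response on the window; the weights come from
    AdaBoost training. A window is rejected as soon as one stage's
    weighted vote falls below its threshold, which is what makes the
    cascade fast on the many non-face regions of an image.
    """
    for weak, threshold in stages:
        vote = sum(w * fn(window) for fn, w in weak)
        if vote < threshold:
            return False  # early rejection
    return True  # survived every stage: report a face
```

In a full detector this function is evaluated over a sliding window at multiple scales.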
Barrier Coverage for 3D Camera Sensor Networks
Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi; Ji, Peng; Chu, Hao
2017-01-01
Barrier coverage, an important research area with respect to camera sensor networks, consists of a number of camera sensors to detect intruders that pass through the barrier area. Existing works on barrier coverage such as local face-view barrier coverage and full-view barrier coverage typically assume that each intruder is considered as a point. However, the crucial feature (e.g., size) of the intruder should be taken into account in the real-world applications. In this paper, we propose a realistic resolution criterion based on a three-dimensional (3D) sensing model of a camera sensor for capturing the intruder’s face. Based on the new resolution criterion, we study the barrier coverage of a feasible deployment strategy in camera sensor networks. Performance results demonstrate that our barrier coverage with more practical considerations is capable of providing a desirable surveillance level. Moreover, compared with local face-view barrier coverage and full-view barrier coverage, our barrier coverage is more reasonable and closer to reality. To the best of our knowledge, our work is the first to propose barrier coverage for 3D camera sensor networks. PMID:28771167
Barrier Coverage for 3D Camera Sensor Networks.
Si, Pengju; Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi; Ji, Peng; Chu, Hao
2017-08-03
Barrier coverage, an important research area with respect to camera sensor networks, consists of a number of camera sensors to detect intruders that pass through the barrier area. Existing works on barrier coverage such as local face-view barrier coverage and full-view barrier coverage typically assume that each intruder is considered as a point. However, the crucial feature (e.g., size) of the intruder should be taken into account in the real-world applications. In this paper, we propose a realistic resolution criterion based on a three-dimensional (3D) sensing model of a camera sensor for capturing the intruder's face. Based on the new resolution criterion, we study the barrier coverage of a feasible deployment strategy in camera sensor networks. Performance results demonstrate that our barrier coverage with more practical considerations is capable of providing a desirable surveillance level. Moreover, compared with local face-view barrier coverage and full-view barrier coverage, our barrier coverage is more reasonable and closer to reality. To the best of our knowledge, our work is the first to propose barrier coverage for 3D camera sensor networks.
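The resolution criterion underlying both versions of this paper can be sketched with a pinhole approximation. The paper's criterion also involves the 3-D viewing geometry toward the intruder's face; this sketch keeps only the distance term, and the constants in the usage example are invented:

```python
def face_resolution_ok(face_width_m, distance_m, focal_px, min_face_px):
    """Check whether a camera images a face at usable resolution.

    Under a pinhole model, an object of width W at distance d spans
    roughly W * f / d pixels (f in pixel units). The face is considered
    captured only if that span meets a minimum pixel count, so intruder
    size, unlike in point-based coverage models, matters.
    """
    return face_width_m * focal_px / distance_m >= min_face_px
```

For example, a 0.15 m wide face at 3 m with f = 1000 px spans 50 px, which passes a 40 px requirement but fails a 60 px one.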
PROCESS WATER BUILDING, TRA605. CONTEXTUAL VIEW, CAMERA FACING SOUTHEAST. PROCESS ...
PROCESS WATER BUILDING, TRA-605. CONTEXTUAL VIEW, CAMERA FACING SOUTHEAST. PROCESS WATER BUILDING AND ETR STACK ARE IN LEFT HALF OF VIEW. TRA-666 IS NEAR CENTER, ABUTTED BY SECURITY BUILDING; TRA-626, AT RIGHT EDGE OF VIEW BEHIND BUS. INL NEGATIVE NO. HD46-34-1. Mike Crane, Photographer, 4/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Appearance-based multimodal human tracking and identification for healthcare in the digital home.
Yang, Mau-Tsuen; Huang, Shen-Yen
2014-08-05
There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare.
Appearance-Based Multimodal Human Tracking and Identification for Healthcare in the Digital Home
Yang, Mau-Tsuen; Huang, Shen-Yen
2014-01-01
There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare. PMID:25098207
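The track-based majority voting used for identification in both versions of this paper is simple to sketch; `identify_track` is an illustrative name, and in the real system each per-frame label would itself come from multimodal fusion of face, body appearance, and silhouette:

```python
from collections import Counter

def identify_track(frame_labels):
    """Identify a track by majority vote over its per-frame labels.

    frame_labels: identity predicted independently in each frame of the
    track. Voting at the track level suppresses occasional per-frame
    misidentifications caused by occlusion or poor viewing angle.
    """
    if not frame_labels:
        return None
    return Counter(frame_labels).most_common(1)[0][0]
```

A track labeled ["alice", "bob", "alice"] frame by frame is thus reported as "alice".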
PBF Cooling Tower contextual view. Camera facing southwest. West wing ...
PBF Cooling Tower contextual view. Camera facing southwest. West wing and north facade (rear) of Reactor Building (PER-620) is at left; Cooling Tower to right. Photographer: Kirsh. Date: November 2, 1970. INEEL negative no. 70-4913 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
NASA Astrophysics Data System (ADS)
Liu, Yu-Che; Huang, Chung-Lin
2013-03-01
This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) imagery of the human object's face for biometric purposes, (2) optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on expected capture conditions such as the camera-subject distance, pan and tilt angles of capture, face visibility, and others. This objective function serves to effectively balance the number of captures per subject and the quality of captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
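An objective function of this kind can be sketched as follows; the multiplicative combination and the reference constants `d_ref` and `angle_ref` are illustrative choices, not values from the paper:

```python
import math

def capture_quality(distance_m, pan_deg, tilt_deg, face_visible,
                    d_ref=5.0, angle_ref=45.0):
    """Toy objective scoring one candidate PTZ capture of a subject.

    Combines camera-subject distance, pan/tilt deviation from a head-on
    view, and face visibility into a score in [0, 1]. A scheduler would
    assign each subject to the camera maximizing this score, trading off
    capture quality against the number of captures per subject.
    """
    if not face_visible:
        return 0.0  # no biometric value without a visible face
    d_term = math.exp(-distance_m / d_ref)            # closer is better
    a_term = max(0.0, 1.0 - (abs(pan_deg) + abs(tilt_deg)) / (2 * angle_ref))
    return d_term * a_term
```

Hand-off cost (concern 3) could be folded in as a penalty on reassigning a camera between subjects.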
1. CONTEXTUAL VIEW OF WASTE CALCINING FACILITY. CAMERA FACING NORTHEAST. ...
1. CONTEXTUAL VIEW OF WASTE CALCINING FACILITY. CAMERA FACING NORTHEAST. ON RIGHT OF VIEW IS PART OF EARTH/GRAVEL SHIELDING FOR BIN SET. AERIAL STRUCTURE MOUNTED ON POLES IS PNEUMATIC TRANSFER SYSTEM FOR DELIVERY OF SAMPLES BEING SENT FROM NEW WASTE CALCINING FACILITY TO THE CPP REMOTE ANALYTICAL LABORATORY. INEEL PROOF NUMBER HD-17-1. - Idaho National Engineering Laboratory, Old Waste Calcining Facility, Scoville, Butte County, ID
The potential of low-cost RPAS for multi-view reconstruction of rock cliffs
NASA Astrophysics Data System (ADS)
Ettore Guccione, Davide; Thoeni, Klaus; Santise, Marina; Giacomini, Anna; Roncella, Riccardo; Forlani, Gianfranco
2016-04-01
RPAS, also known as drones or UAVs, have been used in military applications for many years. Nevertheless, the technology has become accessible to everyone only in recent years (Westoby et al., 2012; Nex and Remondino, 2014). Electric multirotor helicopters, or multicopters, have become one of the most exciting developments, and several off-the-shelf platforms (including camera) are now available. In particular, RPAS can provide 3D models of sub-vertical rock faces, which for instance are needed for rockfall hazard assessments along road cuts and very steep mountains. The current work investigates the potential of two low-cost off-the-shelf quadcopters equipped with digital cameras for multi-view reconstruction of sub-vertical rock cliffs. The two platforms used are a DJI Phantom 1 (P1) equipped with a GoPro Hero 3+ (12 MP) and a DJI Phantom 3 Professional (P3). The latter comes with an integrated 12 MP camera mounted on a 3-axis gimbal. Both platforms cost less than €1,500 including camera. The study area is a small rock cliff near the Callaghan Campus of the University of Newcastle (Thoeni et al., 2014). The wall is partly smooth with some evident geological features such as non-persistent joints and sharp edges. Several flights were performed with both cameras set in time-lapse mode. Hence, images were taken automatically, but the flights were performed manually because the investigated rock face is very irregular, which required adjusting the yaw and roll for optimal coverage, as the flights were performed very close to the cliff face. The digital images were processed with a commercial SfM software package, and several processing options and camera networks were investigated in order to define the most accurate configuration. Firstly, the difference between the use of coded ground control targets versus natural features was studied.
Coded targets generally provide the best accuracy, but they need to be placed on the surface, which is not always possible, as rock cliffs are not easily accessible. Nevertheless, natural features can provide a good alternative if chosen wisely. Secondly, the influence of using fixed interior orientation parameters versus self-calibration was investigated. The results show that, in the case of the used sensors and camera networks, self-calibration provides better results. This can mainly be attributed to the fact that the object distance is not constant and rather small (less than 10 m) and that both cameras do not provide an option for fixing the interior orientation parameters. Finally, the results of both platforms are also compared to a point cloud obtained with a terrestrial laser scanner, where generally a very good agreement is observed. References Nex, F., Remondino, F. (2014) UAV for 3D mapping applications: a review. Applied Geomatics 6(1), 1-15. Thoeni, K., Giacomini, A., Murtagh, R., Kniest, E. (2014) A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5, 573-580. Westoby, M.J., Brasington, J., Glasser, N.F., Hambrey, M.J., Reynolds, J.M. (2012) 'Structure-from-Motion' photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 179, 300-314.
LOFT. Containment building (TAN650) detail. Camera facing east. Service building ...
LOFT. Containment building (TAN-650) detail. Camera facing east. Service building corner is at left of view above personnel access. Round feature at left of dome is tank that will contain borated water. Metal stack at right of view. Date: 1973. INEEL negative no. 73-1085 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System
Lu, Yu; Wang, Keyi; Fan, Gongshu
2016-01-01
A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the influence of the focusing effect on the uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is recovered from the sensor calibration results and used to blend images so that the panoramas reflect the true scene luminance. This compensates for the limitation of stitching methods that achieve realistic-looking seams only through smoothing. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that cover a large field of view. The dynamic range is expanded by 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
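The vignetting compensation described in this abstract amounts, in its simplest form, to a flat-field correction using a uniform-light calibration frame and a dark frame. The sketch below is an illustrative stand-in, not the authors' calibration code; the function name and the synthetic radial-falloff model are assumptions:

```python
import numpy as np

def flat_field_correct(image, flat, dark):
    """Correct vignetting with a flat-field (uniform-light) frame and a
    dark-current frame; flat_norm is the per-pixel relative gain."""
    flat_norm = (flat - dark) / np.mean(flat - dark)
    return (image - dark) / np.maximum(flat_norm, 1e-6)

# synthetic example: a uniform 100-unit scene dimmed toward the corners
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (h / 2) ** 2
vignette = 1.0 - 0.4 * r2                 # radial falloff (min 0.2)
dark = np.full((h, w), 2.0)
observed = 100.0 * vignette + dark        # what the sensor records
flat = 100.0 * vignette + dark            # flat frame shows same falloff
corrected = flat_field_correct(observed, flat, dark)
```

Note that flat-field correction recovers the scene only up to a global scale factor; absolute luminance still requires the radiometric response calibration described in the abstract.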
INTERIOR VIEW OF FIRST STORY SPACE SHOWING CONCRETE BEAMS; CAMERA ...
INTERIOR VIEW OF FIRST STORY SPACE SHOWING CONCRETE BEAMS; CAMERA FACING NORTH - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA
MTR BUILDING INTERIOR, TRA603. BASEMENT. CAMERA IN WEST CORRIDOR FACING ...
MTR BUILDING INTERIOR, TRA-603. BASEMENT. CAMERA IN WEST CORRIDOR FACING SOUTH. FREIGHT ELEVATOR IS AT RIGHT OF VIEW. AT CENTER VIEW IS MTR VAULT NO. 1, USED TO STORE SPECIAL OR FISSIONABLE MATERIALS. INL NEGATIVE NO. HD46-6-3. Mike Crane, Photographer, 2/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
1. VIEW OF ARVFS BUNKER TAKEN FROM GROUND ELEVATION. CAMERA ...
1. VIEW OF ARVFS BUNKER TAKEN FROM GROUND ELEVATION. CAMERA FACING NORTH. VIEW SHOWS PROFILE OF BUNKER IN RELATION TO NATURAL GROUND ELEVATION. TOP OF BUNKER HAS APPROXIMATELY THREE FEET OF EARTH COVER. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID
ENGINEERING TEST REACTOR (ETR) BUILDING, TRA642. CONTEXTUAL VIEW, CAMERA FACING ...
ENGINEERING TEST REACTOR (ETR) BUILDING, TRA-642. CONTEXTUAL VIEW, CAMERA FACING EAST. VERTICAL METAL SIDING. ROOF IS SLIGHTLY ELEVATED AT CENTER LINE FOR DRAINAGE. WEST SIDE OF ETR COMPRESSOR BUILDING, TRA-643, PROJECTS TOWARD LEFT AT FAR END OF ETR BUILDING. INL NEGATIVE NO. HD46-37-1. Mike Crane, Photographer, 4/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM ESOUTH, ...
DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM E-SOUTH, HB-3, FACING SOUTHWEST - Cape Canaveral Air Force Station, Launch Complex 39, Vehicle Assembly Building, VAB Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL
LPT. Low power assembly and test building (TAN640). Camera facing ...
LPT. Low power assembly and test building (TAN-640). Camera facing west. Rollup doors to each test cell face east. Concrete walls poured in place. Apparatus at right of view was part of a post-ANP program. INEEL negative no. HD-40-1-1 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
A&M. Guard house (TAN638), contextual view. Built in 1968. Camera ...
A&M. Guard house (TAN-638), contextual view. Built in 1968. Camera faces south. Guard house controlled access to radioactive waste storage tanks beyond and to left of view. Date: February 4, 2003. INEEL negative no. HD-33-4-1 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
LOFT complex, camera facing west. Mobile entry (TAN624) is position ...
LOFT complex, camera facing west. Mobile entry (TAN-624) is positioned next to containment building (TAN-650). Shielded roadway entrance in view just below and to right of stack. Borated water tank has been covered with weather shelter and is no longer visible. ANP hangar (TAN-629) in view beyond LOFT. Date: 1974. INEEL negative no. 74-4191 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
ETR HEAT EXCHANGER BUILDING, TRA644. SOUTH SIDE. CAMERA FACING NORTH. ...
ETR HEAT EXCHANGER BUILDING, TRA-644. SOUTH SIDE. CAMERA FACING NORTH. NOTE POURED CONCRETE WALLS. ETR IS AT LEFT OF VIEW. NOTE DRIVEWAY INSET AT RIGHT FORMED BY DEMINERALIZER WING AT RIGHT. SOUTHEAST CORNER OF ETR, TRA-642, IN VIEW AT UPPER LEFT. INL NEGATIVE NO. HD46-36-1. Mike Crane, Photographer, 4/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
NASA Astrophysics Data System (ADS)
Thoeni, K.; Giacomini, A.; Murtagh, R.; Kniest, E.
2014-06-01
This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements to obtain results comparable to those of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from distances of approximately 5 to 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and to generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS. The latter is taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate their applicability to geotechnical problems.
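The CloudCompare comparison step described above reduces, at its core, to a nearest-neighbour distance from each reconstructed point to the TLS reference cloud. A minimal brute-force sketch with illustrative synthetic data, not the actual survey point clouds or CloudCompare's optimised algorithm:

```python
import numpy as np

def cloud_to_cloud_distance(test_cloud, reference_cloud):
    """For each point of the test cloud, the Euclidean distance to its
    nearest neighbour in the reference (ground-truth) cloud."""
    diff = test_cloud[:, None, :] - reference_cloud[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

# toy example: a slightly noisy copy of a random reference cloud
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 1.0, size=(500, 3))        # metres
test = reference + rng.normal(0.0, 0.002, size=reference.shape)
deviation = cloud_to_cloud_distance(test, reference)    # per-point error
```

In practice a k-d tree (e.g. `scipy.spatial.cKDTree`) replaces the quadratic brute-force search for clouds with millions of points.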
A&M. Hot liquid waste treatment building (TAN616). Camera facing northeast. ...
A&M. Hot liquid waste treatment building (TAN-616). Camera facing northeast. South wall with oblique views of west sides of structure. Photographer: Ron Paarmann. Date: September 22, 1997. INEEL negative no. HD-20-1-2 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Why are faces denser in the visual experiences of younger than older infants?
Jayaraman, Swapnaa; Fausey, Caitlin M.; Smith, Linda B.
2017-01-01
Recent evidence from studies using head cameras suggests that the frequency of faces directly in front of infants declines over the first year and a half of life, a result that has implications for the development of and evolutionary constraints on face processing. Two experiments tested two opposing hypotheses about this observed age-related decline in the frequency of faces in infant views. By the People-input hypothesis, there are more faces in view for younger infants because people are more often physically in front of younger than older infants. This hypothesis predicts that not just faces but views of other body parts will decline with age. By the Face-input hypothesis, the decline is strictly about faces, not people or other body parts in general. Two experiments, one using a time-sampling method (84 infants, 3 to 24 months in age) and the other analyzing head camera images (36 infants, 1 to 24 months), provide strong support for the Face-input hypothesis. The results suggest developmental constraints on the environment that ensure faces are prevalent early in development. PMID:28026190
Late afternoon view of the interior of the westernmost wall ...
Late afternoon view of the interior of the westernmost wall section to be removed; camera facing north. (Note: lowered camera position significantly to minimize background distractions including the porta-john, building, and telephone pole) - Beaufort National Cemetery, Wall, 1601 Boundary Street, Beaufort, Beaufort County, SC
IET. Aerial view of project, 95 percent complete. Camera facing ...
IET. Aerial view of project, 95 percent complete. Camera facing east. Left to right: stack, duct, mobile test cell building (TAN-624), four-rail track, dolly. Retaining wall between mobile test building and shielded control building (TAN-620) just beyond. North of control building are tank building (TAN-627) and fuel-transfer pump building (TAN-625). Guard house at upper right along exclusion fence. Construction vehicles and temporary warehouse in view near guard house. Date: June 6, 1955. INEEL negative no. 55-1462 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
ETR COMPLEX. CAMERA FACING SOUTH. FROM BOTTOM OF VIEW TO ...
ETR COMPLEX. CAMERA FACING SOUTH. FROM BOTTOM OF VIEW TO TOP: MTR, MTR SERVICE BUILDING, ETR CRITICAL FACILITY, ETR CONTROL BUILDING (ATTACHED TO ETR), ETR BUILDING (HIGH-BAY), COMPRESSOR BUILDING (ATTACHED AT LEFT OF ETR), HEAT EXCHANGER BUILDING (JUST BEYOND COMPRESSOR BUILDING), COOLING TOWER PUMP HOUSE, COOLING TOWER. OTHER BUILDINGS ARE CONTRACTORS' CONSTRUCTION BUILDINGS. INL NEGATIVE NO. 56-4105. Unknown Photographer, ca. 1956 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
LOFT. Interior view of entry (TAN624) rollup door. Camera is ...
LOFT. Interior view of entry (TAN-624) rollup door. Camera is inside entry building facing south. Rollup door was a modification of the original ANP door arrangement. Date: March 2004. INEEL negative no. HD-39-5-2 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Recent advances in multiview distributed video coding
NASA Astrophysics Data System (ADS)
Dufaux, Frederic; Ouaret, Mourad; Ebrahimi, Touradj
2007-04-01
We consider dense networks of surveillance cameras capturing overlapped images of the same scene from different viewing directions, such a scenario being referred to as multi-view. Data compression is paramount in such a system due to the large amount of captured data. In this paper, we propose a Multi-view Distributed Video Coding approach. It allows for low complexity / low power consumption at the encoder side, and the exploitation of inter-view correlation without communications among the cameras. We introduce a combination of temporal intra-view side information and homography inter-view side information. Simulation results show both the improvement of the side information, as well as a significant gain in terms of coding efficiency.
ADM. Change House (TAN606) as completed. Camera facing northerly. Note ...
ADM. Change House (TAN-606) as completed. Camera facing northerly. Note proximity to shielding berm. Part of hot shop (A&M Building, TAN-607) at left of view beyond berm. Date: October 29, 1954. INEEL negative no. 12705 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
LPT. Low power test control building (TAN641) interior. Camera facing ...
LPT. Low power test control building (TAN-641) interior. Camera facing northeast at what remains of control room console. Cut in wall at right of view shows west wall of northern test cell. INEEL negative no. HD-40-4-4 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Morning view, brick post detail; view also shows dimensional wallconstruction ...
Morning view, brick post detail; view also shows dimensional wall-construction detail. North wall, with the camera facing northwest. - Beaufort National Cemetery, Wall, 1601 Boundary Street, Beaufort, Beaufort County, SC
Morning view, contextual view showing the road and gate to ...
Morning view, contextual view showing the road and gate to be widened; view taken from the statue area with the camera facing north. - Beaufort National Cemetery, Wall, 1601 Boundary Street, Beaufort, Beaufort County, SC
Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi
2014-12-08
Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often relies on faces and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware and the unconstrained image-capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem, which arises from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in combining multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma that occurs during ensemble construction. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for the ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
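The paper's knapsack step is described only at a high level. As a generic illustration, a standard 0-1 knapsack solved by dynamic programming can select the subset of base classifiers whose summed validation accuracy ("value") is maximised under an integer budget ("weight") constraint; the value/weight mapping and all names below are assumptions, not the authors' tailored algorithm:

```python
def knapsack_select(values, weights, capacity):
    """Standard 0-1 knapsack via dynamic programming.
    values[i]  -- e.g. validation accuracy of base classifier i
    weights[i] -- e.g. an integer-scaled redundancy cost
    Returns (best total value, sorted indices of chosen classifiers)."""
    n = len(values)
    dp = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                      # skip item i-1
            if weights[i - 1] <= c:
                cand = dp[i - 1][c - weights[i - 1]] + values[i - 1]
                if cand > dp[i][c]:                      # take item i-1
                    dp[i][c] = cand
    chosen, c = [], capacity                             # backtrack
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return dp[n][capacity], sorted(chosen)

best, chosen = knapsack_select([0.6, 0.9, 0.7], [2, 3, 2], capacity=4)
```

With these toy numbers the budget of 4 forces a trade-off: classifiers 0 and 2 together (value 1.3) beat the single strongest classifier (value 0.9).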
DOT National Transportation Integrated Search
2004-10-01
The parking assistance system evaluated consisted of four outward facing cameras whose images could be presented on a monitor on the center console. The images presented varied in the location of the virtual eye point of the camera (the height above ...
PBF Reactor Building (PER620). Aerial view of early construction. Camera ...
PBF Reactor Building (PER-620). Aerial view of early construction. Camera facing northwest. Excavation and concrete placement in two basements are underway. Note exposed lava rock. Photographer: Farmer. Date: March 22, 1965. INEEL negative no. 65-2219 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
26. VIEW OF METAL SHED OVER SHIELDING TANK WITH CAMERA ...
26. VIEW OF METAL SHED OVER SHIELDING TANK WITH CAMERA FACING SOUTHWEST. SHOWS OPEN SIDE OF SHED ROOF, HERCULON SHEET, AND HAND-OPERATED CRANE. TAKEN IN 1983. INEL PHOTO NUMBER 83-476-2-9, TAKEN IN 1983. PHOTOGRAPHER NOT NAMED. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID
44. ARAIII Fuel oil tank ARA710. Camera facing west. Perimeter ...
44. ARA-III Fuel oil tank ARA-710. Camera facing west. Perimeter fence at left side of view. Gable-roofed building beyond tank on right is ARA-622. Gable-roofed building beyond tank on left is ARA-610. Ineel photo no. 3-16. - Idaho National Engineering Laboratory, Army Reactors Experimental Area, Scoville, Butte County, ID
MTR WING A, TRA604. SOUTH SIDE. CAMERA FACING NORTH. THIS ...
MTR WING A, TRA-604. SOUTH SIDE. CAMERA FACING NORTH. THIS VIEW TYPIFIES TENDENCY FOR EXPANSIONS TO TAKE THE FORM OF PROJECTIONS AND INFILL USING AVAILABLE YARD SPACES. INL NEGATIVE NO. HD47-44-3. Mike Crane, Photographer, 4/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Morning view, contextual view showing unpaved corridor down the westernmost ...
Morning view, contextual view showing unpaved corridor down the westernmost lane where the wall section (E) will be removed; camera facing north-northwest. - Beaufort National Cemetery, Wall, 1601 Boundary Street, Beaufort, Beaufort County, SC
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity of focusing its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected in the image-plane reference system is translated into coordinates on the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The novelty and strength of the work reside in the cooperative multi-sensor approach, in the high-resolution long-distance tracking, and in the automatic collection of biometric data such as a person's face clip for recognition purposes.
Multi-Angle Snowflake Camera Instrument Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuefer, Martin; Bailey, J.
2016-07-01
The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.
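Deriving fall speed from successive emitter triggers is a simple distance-over-time calculation. A minimal sketch, with a hypothetical vertical separation between the upper and lower trigger planes (the real MASC geometry is specified in the handbook):

```python
def fall_speed(trigger_separation_m, t_upper_s, t_lower_s):
    """Fall speed in m/s from the time between the upper- and
    lower-emitter triggers, given their vertical separation.
    The separation value used below is illustrative only."""
    dt = t_lower_s - t_upper_s
    if dt <= 0:
        raise ValueError("lower trigger must follow upper trigger")
    return trigger_separation_m / dt

# a flake crossing a 32 mm baseline in 16 ms falls at 2 m/s
v = fall_speed(0.032, t_upper_s=0.000, t_lower_s=0.016)
```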
Validation of Viewing Reports: Exploration of a Photographic Method.
ERIC Educational Resources Information Center
Fletcher, James E.; Chen, Charles Chao-Ping
A time lapse camera loaded with Super 8 film was employed to photographically record the area in front of a conventional television receiver in selected homes. The camera took one picture each minute for three days, including in the same frame the face of the television receiver. Family members kept a conventional viewing diary of their viewing…
NASA Astrophysics Data System (ADS)
Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling
2018-06-01
Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or narrow overlapping FOVs in many applications, which poses a huge challenge to global calibration. This paper presents a global calibration method for multiple cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras' coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
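The final step, converting every camera's coordinate system into the reference frame, is a chain of rigid-body transforms in homogeneous coordinates. A minimal sketch with made-up rotation and translation values, not the paper's calibration data:

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation and a translation into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def Rz(a):
    """Rotation about the z-axis by angle a (radians)."""
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

# Suppose calibration yielded T_01 (camera 1 -> camera 0) and
# T_12 (camera 2 -> camera 1); chaining them expresses camera 2
# in the reference (camera 0) frame. Values are illustrative.
T_01 = to_homogeneous(Rz(np.pi / 2), [1.0, 0.0, 0.0])
T_12 = to_homogeneous(Rz(-np.pi / 2), [0.0, 2.0, 0.0])
T_02 = T_01 @ T_12                       # camera 2 -> reference frame

origin_cam2 = np.array([0.0, 0.0, 0.0, 1.0])
origin_in_ref = T_02 @ origin_cam2       # camera 2's origin, reference frame
```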
ETR HEAT EXCHANGER BUILDING, TRA644. EAST SIDE. CAMERA FACING WEST. ...
ETR HEAT EXCHANGER BUILDING, TRA-644. EAST SIDE. CAMERA FACING WEST. NOTE COURSE OF PIPE FROM GROUND AND FOLLOWING ROOF OF BUILDING. MTR BUILDING IN BACKGROUND AT RIGHT EDGE OF VIEW. INL NEGATIVE NO. HD46-36-3. Mike Crane, Photographer, 4/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
A&M. Hot liquid waste treatment building (TAN616). Camera facing southwest. ...
A&M. Hot liquid waste treatment building (TAN-616). Camera facing southwest. Oblique view of east and north walls. Note three corrugated pipes at lower left indicating location of underground hot waste storage tanks. Photographer: Ron Paarmann. Date: September 22, 1997. INEEL negative no. HD-20-1-4 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
PBF (PER620) interior of Reactor Room. Camera facing south from ...
PBF (PER-620) interior of Reactor Room. Camera facing south from stairway platform in southwest corner (similar to platform in view at left). Reactor was beneath water in circular tank. Fuel was stored in the canal north of it. Platform and apparatus at right is reactor bridge with control rod mechanisms and actuators. The entire apparatus swung over the reactor and pool during operations. Personnel in view are involved with decontamination and preparation of facility for demolition. Note rails near ceiling for crane; motor for rollup door at upper center of view. Date: March 2004. INEEL negative no. HD-41-3-2 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
HOT CELL BUILDING, TRA632. CONTEXTUAL VIEW ALONG WALLEYE AVENUE, CAMERA ...
HOT CELL BUILDING, TRA-632. CONTEXTUAL VIEW ALONG WALLEYE AVENUE, CAMERA FACING EASTERLY. HOT CELL BUILDING IS AT CENTER LEFT OF VIEW; THE LOW-BAY PROJECTION WITH LADDER IS THE TEST TRAIN ASSEMBLY FACILITY, ADDED IN 1968. MTR BUILDING IS IN LEFT OF VIEW. HIGH-BAY BUILDING AT RIGHT IS THE ENGINEERING TEST REACTOR BUILDING, TRA-642. INL NEGATIVE NO. HD46-32-1. Mike Crane, Photographer, 4/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
REACTOR SERVICE BUILDING, TRA635, CONTEXTUAL VIEW DURING CONSTRUCTION. CAMERA IS ...
REACTOR SERVICE BUILDING, TRA-635, CONTEXTUAL VIEW DURING CONSTRUCTION. CAMERA IS ATOP MTR BUILDING AND LOOKING SOUTHERLY. FOUNDATION AND DRAINS ARE UNDER CONSTRUCTION. THE BUILDING WILL BUTT AGAINST CHARGING FACE OF PLUG STORAGE BUILDING. HOT CELL BUILDING, TRA-632, IS UNDER CONSTRUCTION AT TOP CENTER OF VIEW. INL NEGATIVE NO. 8518. Unknown Photographer, 8/25/1953 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
ETR CRITICAL FACILITY, TRA654. CONTEXTUAL VIEW. CAMERA ON ROOF OF ...
ETR CRITICAL FACILITY, TRA-654. CONTEXTUAL VIEW. CAMERA ON ROOF OF MTR BUILDING AND FACING SOUTH. ETR AND ITS COOLANT BUILDING AT UPPER PART OF VIEW. ETR COOLING TOWER NEAR TOP EDGE OF VIEW. EXCAVATION AT CENTER IS FOR ETR CF. CENTER OF WHICH WILL CONTAIN POOL FOR REACTOR. NOTE CHOPPER TUBE PROCEEDING FROM MTR IN LOWER LEFT OF VIEW, DIAGONAL TOWARD LEFT. INL NEGATIVE NO. 56-4227. Jack L. Anderson, Photographer, 12/18/1956 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Power Burst Facility (PBF), PER620, contextual and oblique view. Camera ...
Power Burst Facility (PBF), PER-620, contextual and oblique view. Camera facing northwest. South and east facade. The 1980 west-wing expansion is left of center bay. Concrete structure at right is PER-730. Date: March 2004. INEEL negative no. HD-41-2-3 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
Fly-through viewpoint video system for multi-view soccer movie using viewpoint interpolation
NASA Astrophysics Data System (ADS)
Inamoto, Naho; Saito, Hideo
2003-06-01
This paper presents a novel method for virtual view generation that allows viewers to fly through a real soccer scene. A soccer match is captured by multiple cameras at a stadium, and images of arbitrary viewpoints are synthesized by view-interpolation of the two real camera images nearest the given viewpoint. In the proposed method, cameras do not need to be strongly calibrated; epipolar geometry between the cameras is sufficient for the view-interpolation. Therefore, it can easily be applied to a dynamic event even in a large space, because the effort required for camera calibration is reduced. A soccer scene is classified into several regions and virtual view images are generated based on the epipolar geometry in each region. Superimposition of the images completes virtual views for the whole soccer scene. An application for fly-through observation of a soccer match is introduced as well as the algorithm of the view-synthesis and experimental results.
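At its simplest, viewpoint interpolation blends the positions of matched features between the two nearest camera images. The sketch below linearly interpolates correspondences as a simplified stand-in for the paper's epipolar, region-wise method; the coordinates are illustrative:

```python
import numpy as np

def interpolate_view(points_a, points_b, alpha):
    """Linearly interpolate matched feature positions between two views;
    alpha in [0, 1] moves the virtual viewpoint from camera A (alpha=0)
    toward camera B (alpha=1). Correspondences are assumed given, e.g.
    via epipolar-geometry-guided matching as in the paper."""
    return (1.0 - alpha) * points_a + alpha * points_b

# two matched feature points seen in views A and B (pixel coordinates)
view_a = np.array([[100.0, 50.0], [200.0, 80.0]])
view_b = np.array([[120.0, 60.0], [240.0, 100.0]])
midway = interpolate_view(view_a, view_b, 0.5)   # halfway viewpoint
```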
Morning view, contextual view of the exterior west side of ...
Morning view, contextual view of the exterior west side of the north wall along the unpaved road; camera facing west, positioned in road approximately 8 posts west of the gate. - Beaufort National Cemetery, Wall, 1601 Boundary Street, Beaufort, Beaufort County, SC
Contextual view showing northeastern eucalyptus windbreak and portion of citrus ...
Contextual view showing northeastern eucalyptus windbreak and portion of citrus orchard. Camera facing 118° east-southeast. - Goerlitz House, 9893 Highland Avenue, Rancho Cucamonga, San Bernardino County, CA
Automated face detection for occurrence and occupancy estimation in chimpanzees.
Crunchant, Anne-Sophie; Egerer, Monika; Loos, Alexander; Burghardt, Tilo; Zuberbühler, Klaus; Corogenes, Katherine; Leinert, Vera; Kulik, Lars; Kühl, Hjalmar S
2017-03-01
Surveying endangered species is necessary to evaluate conservation effectiveness. Camera trapping and biometric computer vision are recent technological advances. They have impacted the methods applicable to field surveys, and these methods have gained significant momentum over the last decade. Yet, most researchers inspect footage manually and few studies have used automated semantic processing of video trap data from the field. The particular aim of this study is to evaluate methods that incorporate automated face detection technology as an aid to estimate site use of two chimpanzee communities based on camera trapping. As a comparative baseline we employ traditional manual inspection of footage. Our analysis focuses specifically on the basic parameter of occurrence, where we assess the performance and practical value of chimpanzee face detection software. We found that the semi-automated data processing required only 2-4% of the time compared to the purely manual analysis. This is a non-negligible increase in efficiency that is critical when assessing the feasibility of camera trap occupancy surveys. Our evaluations suggest that our methodology estimates the proportion of sites used relatively reliably. Chimpanzees are mostly detected when they are present and when videos are filmed in high resolution: the highest recall rate was 77%, for a false alarm rate of 2.8% for videos containing only chimpanzee frontal face views. Certainly, our study is only a first step for transferring face detection software from the lab into field application. Our results are promising and indicate that the current limitation of detecting chimpanzees in camera trap footage due to lack of suitable face views can be easily overcome at the level of field data collection, that is, by the combined placement of multiple high-resolution cameras facing reverse directions.
This will make it possible to routinely conduct chimpanzee occupancy surveys based on camera trapping and semi-automated processing of footage. Using semi-automated ape face detection technology to process camera trap footage requires only 2-4% of the time needed for manual analysis and allows site use by chimpanzees to be estimated relatively reliably. © 2017 Wiley Periodicals, Inc.
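The recall and false-alarm figures quoted above come from a standard confusion-matrix tally. A minimal sketch, with hypothetical counts chosen only to reproduce the quoted 77% and 2.8% rates:

```python
def recall(true_positives, false_negatives):
    # fraction of videos containing chimpanzees that were correctly flagged
    return true_positives / (true_positives + false_negatives)

def false_alarm_rate(false_positives, true_negatives):
    # fraction of chimpanzee-free videos that were wrongly flagged
    return false_positives / (false_positives + true_negatives)

# illustrative tallies consistent with the reported rates
r = recall(77, 23)               # 77 detected out of 100 present
fa = false_alarm_rate(28, 972)   # 28 false alarms out of 1000 absent
```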
A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras
NASA Astrophysics Data System (ADS)
Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.
2006-05-01
A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian's face in terms of six basic emotions and the neutral state. Face and facial feature detection (eyes, nasal root, nose and mouth) is first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimensions of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest-neighbor classifier with a cosine-distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
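The expression classifier described above is a nearest-neighbor rule under cosine similarity. A minimal sketch, with hypothetical labels and toy feature vectors standing in for the Gabor-derived features:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_expression(query, gallery):
    """Return the label of the gallery vector most similar to the query.
    gallery: dict mapping expression label -> feature vector (hypothetical)."""
    return max(gallery, key=lambda label: cosine_similarity(query, gallery[label]))

# Toy usage with made-up 3-D features (real Gabor vectors are much longer):
gallery = {"happy": [1.0, 0.0, 0.0], "sad": [0.0, 1.0, 0.0]}
print(classify_expression([0.9, 0.1, 0.0], gallery))  # happy
```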
Nuclear medicine imaging system
Bennett, Gerald W.; Brill, A. Bertrand; Bizais, Yves J.; Rowe, R. Wanda; Zubal, I. George
1986-01-07
A nuclear medicine imaging system having two large field-of-view scintillation cameras mounted on a rotatable gantry and movable diametrically toward or away from each other is disclosed. In addition, each camera may be rotated about an axis perpendicular to the diameter of the gantry. The movement of the cameras allows the system to be used for a variety of studies, including positron annihilation and conventional single photon emission, as well as static orthogonal dual multi-pinhole tomography. In orthogonal dual multi-pinhole tomography, each camera is fitted with a seven-pinhole collimator to provide seven views from slightly different perspectives. By using two cameras at an angle to each other, improved sensitivity and depth resolution are achieved. The computer system and interface acquire and store a broad range of information in list mode, including patient physiological data, energy data over the full range detected by the cameras, and the camera position. The list mode acquisition permits the study of attenuation as a result of Compton scatter, as well as studies involving the isolation and correlation of energy with a range of physiological conditions.
LOFT. Interior view of entry to reactor building, TAN650. Camera ...
LOFT. Interior view of entry to reactor building, TAN-650. Camera is inside entry (TAN-624) and facing north. At far end of domed chamber are penetrations in wall for electrical and other connections. Reactor and other equipment has been removed. Date: March 2004. INEEL negative no. HD-39-5-1 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
LOFT complex in 1975 awaits renewed mission. Aerial view. Camera ...
LOFT complex in 1975 awaits renewed mission. Aerial view. Camera facing southwesterly. Left to right: stack, entry building (TAN-624), door shroud, duct shroud and filter hatches, dome (painted white), pre-amp building, equipment and piping building, shielded control room (TAN-630), airplane hangar (TAN-629). Date: 1975. INEEL negative no. 75-3690 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Contextual view of building, with building #11 in right foreground. ...
Contextual view of building, with building #11 in right foreground. Camera facing east - Naval Supply Center, Broadway Complex, Administration Storehouse, 911 West Broadway, San Diego, San Diego County, CA
14. VIEW OF MST, FACING SOUTHEAST, AND LAUNCH PAD TAKEN ...
14. VIEW OF MST, FACING SOUTHEAST, AND LAUNCH PAD TAKEN FROM NORTHEAST PHOTO TOWER WITH WINDOW OPEN. FEATURES LEFT TO RIGHT: SOUTH TELEVISION CAMERA TOWER, SOUTHWEST PHOTO TOWER, LAUNCHER, UMBILICAL MAST, MST, AND OXIDIZER APRON. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Pad 3 East, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
Face recognition system for set-top box-based intelligent TV.
Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Park, Kang Ryoung
2014-11-18
Despite the prevalence of smart TVs, many consumers continue to use conventional TVs with supplementary set-top boxes (STBs) because of the high cost of smart TVs. However, because the processing power of an STB is quite low, the smart TV functionalities that can be implemented in an STB are very limited. For this reason, little research has been conducted on face recognition for conventional TVs with supplementary STBs, even though many such studies have been conducted for smart TVs. In terms of camera sensors, previous face recognition systems have used high-resolution cameras, cameras with high-magnification zoom lenses, or camera systems with panning and tilting devices that can perform face recognition from various positions. However, these cameras and devices cannot be used in intelligent TV environments because of limitations of size and cost, and only small, low-cost web-cameras can be used. The resulting face recognition performance is degraded because of the limited resolution and quality of the images. We therefore propose a new face recognition system for intelligent TVs that overcomes the limitations of a low-resource set-top box and low-cost web-cameras. We implement the face recognition system as a software algorithm that does not require special devices or cameras.
Our research has the following four novelties: first, candidate regions of a viewer's face are detected in an image captured by a camera connected to the STB, using low-cost background subtraction and face color filtering; second, the detected face candidate regions are transmitted to a server with high processing power, which detects the face regions accurately; third, in-plane rotations of the face regions are compensated based on similarities between the left and right half sub-regions of the face regions; fourth, various poses of the viewer's face region are identified using five templates obtained during the initial user registration stage and multi-level local binary pattern matching. Experimental results indicate that the recall, precision, and genuine acceptance rate were about 95.7%, 96.2%, and 90.2%, respectively.
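The abstract names multi-level local binary pattern matching but does not spell out the operator; the sketch below implements only the basic single-scale, 8-neighbour LBP that such schemes build on, as a rough illustration:

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern, radius 1 (textbook variant,
    not the paper's multi-level scheme). img: 2-D grayscale array."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Neighbour offsets, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                # Set the bit if the neighbour is at least as bright as the center.
                if img[i + di, j + dj] >= img[i, j]:
                    code |= 1 << bit
            out[i - 1, j - 1] = code
    return out
```

Matching then typically compares histograms of these codes between a face region and each stored template.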
Parting Moon Shots from NASAs GRAIL Mission
2013-01-10
Video of the moon taken by the NASA GRAIL mission's MoonKam (Moon Knowledge Acquired by Middle School Students) camera aboard the Ebb spacecraft on Dec. 14, 2012. Features forward-facing and rear-facing views.
Contextual view showing drainage culvert in foreground boarding east side ...
Contextual view showing drainage culvert in foreground boarding east side of knoll with eucalyptus windbreak. Camera facing 278° southwest. - Goerlitz House, 9893 Highland Avenue, Rancho Cucamonga, San Bernardino County, CA
A detailed comparison of single-camera light-field PIV and tomographic PIV
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.
2018-03-01
This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, which extensively examine the differences between the two techniques by varying key parameters such as the pixel to microlens ratio (PMR), the light-field camera to Tomo-camera pixel ratio (LTPR), the particle seeding density and the number of tomographic cameras. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires an overall greater number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with a single-camera LF-PIV system and a four-camera Tomo-PIV system. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.
MTR BUILDING, TRA603. SOUTHEAST CORNER, EAST SIDE FACING TOWARD RIGHT ...
MTR BUILDING, TRA-603. SOUTHEAST CORNER, EAST SIDE FACING TOWARD RIGHT OF VIEW. CAMERA FACING NORTHWEST. LIGHT-COLORED PROJECTION AT LEFT IS ENGINEERING SERVICES BUILDING, TRA-635. SMALL CONCRETE BLOCK BUILDING AT CENTER OF VIEW IS FAST CHOPPER DETECTOR HOUSE, TRA-665. INL NEGATIVE NO. HD46-43-3. Mike Crane, Photographer, 4/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
ETR COMPLEX. CAMERA FACING EAST. FROM LEFT TO RIGHT: ETRCRITICAL ...
ETR COMPLEX. CAMERA FACING EAST. FROM LEFT TO RIGHT: ETR-CRITICAL FACILITY BUILDING, ETR CONTROL BUILDING (ATTACHED TO HIGH-BAY ETR), ETR, ONE-STORY SECTION OF ETR BUILDING, ELECTRICAL BUILDING, COOLING TOWER PUMP HOUSE, COOLING TOWER. COMPRESSOR AND HEAT EXCHANGER BUILDING ARE PARTLY IN VIEW ABOVE ETR. DARK-COLORED DUCTS PROCEED FROM GROUND CONNECTION TO ETR WASTE GAS STACK. OTHER STACK IS MTR STACK WITH FAN HOUSE IN FRONT OF IT. RECTANGULAR STRUCTURE NEAR TOP OF VIEW IS SETTLING BASIN. INL NEGATIVE NO. 56-4102. Unknown Photographer, ca. 1956 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
ETR, TRA642. ETR COMPLEX NEARLY COMPLETE. CAMERA FACES NORTHWEST, PROBABLY ...
ETR, TRA-642. ETR COMPLEX NEARLY COMPLETE. CAMERA FACES NORTHWEST, PROBABLY FROM TOP DECK OF COOLING TOWER. SHADOW IS CAST BY COOLING TOWER UNITS OFF LEFT OF VIEW. HIGH-BAY REACTOR BUILDING IS SURROUNDED BY ITS ATTACHED SERVICES: ELECTRICAL (TRA-648), HEAT EXCHANGER (TRA-644 WITH U-SHAPED YARD), AND COMPRESSOR (TRA-643). THE CONTROL BUILDING (TRA-647) ON THE NORTH SIDE IS HIDDEN FROM VIEW. AT UPPER RIGHT IS MTR BUILDING, TRA-603. INL NEGATIVE NO. 56-3798. Jack L. Anderson, Photographer, 11/26/1956 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Morning view, contextual view showing the role of the brick ...
Morning view, contextual view showing the role of the brick walls along the boundary of the cemetery; interior view taken from midway down the paved west road with the camera facing west to capture the morning light on the west wall. - Beaufort National Cemetery, Wall, 1601 Boundary Street, Beaufort, Beaufort County, SC
ERIC Educational Resources Information Center
Edmunds, Sarah R.; Rozga, Agata; Li, Yin; Karp, Elizabeth A.; Ibanez, Lisa V.; Rehg, James M.; Stone, Wendy L.
2017-01-01
Children with autism spectrum disorder (ASD) show reduced gaze to social partners. Eye contact during live interactions is often measured using stationary cameras that capture various views of the child, but determining a child's precise gaze target within another's face is nearly impossible. This study compared eye gaze coding derived from…
IET. Aerial view of snaptran destructive experiment in 1964. Camera ...
IET. Aerial view of SNAPTRAN destructive experiment in 1964. Camera facing north. Test cell building (TAN-624) is positioned away from coupling station. Weather tower in right foreground. Divided duct just beyond coupling station. Air intake structure on south side of shielded control room. Experiment is on dolly at coupling station. Date: 1964. INEEL negative no. 64-1736 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Late afternoon view of the interior of the westcentral wall ...
Late afternoon view of the interior of the west-central wall section to be removed; camera facing north. Gravestones in the foreground. - Beaufort National Cemetery, Wall, 1601 Boundary Street, Beaufort, Beaufort County, SC
Interior view showing split levels with buildings 87 windows in ...
Interior view showing split levels with buildings 87 windows in distance; camera facing west. - Mare Island Naval Shipyard, Mechanics Shop, Waterfront Avenue, west side between A Street & Third Street, Vallejo, Solano County, CA
Interior view of typical room on second floor, west side; ...
Interior view of typical room on second floor, west side; camera facing north. - Mare Island Naval Shipyard, WAVES Officers Quarters, Cedar Avenue, west side between Tisdale Avenue & Eighth Street, Vallejo, Solano County, CA
Overview in two parts: Right view showing orchard path on ...
Overview in two parts: Right view showing orchard path on left, eucalyptus windbreak bordering knoll on right. Camera facing 278° west. - Goerlitz House, 9893 Highland Avenue, Rancho Cucamonga, San Bernardino County, CA
View of Chapel Park, showing bomb shelters at right foreground, ...
View of Chapel Park, showing bomb shelters at right foreground, from building 746 parking lot across Walnut Avenue; camera facing north. - Mare Island Naval Shipyard, East of Nave Drive, Vallejo, Solano County, CA
Contextual view of building 505 showing west elevation from marsh; ...
Contextual view of building 505 showing west elevation from marsh; camera facing east. - Mare Island Naval Shipyard, Defense Electronics Equipment Operating Center, I Street, terminus west of Cedar Avenue, Vallejo, Solano County, CA
View of steel warehouses (building 710 second in on right); ...
View of steel warehouses (building 710 second in on right); camera facing south. - Naval Supply Annex Stockton, Steel Warehouse Type, Between James & Humphreys Drives south of Embarcadero, Stockton, San Joaquin County, CA
View of steel warehouses (building 710 second in on left); ...
View of steel warehouses (building 710 second in on left); camera facing west. - Naval Supply Annex Stockton, Steel Warehouse Type, Between James & Humphreys Drives south of Embarcadero, Stockton, San Joaquin County, CA
Interior view of main entry on south elevation, showing railroad ...
Interior view of main entry on south elevation, showing railroad tracks; camera facing south. - Mare Island Naval Shipyard, Boiler Shop, Waterfront Avenue, west side between A Street & Third Street, Vallejo, Solano County, CA
Interior view of main entry on south elevation, showing railroad ...
Interior view of main entry on south elevation, showing railroad tracks; camera facing south. - Mare Island Naval Shipyard, Machine Shop, Waterfront Avenue, west side between A Street & Third Street, Vallejo, Solano County, CA
3. VIEW OF ARVFS BUNKER TAKEN FROM APPROXIMATELY 150 FEET ...
3. VIEW OF ARVFS BUNKER TAKEN FROM APPROXIMATELY 150 FEET EAST OF BUNKER DOOR. CAMERA FACING WEST. VIEW SHOWS EARTH MOUND COVERING CONTROL BUNKER AND REMAINS OF CABLE CHASE. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID
Have a Nice Spring! MOC Revisits "Happy Face" Crater
2005-05-16
Smile! Spring has sprung in the Martian southern hemisphere. With it comes the annual retreat of the winter polar frost cap. This view of "Happy Face Crater" (officially named Galle Crater) shows patches of white water ice frost in and around the crater's south-facing slopes. Slopes that face south retain frost longer than north-facing slopes because they do not receive as much sunlight in early spring. This picture is a composite of images taken by the red and blue wide angle cameras of the Mars Global Surveyor Mars Orbiter Camera (MOC). The wide angle cameras were designed to monitor the changing weather, frost, and wind patterns on Mars. Galle Crater is located on the east rim of the Argyre Basin and is about 215 kilometers (134 miles) across. In this picture, illumination is from the upper left and north is up. http://photojournal.jpl.nasa.gov/catalog/PIA02325
3D Face Modeling Using the Multi-Deformable Method
Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun
2012-01-01
In this paper, we focus on the accuracy of 3D face modeling techniques that use corresponding features in multiple views, which are quite sensitive to feature extraction errors. To address this problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. Our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. Using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976
An intelligent space for mobile robot localization using a multi-camera system.
Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel
2014-08-15
This paper describes an intelligent space whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Since the cameras are used for localization, they must be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.
Morning view of the exterior of the westernmost wall section ...
Morning view of the exterior of the westernmost wall section to be removed; camera facing south. Unpaved road in foreground; tree canopy in background. - Beaufort National Cemetery, Wall, 1601 Boundary Street, Beaufort, Beaufort County, SC
INTERIOR VIEW OF SECOND STORY SPACE LOOKING TOWARD SECOND FLOOR ...
INTERIOR VIEW OF SECOND STORY SPACE LOOKING TOWARD SECOND FLOOR DOORS; CAMERA FACING NORTH - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA
INTERIOR VIEW OF SECOND STORY SPACE, NORTH END OF BUILDING; ...
INTERIOR VIEW OF SECOND STORY SPACE, NORTH END OF BUILDING; CAMERA FACING SOUTHEAST. - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA
CONTEXTUAL VIEW OF BUILDING 231 SHOWING WEST AND SOUTH ELEVATIONS; ...
CONTEXTUAL VIEW OF BUILDING 231 SHOWING WEST AND SOUTH ELEVATIONS; CAMERA FACING NORTHEAST. - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA
CONTEXTUAL VIEW OF BUILDING 231 SHOWING EAST AND NORTH ELEVATIONS; ...
CONTEXTUAL VIEW OF BUILDING 231 SHOWING EAST AND NORTH ELEVATIONS; CAMERA FACING SOUTHWEST. - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA
Contextual view of summer kitchen, showing blacksmith shop downhill at ...
Contextual view of summer kitchen, showing blacksmith shop downhill at right and cottage at center (between the trees); camera facing northeast - Lemmon-Anderson-Hixson Ranch, Summer Kitchen, 11220 North Virginia Street, Reno, Washoe County, NV
Contextual view of the rear of building 926 from the ...
Contextual view of the rear of building 926 from the hillside; camera facing east. - Mare Island Naval Shipyard, Wilderman Hall, Johnson Lane, north side adjacent to (south of) Hospital Complex, Vallejo, Solano County, CA
Interior view of typical ward on second floor, south wing; ...
Interior view of typical ward on second floor, south wing; camera facing northwest. - Mare Island Naval Shipyard, Hospital Headquarters, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
Interior view of first floor lobby with detail of columns; ...
Interior view of first floor lobby with detail of columns; camera facing north. - Mare Island Naval Shipyard, Hospital Headquarters, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
Contextual view of Goerlitz Property, showing eucalyptus trees along west ...
Contextual view of Goerlitz Property, showing eucalyptus trees along west side of driveway; parking lot and utility pole in foreground. Camera facing 38° northeast - Goerlitz House, 9893 Highland Avenue, Rancho Cucamonga, San Bernardino County, CA
View of main terrace retaining wall with mature tree on ...
View of main terrace retaining wall with mature tree on left center, camera facing southeast - Naval Training Station, Senior Officers' Quarters District, Naval Station Treasure Island, Yerba Buena Island, San Francisco, San Francisco County, CA
Contextual view of building 505 Cedar avenue, showing south and ...
Contextual view of building 505 Cedar avenue, showing south and east elevations; camera facing northwest. - Mare Island Naval Shipyard, Defense Electronics Equipment Operating Center, I Street, terminus west of Cedar Avenue, Vallejo, Solano County, CA
View of steel warehouses at Gilmore Avenue (building 710 second ...
View of steel warehouses at Gilmore Avenue (building 710 second in on left); camera facing east. - Naval Supply Annex Stockton, Steel Warehouse Type, Between James & Humphreys Drives south of Embarcadero, Stockton, San Joaquin County, CA
View of steel warehouses on Ellsberg Drive, building 710 full ...
View of steel warehouses on Ellsberg Drive, building 710 full building at center; camera facing southeast. - Naval Supply Annex Stockton, Steel Warehouse Type, Between James & Humphreys Drives south of Embarcadero, Stockton, San Joaquin County, CA
View of steel warehouses (from left: building 807, 808, 809, ...
View of steel warehouses (from left: building 807, 808, 809, 810, 811); camera facing east. - Naval Supply Annex Stockton, Steel Warehouse Type, Between James & Humphreys Drives south of Embarcadero, Stockton, San Joaquin County, CA
Imaging Techniques for Dense 3D reconstruction of Swimming Aquatic Life using Multi-view Stereo
NASA Astrophysics Data System (ADS)
Daily, David; Kiser, Jillian; McQueen, Sarah
2016-11-01
Understanding how various species of fish swim is an important step toward uncovering how they propel themselves through the water. Previous methods have focused on profile capture or sparse manual tracking of 3D feature points. This research uses an array of 30 cameras to automatically track hundreds of points on a fish in 3D as it swims, using multi-view stereo. Blacktip sharks, stingrays, puffer fish, turtles and more were imaged in collaboration with the National Aquarium in Baltimore, Maryland using the multi-view stereo technique. The processes for data collection, camera synchronization, feature point extraction, 3D reconstruction, 3D alignment, biological considerations, and lessons learned will be presented. Preliminary results of the 3D reconstructions will be shown, and future research into mathematically characterizing various bio-locomotive maneuvers will be discussed.
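The core of any multi-view stereo pipeline of this kind is triangulating a tracked feature point from two or more calibrated views. A minimal linear (DLT) two-view sketch under assumed camera matrices, not the authors' 30-camera implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: normalized image points."""
    # Each observed coordinate contributes one homogeneous linear constraint.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The point is the null vector of A, i.e. the last right singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Toy setup: identity camera at the origin, second camera shifted along x.
P1 = np.array([[1., 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])
P2 = np.array([[1., 0, 0, -1], [0, 1, 0, 0], [0, 0, 1, 0]])
print(triangulate(P1, P2, (0.125, -0.05), (-0.125, -0.05)))  # ~[0.5 -0.2 4.]
```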
Interior view of west main room in original tworoom portion. ...
Interior view of west main room in original two-room portion. Note muslin ceiling temporarily tacked up by the HABS team to afford clearer view. Camera facing west. - Warner Ranch, Ranch House, San Felipe Road (State Highway S2), Warner Springs, San Diego County, CA
Contextual view of Warner's Ranch. Third of three sequential views ...
Contextual view of Warner's Ranch. Third of three sequential views (from west to east) of the buildings in relation to the surrounding geography. Note approximate location of Overland Trail crossing left to right. Camera facing northeast - Warner Ranch, Ranch House, San Felipe Road (State Highway S2), Warner Springs, San Diego County, CA
EAST FACE OF REACTOR BASE. COMING TOWARD CAMERA IS EXCAVATION ...
EAST FACE OF REACTOR BASE. COMING TOWARD CAMERA IS EXCAVATION FOR MTR CANAL. CAISSONS FLANK EACH SIDE. COUNTERFORT (SUPPORT PERPENDICULAR TO WHAT WILL BE THE LONG WALL OF THE CANAL) RESTS ATOP LEFT CAISSON. IN LOWER PART OF VIEW, DRILLERS PREPARE TRENCHES FOR SUPPORT BEAMS THAT WILL LIE BENEATH CANAL FLOOR. INL NEGATIVE NO. 739. Unknown Photographer, 10/6/1950 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
A 3D camera for improved facial recognition
NASA Astrophysics Data System (ADS)
Lewin, Andrew; Orchard, David A.; Scott, Andrew M.; Walton, Nicholas A.; Austin, Jim
2004-12-01
We describe a camera capable of recording 3D images of objects. It does this by projecting thousands of spots onto an object and then measuring the range to each spot by determining the parallax from a single frame. A second frame can be captured to record a conventional image, which can then be projected onto the surface mesh to form a rendered skin. The camera is able to locate the images of the spots to a precision of better than one tenth of a pixel, and from this it can determine range to an accuracy of better than 1 mm at 1 meter. The data can be recorded as a set of two images and reconstructed by forming a 'wire mesh' of range points and morphing the 2D image over this structure. The camera can be used to record images of faces and reconstruct the shape of the face, which allows the face to be viewed from various angles. This allows images to be inspected more critically for the purpose of identifying individuals. Multiple images can be stitched together to create full panoramic images of head-sized objects that can be viewed from any direction. The system is being tested with a graph matching system capable of fast and accurate shape comparisons for facial recognition. It can also be used with "models" of heads and faces to provide a means of obtaining biometric data.
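The range-from-parallax step can be illustrated with the standard stereo relation Z = fB/d; the camera's actual spot-projector geometry and calibration are not given in the abstract, so the parameters below are assumptions:

```python
def range_from_parallax(focal_px, baseline_m, disparity_px):
    """Range via the standard stereo relation Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: projector-camera baseline in
    meters; disparity_px: measured spot parallax in pixels (all hypothetical)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 1000 px focal length, 10 cm baseline, 100 px parallax.
print(range_from_parallax(1000, 0.1, 100))  # 1.0 (meters)
```

Sub-pixel spot localization, as claimed in the abstract, tightens the disparity estimate and hence the range accuracy.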
ADM. Aerial view of administration area. Camera facing westerly. From ...
ADM. Aerial view of administration area. Camera facing westerly. From left to right in foreground: Substation (TAN-605), Warehouse (TAN-628), Gate House (TAN-601), Administration Building (TAN-602). Left to right middle ground: Service Building (TAN-603), Warehouse (later known as Maintenance Shop or Craft Shop, TAN-604), Water Well Pump Houses, Fuel Tanks and Fuel Pump Houses, and Water Storage Tanks. Change House (TAN-606) on near side of berm. Large building beyond berm is A&M. Building, TAN-607. Railroad tracks beyond lead from (unseen) turntable to the IET. Date: June 6, 1955. INEEL negative no. 13201 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Morning view of the exterior of the gate and white ...
Morning view of the exterior of the gate and white posts to be reworked/widened; camera facing south looking into the cemetery toward the statue. - Beaufort National Cemetery, Wall, 1601 Boundary Street, Beaufort, Beaufort County, SC
INTERIOR VIEW OF FIRST FLOOR SPACE AT NORTH END, LOOKING ...
INTERIOR VIEW OF FIRST FLOOR SPACE AT NORTH END, LOOKING AT WEST WALL; CAMERA FACING NORTHWEST. - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA
Contextual view of building, with building #12 in right background ...
Contextual view of building, with building #12 in right background and building #11 in right foreground. Camera facing east-southeast - Naval Supply Center, Broadway Complex, Administration Storehouse, 911 West Broadway, San Diego, San Diego County, CA
Contextual view of building H70 showing southeast and northeast elevations; ...
Contextual view of building H70 showing southeast and northeast elevations; camera facing west. - Mare Island Naval Shipyard, Hospital Ward, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
Interior view of office with fireplace on second floor off south lobby; camera facing southeast. - Mare Island Naval Shipyard, Hospital Headquarters, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
Interior detail view showing worn threshold in doorway between kitchen and west room in north addition. Camera facing west. - Warner Ranch, Ranch House, San Felipe Road (State Highway S2), Warner Springs, San Diego County, CA
ETR, TRA-642, CAMERA IS BELOW, BUT NEAR THE CEILING OF THE GROUND FLOOR, AND LOOKS DOWN TOWARD THE CONSOLE FLOOR. CAMERA FACES WESTERLY. THE REACTOR PIT IS IN THE CENTER OF THE VIEW. BEYOND IT TO THE LEFT IS THE SOUTH SIDE OF THE WORKING CANAL. IN THE FOREGROUND ON THE RIGHT IS THE SHIELDING FOR THE PROCESS WATER TUNNEL AND PIPING. SPIRAL STAIRCASE AT LEFT OF VIEW. INL NEGATIVE NO. 56-2237. Jack L. Anderson, Photographer, 7/6/1956 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
2010-04-30
NASA's Mars Exploration Rover Opportunity used its panoramic camera (Pancam) to capture this approximately true-color view of the rim of Endeavour crater, the rover's destination in a multi-year traverse across the sandy Martian landscape.
SPERT-I Terminal Building (PER-604) with view into interior. Storage tanks and equipment in view. Camera facing west. Photographer: R.G. Larsen. Date: May 20, 1955. INEEL negative no. 55-1291 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
4. INTERIOR VIEW OF CLUB HOUSE REFRIGERATION UNIT, SHOWING COOLING COILS AND CORK-LINED ROOM. CAMERA IS BETWEEN SEVEN AND EIGHT FEET ABOVE FLOOR LEVEL, FACING SOUTHEAST. - Swan Falls Village, Clubhouse 011, Snake River, Kuna, Ada County, ID
Contextual view showing H1 on left and H270 in background; camera facing north. - Mare Island Naval Shipyard, Hospital Headquarters, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
View looking across to building H1 from third floor porch over entrance; camera facing south. - Mare Island Naval Shipyard, Hospital Headquarters, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
Contextual view showing building 926 north wing at left and hospital historic district at right; camera facing north. - Mare Island Naval Shipyard, Wilderman Hall, Johnson Lane, north side adjacent to (south of) Hospital Complex, Vallejo, Solano County, CA
Contextual view looking down clubhouse drive. Showing west elevation of H1 on right; camera facing east. - Mare Island Naval Shipyard, Hospital Headquarters, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
104. ARA-III. Interior view of room 110 in ARA-607 used as data acquisition control room. Camera facing northeast. INEEL photo no. 81-103. - Idaho National Engineering Laboratory, Army Reactors Experimental Area, Scoville, Butte County, ID
A Gradient Optimization Approach to Adaptive Multi-Robot Control
2009-09-01
implemented for deploying a group of three flying robots with downward facing cameras to monitor an environment on the ground. Thirdly, the multi-robot...theoretically proven, and implemented on multi-robot platforms. Thesis Supervisor: Daniela Rus Title: Professor of Electrical Engineering and Computer...often nonlinear, and they are coupled through a network which changes over time. Thirdly, implementing multi-robot controllers requires maintaining mul
PBF Reactor Building (PER-620). In sub-pile room, camera faces southeast and looks up toward bottom of reactor vessel. Upper assembly in center of view is in-pile tube as it connects to vessel. Lower lateral constraints and rotating control cable are in position. Other connections have been bolted together. Note light bulbs for scale. Photographer: John Capek. Date: August 21, 1970. INEEL negative no. 70-3494 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
DEMINERALIZER BUILDING, TRA-608. CAMERA FACES EAST ALONG SOUTH WALL. INSTRUMENT PANEL BOARD IS IN RIGHT HALF OF VIEW, WITH FOUR PUMPS BEYOND. SMALLER PUMPS FILL DEMINERALIZED WATER TANK ON SOUTH SIDE OF BUILDING. CARD IN LOWER RIGHT WAS INSERTED BY INL PHOTOGRAPHER TO COVER AN OBSOLETE SECURITY RESTRICTION PRINTED ON ORIGINAL NEGATIVE. INL NEGATIVE NO. 3997A. Unknown Photographer, 12/28/1951 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
HOT CELL BUILDING, TRA-632. CONTEXTUAL AERIAL VIEW OF HOT CELL BUILDING, IN VIEW AT LEFT, AS YET WITHOUT ROOF. PLUG STORAGE BUILDING LIES BETWEEN IT AND THE SOUTH SIDE OF THE MTR BUILDING AND ITS WING. NOTE CONCRETE DRIVE BETWEEN ROLL-UP DOOR IN MTR BUILDING AND CHARGING FACE OF PLUG STORAGE. REACTOR SERVICES BUILDING (TRA-635) WILL COVER THIS DRIVE AND BUTT UP TO CHARGING FACE. DOTTED LINE IS ON ORIGINAL NEGATIVE. TRA PARKING LOT IN LEFT CORNER OF THE VIEW. CAMERA FACING NORTHWESTERLY. INL NEGATIVE NO. 8274. Unknown Photographer, 7/2/1953 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
A Framework for People Re-Identification in Multi-Camera Surveillance Systems
ERIC Educational Resources Information Center
Ammar, Sirine; Zaghden, Nizar; Neji, Mahmoud
2017-01-01
People re-identification has recently been a very active research topic in computer vision. It is an important application in surveillance systems with disjoint cameras. This paper is focused on the implementation of a human re-identification system. First, the face of each detected person is divided into three parts and some soft-biometric traits are…
NASA Astrophysics Data System (ADS)
Bruegge, Carol J.; Val, Sebastian; Diner, David J.; Jovanovic, Veljko; Gray, Ellyn; Di Girolamo, Larry; Zhao, Guangyu
2014-09-01
The Multi-angle Imaging SpectroRadiometer (MISR) has successfully operated on the EOS/Terra spacecraft since 1999. It consists of nine cameras pointing from nadir to 70.5° view angle, with four spectral channels per camera. Specifications call for a radiometric uncertainty of 3% absolute and 1% relative to the other cameras. To accomplish this, MISR utilizes an on-board calibrator (OBC) to measure camera response changes. Once every two months, the two Spectralon panels are deployed to direct solar light into the cameras. Six photodiode sets measure the illumination levels, which are compared to MISR raw digital numbers, thus determining the radiometric gain coefficients used in Level 1 data processing. Although panel stability is not required, there has been little detectable change in panel reflectance, attributed to careful preflight handling techniques. The cameras themselves have degraded in radiometric response by 10% since launch, but calibration updates using the detector-based scheme have compensated for these drifts and allowed the radiance products to meet accuracy requirements. Validation using Sahara desert observations shows that there has been a drift of ~1% in the reported nadir-view radiance over a decade, common to all spectral bands.
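The on-board calibration described above amounts to regressing camera raw digital numbers against photodiode-measured radiances to recover gain coefficients. A minimal sketch of that fit, assuming a simple linear response model (the function name, variable names, and units are illustrative, not MISR's actual Level 1 code):

```python
import numpy as np

def fit_gain(dn, radiance):
    """Least-squares fit of a linear camera response model dn = gain * L + offset.

    dn       -- raw digital numbers recorded while viewing the Spectralon panel
    radiance -- concurrent radiances measured by the monitoring photodiode sets
    Returns (gain, offset); the fitted coefficients convert DN to radiance
    during downstream processing.
    """
    A = np.vstack([radiance, np.ones_like(radiance)]).T
    gain, offset = np.linalg.lstsq(A, dn, rcond=None)[0]
    return gain, offset

# Synthetic example: a detector whose gain has degraded to 90% of a nominal 2.0.
L = np.linspace(50.0, 400.0, 20)      # panel radiance samples (units illustrative)
dn = 0.9 * 2.0 * L + 5.0              # degraded gain of 1.8, small dark offset
g, b = fit_gain(dn, L)                # recovers gain ~1.8, offset ~5.0
```

Refitting periodically, as MISR does every two months, is what lets a detector-based scheme track a slowly drifting response.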
Late afternoon view of the interior of the east-central wall section to be removed; camera facing north. Stubby crape myrtle in front of wall. Metal Quonset hut in background. - Beaufort National Cemetery, Wall, 1601 Boundary Street, Beaufort, Beaufort County, SC
Contextual view of Treasure Island showing Palace of Fine and Decorative Arts (Building 3) at right, and Port of the Trade Winds in foreground, camera facing north - Golden Gate International Exposition, Treasure Island, San Francisco, San Francisco County, CA
Contextual view showing west elevations of building H81 on right and H1 in middle; camera facing northeast. - Mare Island Naval Shipyard, Hospital Headquarters, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
PBF (PER-620) interior. Detail view of actuator platform and control rod mechanism. Camera facing easterly from floor level. Reactor pool at lower left of view. Date: March 2004. INEEL negative no. HD-41-3-3 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
HOT CELL BUILDING, TRA-632, INTERIOR. CELL 3, "HEAVY" CELL. CAMERA FACES WEST TOWARD BUILDING EXIT. OBSERVATION WINDOW AT LEFT EDGE OF VIEW. INL NEGATIVE NO. HD46-28-4. Mike Crane, Photographer, 2/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Gooi, Patrick; Ahmed, Yusuf; Ahmed, Iqbal Ike K
2014-07-01
We describe the use of a microscope-mounted wide-angle point-of-view camera to record optimal hand positions in ocular surgery. The camera is mounted close to the objective lens beneath the surgeon's oculars and faces the same direction as the surgeon, providing a surgeon's view. A wide-angle lens enables viewing of both hands simultaneously and does not require repositioning the camera during the case. Proper hand positioning and instrument placement through microincisions are critical for effective and atraumatic handling of tissue within the eye. Our technique has potential in the assessment and training of optimal hand position for surgeons performing intraocular surgery. It is an innovative way to routinely record instrument and operating hand positions in ophthalmic surgery and has minimal requirements in terms of cost, personnel, and operating-room space.
Contextual view of Warner's Ranch. Second of three sequential views (from west to east) of the buildings in relation to the surrounding geography. Ranch house and trading post/barn on left. Note approximate location of Overland Trail crossing left to right. Camera facing north. - Warner Ranch, Ranch House, San Felipe Road (State Highway S2), Warner Springs, San Diego County, CA
20. VIEW OF TEST FACILITY IN 1967 WHEN EQUIPPED FOR DOSIMETER TEST BY HEALTH PHYSICISTS. CAMERA FACING EAST. INEL PHOTO NUMBER 76-2853, TAKEN MAY 16, 1967. PHOTOGRAPHER: CAPEK. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID
Contextual view of Fyffe Avenue and Boone Drive. Dispensary (Naval Medical Center Oakland and Dental Clinic San Francisco Branch Clinics, Building no. 417) is shown at left. Camera facing northwest. - Naval Supply Annex Stockton, Rough & Ready Island, Stockton, San Joaquin County, CA
Contextual view of Fyffe Avenue and Boone Drive. Dispensary (Naval Medical Center Oakland and Dental Clinic San Francisco Branch Clinics, building no. 417) is shown at the center. Camera facing northeast. - Naval Supply Annex Stockton, Rough & Ready Island, Stockton, San Joaquin County, CA
SPERT-I, Instrument Cell Building (PER-606). Oblique view of north and east facades. Camera facing southwest. Date: August 2003. INEEL negative no. HD-35-4-1 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
6. CONSTRUCTION PROGRESS VIEW (EXTERIOR) OF TANK, CABLE CHASE, AND MOUNDED BUNKER. CONSTRUCTION WAS 99 PERCENT COMPLETE. CAMERA IS FACING WEST. INEL PHOTO NUMBER 65-5435, TAKEN OCTOBER 20, 1965. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID
Contextual view showing building H70 at left with building H81 at right in background; camera facing northeast. - Mare Island Naval Shipyard, Hospital Ward, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
PBF Cooling Tower detail. Camera facing southwest into north side of Tower. Five horizontal layers of splash bars constitute fill decks, which will break up falling water into droplets, promoting evaporative cooling. Louvered faces, through which air enters tower, are on east and west sides. Louvers have been installed. Support framework for one of two venturi-shaped fan stacks (or "vents") is in center top. Orifices in hot basins (not in view) will distribute water over fill. Photographer: Kirsh. Date: May 15, 1969. INEEL negative no. 69-3032 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
NASA Astrophysics Data System (ADS)
Lee, Seungwon; Park, Ilkwon; Kim, Manbae; Byun, Hyeran
2006-10-01
As digital broadcasting technologies have rapidly progressed, users' expectations for realistic and interactive broadcasting services have also increased. As one such service, 3D multi-view broadcasting has received much attention recently. In general, all the view sequences acquired at the server are transmitted to the client, and the user can then select some or all of the views according to display capabilities. However, this kind of system requires high processing power at both the server and the client, posing a difficulty for practical applications. To overcome this problem, a relatively simple method is to transmit only the two view sequences requested by the client in order to deliver a stereoscopic video. In such a system, effective communication between the server and the client is one of the important aspects. In this paper, we propose an efficient multi-view system that transmits two view sequences and their depth maps according to the user's request. The view selection process is integrated into MPEG-21 DIA (Digital Item Adaptation), so that our system is compatible with the MPEG-21 multimedia framework. DIA is generally composed of resource adaptation and descriptor adaptation. One merit is that the SVA (stereoscopic video adaptation) descriptors defined in the DIA standard are used to deliver users' preferences and device capabilities. Furthermore, multi-view descriptions related to the multi-view camera and system are newly introduced. The syntax of the descriptions and their elements is represented in an XML (eXtensible Markup Language) schema. If the client requests an adapted descriptor (e.g., view numbers) from the server, the server sends the associated view sequences. Finally, we present a method that can reduce the visual discomfort a user may experience while viewing stereoscopic video. This phenomenon occurs when the view changes, and also when a stereoscopic image produces excessive disparity caused by a large baseline between two cameras.
To address the former, IVR (intermediate view reconstruction) is employed for a smooth transition between two stereoscopic view sequences; for the latter, a disparity adjustment scheme is used. Finally, from the implementation of a testbed and from experiments, we show the viability and possibilities of our system.
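The request-driven view selection the abstract describes (the client's adapted descriptor names view numbers; the server streams only the matching sequences and their depth maps) can be sketched as follows. The data layout and file names are hypothetical, not taken from the paper:

```python
def select_views(library, requested):
    """Return only the view sequences (with depth maps) named in the client's
    adapted descriptor, instead of streaming every captured view.

    library   -- maps view number -> (video sequence, depth-map sequence)
    requested -- view numbers from the client's descriptor (e.g. a stereo pair)
    """
    return {v: library[v] for v in requested}

# Hypothetical eight-view capture; the client asks for views 2 and 3.
library = {v: (f"view{v}.yuv", f"depth{v}.yuv") for v in range(8)}
stereo_pair = select_views(library, requested=[2, 3])
```

Sending two of eight sequences rather than all of them is what relieves the server- and client-side processing load mentioned above.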
Atmospheric Science Data Center
2014-05-15
... the Multi-angle Imaging SpectroRadiometer (MISR). On the left, a natural-color view acquired by MISR's vertical-viewing (nadir) camera ... Gunnison River at the city of Grand Junction. The striking "L" shaped feature in the lower image center is a sandstone monocline known as ...
Virtual viewpoint synthesis in multi-view video system
NASA Astrophysics Data System (ADS)
Li, Fang; Yang, Shiqiang
2005-07-01
In this paper, we present a virtual viewpoint video synthesis algorithm that satisfies three aims: low computational cost, real-time interpolation, and acceptable video quality. In contrast with previous technologies, this method obtains an incomplete 3D structure from neighboring video sources instead of computing full 3D information from all video sources, so the computation is greatly reduced. We therefore demonstrate our interactive multi-view video synthesis algorithm on a personal computer. Furthermore, by choosing feature points to build correspondences between frames captured by neighboring cameras, we do not require camera calibration. Finally, our method can be used when the angle between neighboring cameras is 25-30 degrees, much larger than in common computer vision experiments. In this way, our method can be applied to many applications such as live sports, video conferencing, etc.
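The feature-point correspondence idea above can be illustrated with a minimal sketch: matched points in two neighboring views are linearly interpolated to place them at a virtual intermediate viewpoint. This is only the control-point step of such a system (a real implementation would warp image pixels guided by these correspondences), and the function name is an assumption:

```python
import numpy as np

def interpolate_points(pts_a, pts_b, alpha):
    """Place matched feature points at a virtual viewpoint between two
    neighboring cameras: alpha = 0 gives camera A, alpha = 1 gives camera B.
    """
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    return (1.0 - alpha) * pts_a + alpha * pts_b

# Two matched features seen in neighboring views (pixel coordinates).
left = [[100.0, 120.0], [240.0, 80.0]]
right = [[140.0, 118.0], [280.0, 84.0]]
mid = interpolate_points(left, right, 0.5)   # virtual camera halfway between
```

Because only point matches between neighboring frames are needed, no camera calibration or full 3D reconstruction is involved, which is the source of the method's low computational cost.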
A Vision-Based System for Intelligent Monitoring: Human Behaviour Analysis and Privacy by Context
Chaaraoui, Alexandros Andre; Padilla-López, José Ramón; Ferrández-Pastor, Francisco Javier; Nieto-Hidalgo, Mario; Flórez-Revuelta, Francisco
2014-01-01
Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people's behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services. PMID:24854209
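The weighted feature fusion scheme is described only at a high level. One common realization, shown here purely as an illustrative assumption rather than the paper's actual method, is a reliability-weighted average of per-view classification scores:

```python
import numpy as np

def fuse_views(scores, weights):
    """Combine per-view action-classification scores with per-view weights.

    scores  -- (n_views, n_classes) confidence scores, one row per camera
    weights -- per-view reliability weights (normalized internally)
    Returns the index of the predicted action class.
    """
    scores = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    fused = (w / w.sum()) @ scores      # weighted average across views
    return int(np.argmax(fused))

# Three cameras scoring four candidate actions; the frontal view is trusted most.
scores = [[0.7, 0.1, 0.1, 0.1],
          [0.2, 0.5, 0.2, 0.1],
          [0.3, 0.3, 0.2, 0.2]]
pred = fuse_views(scores, weights=[0.5, 0.3, 0.2])
```

Weighting views by reliability lets a partially occluded camera contribute without dominating the decision, which is what makes multi-view setups robust for in-home monitoring.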
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee
2015-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affects the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera, causing a keyhole effect. The keyhole effect reduces situation awareness, which may manifest in navigation issues such as a higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot.
Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects of inclusion and exclusion of the robot chassis along with superimposing a simple arrow overlay onto the video feed of operator task performance during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only and combined (egocentric plus exocentric camera) view. Camera view parameters that are found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.
2. EXTERIOR VIEW OF DOWNSTREAM SIDE OF COTTAGE 191 TAKEN FROM ROOF OF GARAGE 393. CAMERA FACING SOUTHEAST. COTTAGE 181 AND CHILDREN'S PLAY AREA VISIBLE ON EITHER SIDE OF ROOF. GRAPE ARBOR IN FOREGROUND. - Swan Falls Village, Cottage 191, Snake River, Kuna, Ada County, ID
Contextual view of the Hall of Transportation from Yerba Buena Island, showing Palace of Fine and Decorative Arts (Building 3) at far right, camera facing northwest - Golden Gate International Exposition, Hall of Transportation, 440 California Avenue, Treasure Island, San Francisco, San Francisco County, CA
Mastcam Telephoto of a Martian Dune Downwind Face
2016-01-04
This view combines multiple images from the telephoto-lens camera of the Mast Camera (Mastcam) on NASA's Curiosity Mars rover to reveal fine details of the downwind face of "Namib Dune." The site is part of the dark-sand "Bagnold Dunes" field along the northwestern flank of Mount Sharp. Images taken from orbit have shown that dunes in the Bagnold field move as much as about 3 feet (1 meter) per Earth year. Sand on this face of Namib Dune has cascaded down a slope of about 26 to 28 degrees. The top of the face is about 13 to 17 feet (4 to 5 meters) above the rocky ground at its base. http://photojournal.jpl.nasa.gov/catalog/PIA20283
HEAT EXCHANGER BUILDING, TRA-644. NORTHEAST CORNER. CAMERA IS ON PIKE STREET FACING SOUTHWEST. ATTACHED STRUCTURE AT RIGHT OF VIEW IS ETR COMPRESSOR BUILDING, TRA-643. INL NEGATIVE NO. HD46-36-4. Mike Crane, Photographer, 4/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
High-immersion three-dimensional display of the numerical computer model
NASA Astrophysics Data System (ADS)
Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu
2013-08-01
High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as the design and construction of buildings and houses, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, military use, and so on. However, most technologies present 3D content in front of screens that are parallel with the walls, and the sense of immersion is decreased. To get a correct multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the common focus plane, and the cameras' optical axes should be offset toward the center of the common focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, which are ideal pinhole cameras, to display a 3D model in a computer system. We can use virtual cameras to simulate the shooting method for multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the viewer's eye position in the real world. When the observer stands inside the circumcircle of the 3D ground display, offset perspective-projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective-projection virtual cameras and orthogonal-projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras. The near-clip-plane parameter setting is the main point of the first method, while the rotation angle of the virtual cameras is the main point of the second. In order to validate the results, we use Direct3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for horizontal viewing, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with real objects in the real world.
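The offset perspective-projection virtual camera mentioned above corresponds to an asymmetric (off-axis) viewing frustum: the optical axis stays perpendicular to the focus plane while the frustum is shifted toward the plane's center. A minimal sketch in the standard glFrustum convention (a general OpenGL construction, not the paper's specific parameter settings):

```python
import numpy as np

def offset_perspective(left, right, bottom, top, near, far):
    """Asymmetric (off-axis) perspective projection matrix in the glFrustum
    convention; unequal left/right or bottom/top bounds shift the frustum
    sideways or vertically without rotating the camera.
    """
    m = np.zeros((4, 4))
    m[0, 0] = 2.0 * near / (right - left)
    m[1, 1] = 2.0 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)    # horizontal frustum offset
    m[1, 2] = (top + bottom) / (top - bottom)    # vertical frustum offset
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2.0 * far * near / (far - near)
    m[3, 2] = -1.0
    return m

# A frustum shifted to the right of the optical axis (asymmetric left/right).
P = offset_perspective(-0.5, 1.5, -1.0, 1.0, 1.0, 100.0)
```

The nonzero off-diagonal terms in the third column are what distinguish this from a symmetric perspective projection.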
Contextual view of Warner's Ranch. First of three sequential views (from west to east) of the buildings in relation to the surrounding geography. Ranch House on right. Note approximate locations of Overland Trail on right and San Diego cutoff branching off to left. Camera facing northwest. - Warner Ranch, Ranch House, San Felipe Road (State Highway S2), Warner Springs, San Diego County, CA
PBF Cooling Tower. View from high-bay roof of Reactor Building (PER-620). Camera faces northwest. East louvered face has been installed. Inlet pipes protrude from fan deck. Two redwood vents under construction at top. Note piping, control, and power lines at sub-grade level in trench leading to Reactor Building. Photographer: Kirsh. Date: June 6, 1969. INEEL negative no. 69-3466 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
4. CONSTRUCTION PROGRESS VIEW OF EQUIPMENT IN FRONT PART OF CONTROL BUNKER (TRANSFORMER, HYDRAULIC TANK, PUMP, MOTOR). SHOWS UNLINED CORRUGATED METAL WALL. CAMERA FACING EAST. INEL PHOTO NUMBER 65-5433, TAKEN OCTOBER 20, 1965. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID
3. VIEW OF NORTHEAST CORNER OF MST. NOTE: ENVIRONMENTAL DOOR ON THE LOWER EAST SIDE OF THE NORTH FACE IS MISSING. NORTH CAMERA TOWER IN FOREGROUND. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Pad 3 East, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
24. VIEW OF CANYON TAKEN FROM NORTH CANYON RIM AROUND 1920. CAMERA FACES SOUTH. VILLAGE IS TREE-COVERED AREA TO LEFT OF DAM AND POWERHOUSE. SUPERINTENDENT SAM GLASS'S ORCHARD IS DOWNSTREAM OF DAM ABOUT A QUARTER OF A MILE. - Swan Falls Village, Snake River, Kuna, Ada County, ID
A small field of view camera for hybrid gamma and optical imaging
NASA Astrophysics Data System (ADS)
Lees, J. E.; Bugby, S. L.; Bhatia, B. S.; Jambi, L. K.; Alqahtani, M. S.; McKnight, W. R.; Ng, A. H.; Perkins, A. C.
2014-12-01
The development of compact low profile gamma-ray detectors has allowed the production of small field of view, hand held imaging devices for use at the patient bedside and in operating theatres. The combination of an optical and a gamma camera, in a co-aligned configuration, offers high spatial resolution multi-modal imaging giving a superimposed scintigraphic and optical image. This innovative introduction of hybrid imaging offers new possibilities for assisting surgeons in localising the site of uptake in procedures such as sentinel node detection. Recent improvements to the camera system along with results of phantom and clinical imaging are reported.
High-accuracy 3D measurement system based on multi-view and structured light
NASA Astrophysics Data System (ADS)
Li, Mingyue; Weng, Dongdong; Li, Yufeng; Zhang, Longbin; Zhou, Haiyun
2013-12-01
3D surface reconstruction is one of the most important topics in Spatial Augmented Reality (SAR). Using structured light is a simple and rapid method to reconstruct objects. In order to improve the precision of 3D reconstruction, we present a high-accuracy multi-view 3D measurement system based on Gray code and phase shifting. We use a camera and a light projector that casts structured light patterns onto the objects. In this system, we use only one camera to take photos of the left and right sides of the object, respectively. In addition, we use VisualSFM to recover the relationships between the perspectives, so camera calibration can be omitted and the positions in which to place the camera are no longer limited. We also set an appropriate exposure time to make the scenes covered by Gray-code patterns more recognizable. All of the points above make the reconstruction more precise. We conducted experiments on different kinds of objects, and a large number of experimental results verify the feasibility and high accuracy of the system.
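Gray-code patterns are used in such scanners because adjacent projector columns differ in exactly one bit, so a mis-read stripe boundary shifts the decoded column by at most one. A minimal encode/decode sketch of that property (standard Gray-code arithmetic, independent of the paper's implementation):

```python
def binary_to_gray(n):
    """Encode a projector column index as Gray code: adjacent columns differ
    in exactly one bit, so a mis-read stripe boundary costs at most one column."""
    return n ^ (n >> 1)

def gray_to_binary(g):
    """Decode the black/white stripe pattern observed at a camera pixel (read
    as a Gray-code integer) back into the projector column index used for
    triangulation."""
    mask = g >> 1
    while mask:
        g ^= mask
        mask >>= 1
    return g

# Round-trip every column index of a 10-bit (1024-column) pattern sequence.
ok = all(gray_to_binary(binary_to_gray(i)) == i for i in range(1024))
```

Each bit of the column index corresponds to one projected stripe pattern; the phase-shift patterns then refine the coarse column position to sub-stripe accuracy.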
PBF Reactor Building (PER620) under construction. Aerial view with camera ...
PBF Reactor Building (PER-620) under construction. Aerial view with camera facing northeast. Steel framework is exposed for west wing and high bay. Concrete block siding on east wing. Railroad crane set up on west side. Note trenches proceeding from front of building. Left trench is for secondary coolant and will lead to Cooling Tower. Shorter trench will contain cables leading to control area. Photographer: Larry Page. Date: March 22, 1967. INEEL negative no. 67-5025 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
NGEE Arctic Webcam Photographs, Barrow Environmental Observatory, Barrow, Alaska
Bob Busey; Larry Hinzman
2012-04-01
The NGEE Arctic Webcam (PTZ Camera) captures two views of seasonal transitions from its generally south-facing position on a tower located at the Barrow Environmental Observatory near Barrow, Alaska. Images are captured every 30 minutes. Historical images are available for download. The camera is operated by the U.S. DOE sponsored Next Generation Ecosystem Experiments - Arctic (NGEE Arctic) project.
A&M. Hot liquid waste treatment building (TAN616), south side. Camera ...
A&M. Hot liquid waste treatment building (TAN-616), south side. Camera facing north. Personnel door at left side of wall. Partial view of outdoor stairway to upper level platform. Note concrete construction. Photographer: Ron Paarmann. Date: September 22, 1997. INEEL negative no. HD-20-1-3 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
SPERTI contextual view of instrument cell building, PER606. South facade. ...
SPERT-I contextual view of instrument cell building, PER-606. South facade. Camera facing northwest. PBF Cooling Tower in view at right. High bay of PBF Reactor Building, PER-602, is further to right. PBF-625 at left edge of view. Date: August 2003. INEEL negative no. HD-35-3-4 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
PBF Cooling Tower. View of stairway to fan deck. Vents ...
PBF Cooling Tower. View of stairway to fan deck. Vents are made of redwood. Camera facing southwest toward north side of Cooling Tower. Siding is corrugated asbestos concrete. Photographer: Kirsh. Date: June 6, 1969. INEEL negative no. 69-3463 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
Detail view of northwest side of Signal Corps Radar (S.C.R.) ...
Detail view of northwest side of Signal Corps Radar (S.C.R.) 296 Station 5 Transmitter Building foundation, showing portion of concrete gutter drainage system and asphalt floor tiles, camera facing north - Fort Barry, Signal Corps Radar 296, Station 5, Transmitter Building Foundation, Point Bonita, Marin Headlands, Sausalito, Marin County, CA
70. VIEW OF UNIT 2 THROUGH ACCESS DOOR, LOOKING DOWN ...
70. VIEW OF UNIT 2 THROUGH ACCESS DOOR, LOOKING DOWN AT MAIN SHAFT. NOTE WELDER'S SIGNATURE IN SHADOWS IN UPPER LEFT CORNER AND PHOTOGRAPHER'S STROBE POWER CABLE IN LOWER RIGHT CORNER. ORIENTATION OF CAMERA IS FACING LEFT BANK, PERPENDICULAR TO RIVER FLOW - Swan Falls Dam, Snake River, Kuna, Ada County, ID
PBF. Oblique and contextual view of PBF Cooling Tower, PER720. ...
PBF. Oblique and contextual view of PBF Cooling Tower, PER-720. Camera facing northeast. Auxiliary Building (PER-624) abuts Cooling Tower. Demolition equipment has arrived. Date: August 2003. INEEL negative no. HD-35-11-2 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
PBF (PER620) interior. Detail view of door in north wall ...
PBF (PER-620) interior. Detail view of door in north wall of reactor bay. Camera facing north. Note tonnage weighting of hatch covers in floor. Date: May 2004. INEEL negative no. HD-41-8-2 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
9. DETAIL VIEW OF BRIDGE CRANE ON WEST SIDE OF ...
9. DETAIL VIEW OF BRIDGE CRANE ON WEST SIDE OF BUILDING. CAMERA FACING NORTHEAST. CONTAMINATED AIR FILTERS LOADED IN TRANSPORT CASKS WERE TRANSFERRED TO VEHICLES AND SENT TO RADIOACTIVE WASTE MANAGEMENT COMPLEX FOR STORAGE. INEEL PROOF NUMBER HD-17-1. - Idaho National Engineering Laboratory, Old Waste Calcining Facility, Scoville, Butte County, ID
MTR WING A, TRA604, INTERIOR. MAIN FLOOR. DETAIL VIEW INSIDE ...
MTR WING A, TRA-604, INTERIOR. MAIN FLOOR. DETAIL VIEW INSIDE LABORATORY 114. CAMERA FACING NORTH. DISPOSAL OF RADIOACTIVE MATERIALS IS UNDERWAY. INL NEGATIVE NO. HD46-12-4. Mike Crane, Photographer, 2/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
View of Signal Corps Radar (S.C.R.) 296 Station 5 Transmitter ...
View of Signal Corps Radar (S.C.R.) 296 Station 5 Transmitter Building foundation, showing Fire Control Stations (Buildings 621 and 622) and concrete stairway (top left) camera facing southwest - Fort Barry, Signal Corps Radar 296, Station 5, Transmitter Building Foundation, Point Bonita, Marin Headlands, Sausalito, Marin County, CA
FAST CHOPPER BUILDING, TRA665. CONTEXTUAL VIEW: CHOPPER BUILDING IN CENTER. ...
FAST CHOPPER BUILDING, TRA-665. CONTEXTUAL VIEW: CHOPPER BUILDING IN CENTER. MTR REACTOR SERVICES BUILDING,TRA-635, TO LEFT; MTR BUILDING TO RIGHT. CAMERA FACING WEST. INL NEGATIVE NO. HD42-1. Mike Crane, Photographer, 3/2004 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
View of structures at rear of parcel with 12' scale ...
View of structures at rear of parcel with 12' scale (in tenths). From right: edge of Round House, Pencil house, Shell House, edge of School House. Heart Shrine made from mortared car headlights at frame left. Camera facing east. - Grandma Prisbrey's Bottle Village, 4595 Cochran Street, Simi Valley, Ventura County, CA
2.5D multi-view gait recognition based on point cloud registration.
Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan
2014-03-28
This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.
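The 2D principal component analysis step mentioned above operates directly on feature matrices rather than flattened vectors. A minimal 2DPCA sketch in NumPy, for orientation only; the paper's exact feature pipeline, including the DCT stage, is not reproduced:

```python
import numpy as np

def two_d_pca(images, k):
    """2DPCA: project each HxW image onto the top-k eigenvectors of the
    image scatter (column covariance) matrix, giving an Hxk feature
    matrix per image instead of a flattened vector."""
    mean = images.mean(axis=0)
    centered = images - mean
    # G = (1/N) * sum_i (A_i - mean)^T (A_i - mean), a WxW matrix
    G = np.einsum('nhw,nhv->wv', centered, centered) / len(images)
    vals, vecs = np.linalg.eigh(G)      # eigenvalues in ascending order
    proj = vecs[:, ::-1][:, :k]         # top-k principal axes
    return images @ proj, proj
```

Working on the matrix scatter keeps the eigenproblem small (W x W rather than HW x HW), which is the usual motivation for 2DPCA in gait and face recognition.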
Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery
NASA Astrophysics Data System (ADS)
Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng
2012-10-01
In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application combines MonoSLAM (Single Camera Simultaneous Localization and Mapping) with computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy of the application are enhanced by the use of fiducial markers fixed to the skull. MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiducial markers provide a reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the AR marker is obtained by a registration procedure using the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid with solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.
Multi-Angle View of the Canary Islands
NASA Technical Reports Server (NTRS)
2000-01-01
A multi-angle view of the Canary Islands in a dust storm, 29 February 2000. At left is a true-color image taken by the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. This image was captured by the MISR camera looking at a 70.5-degree angle to the surface, ahead of the spacecraft. The middle image was taken by the MISR downward-looking (nadir) camera, and the right image is from the aftward 70.5-degree camera. The images are reproduced using the same radiometric scale, so variations in brightness, color, and contrast represent true variations in surface and atmospheric reflectance with angle. Windblown dust from the Sahara Desert is apparent in all three images, and is much brighter in the oblique views. This illustrates how MISR's oblique imaging capability makes the instrument a sensitive detector of dust and other particles in the atmosphere. Data for all channels are presented in a Space Oblique Mercator map projection to facilitate their co-registration. The images are about 400 km (250 miles) wide, with a spatial resolution of about 1.1 kilometers (1,200 yards). North is toward the top. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Get-in-the-Zone (GITZ) Transition Display Format for Changing Camera Views in Multi-UAV Operations
2008-12-01
the multi-UAV operator will switch between dynamic and static missions, each potentially involving very different scenario environments and task...another. Inspired by cinematography techniques to help audiences maintain spatial understanding of a scene across discrete film cuts, use of a
A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.
Qian, Shuo; Sheng, Yang
2011-11-01
Photogrammetry has become an effective method for determining electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study presents a novel photogrammetry system that can acquire multi-angle head images simultaneously from a single camera position. By aligning two planar mirrors at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. The elapsed time of the whole localization procedure is about 3 min, and the camera calibration computation takes about 1 min after the measurement of the calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG electrode positioning.
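Once electrodes are matched across the mirror views, their 3D positions follow from standard multi-view triangulation with calibrated projection matrices. A minimal linear (DLT) triangulation sketch for the two-view case, assuming known 3x4 projection matrices rather than the paper's specific calibration procedure:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) image coordinates of the same point in each view.
    """
    # Each view contributes two rows of the homogeneous system A X = 0
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector = homogeneous 3D point
    return X[:3] / X[3]
```

With seven views, the same construction simply stacks two rows per view into a taller matrix before the SVD, which improves robustness to localization noise.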
Numerical analysis of wavefront measurement characteristics by using plenoptic camera
NASA Astrophysics Data System (ADS)
Lv, Yang; Ma, Haotong; Zhang, Xuanzhe; Ning, Yu; Xu, Xiaojun
2016-01-01
To take advantage of a large-diameter telescope for high-resolution imaging of extended targets, it is necessary to detect and compensate the wave-front aberrations induced by atmospheric turbulence. Data recorded by plenoptic cameras can be used to extract the wave-front phases associated with the atmospheric turbulence in an astronomical observation. Tomographic wave-front recovery demands simultaneous, multi-perspective wave-front detection over a large field of view (FOV), and the plenoptic camera possesses this unique advantage. Our paper focuses on the capability of the plenoptic camera to extract the wave-front from different perspectives simultaneously. We built the corresponding theoretical model and simulation system to study the wave-front measurement characteristics of a plenoptic camera used as a wave-front sensor, and we evaluated its performance with different types of wave-front aberration corresponding to different applications. Finally, we performed multi-perspective wave-front sensing with the plenoptic camera in simulation. This study of wave-front measurement characteristics is helpful for selecting and designing the parameters of a plenoptic camera used as a multi-perspective, large-FOV wave-front sensor, which is expected to solve the problem of large-FOV wave-front detection and can be used for AO in giant telescopes.
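A common way to turn the per-perspective slope measurements of a wave-front sensor into a phase estimate is least-squares modal reconstruction. A generic sketch, not the paper's simulation code: the analytic derivative matrices of the basis modes (e.g. Zernike polynomials sampled at the subaperture centers) are assumed precomputed.

```python
import numpy as np

def reconstruct_modes(sx, sy, dzdx_basis, dzdy_basis):
    """Least-squares modal wave-front reconstruction from slopes.

    sx, sy: measured x/y slopes at M sample points, shape (M,).
    dzdx_basis, dzdy_basis: derivatives of K basis modes at the same
    points, shape (M, K). Returns the K modal coefficients.
    """
    A = np.vstack([dzdx_basis, dzdy_basis])   # (2M, K) design matrix
    b = np.concatenate([sx, sy])              # (2M,) measurements
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```

For the tomographic case, the same fit is repeated per perspective and the results combined over the turbulence layers.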
A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming
2018-06-01
This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one, called the tracking camera, is used for tracking the positions of the MBVS; the other, called the working camera, is used for the 3D reconstruction task. The MBVS has several advantages compared with a single moving camera or multi-camera networks. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the mobility of the system guarantees appropriate baselines to supply more robust point correspondences. Additionally, using one working camera avoids a drawback of multi-camera networks, where variability in camera parameters and performance can significantly affect the accuracy and robustness of feature extraction and stereo matching. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of the multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.
NASA Astrophysics Data System (ADS)
Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele; Pernechele, Claudio; Dionisio, Cesare
2017-11-01
This paper presents an innovative algorithm developed for attitude determination of a space platform. The algorithm exploits images taken from a multi-purpose panoramic camera equipped with hyper-hemispheric lens and used as star tracker. The sensor architecture is also original since state-of-the-art star trackers accurately image as many stars as possible within a narrow- or medium-size field-of-view, while the considered sensor observes an extremely large portion of the celestial sphere but its observation capabilities are limited by the features of the optical system. The proposed original approach combines algorithmic concepts, like template matching and point cloud registration, inherited from the computer vision and robotic research fields, to carry out star identification. The final aim is to provide a robust and reliable initial attitude solution (lost-in-space mode), with a satisfactory accuracy level in view of the multi-purpose functionality of the sensor and considering its limitations in terms of resolution and sensitivity. Performance evaluation is carried out within a simulation environment in which the panoramic camera operation is realistically reproduced, including perturbations in the imaged star pattern. Results show that the presented algorithm is able to estimate attitude with accuracy better than 1° with a success rate around 98% evaluated by densely covering the entire space of the parameters representing the camera pointing in the inertial space.
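A basic building block of lost-in-space star identification of the kind described is matching angular separations between observed star directions and a catalog. A deliberately brute-force sketch; the paper's template-matching and point-cloud-registration machinery is not reproduced.

```python
import numpy as np

def angular_sep(u, v):
    """Angle in radians between two unit vectors."""
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

def match_pair(obs_pair, catalog, tol=1e-3):
    """Return catalog index pairs whose angular separation matches the
    observed star pair within tol radians."""
    d_obs = angular_sep(*obs_pair)
    hits = []
    n = len(catalog)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(angular_sep(catalog[i], catalog[j]) - d_obs) < tol:
                hits.append((i, j))
    return hits
```

Real star trackers prune this search with sorted pair databases and then verify candidate matches against further stars; the wide, low-resolution hyper-hemispheric field described above makes such verification steps especially important.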
Development of the SEASIS instrument for SEDSAT
NASA Technical Reports Server (NTRS)
Maier, Mark W.
1996-01-01
Two SEASIS experiment objectives are key: take images that allow three-axis attitude determination and take multi-spectral images of the earth. During the tether mission it is also desirable to capture images of the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all its imagery taken during the tether mission until the earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs earth observation with a telephoto-lens camera. Camera video is digitized, compressed, and stored in solid-state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360-degree azimuthal field of view by a +45-degree vertical field measured from a plane normal to the lens boresight axis. It has been shown in Mr. Mark Steadham's UAH M.S. thesis that this camera can determine three-axis attitude anytime the earth and one other recognizable celestial object (for example, the sun) are in the field of view. This will be essentially all the time during tether deployment. (2) A second camera system using a telephoto lens and filter wheel. The camera is a black-and-white standard video camera. The filters are chosen to cover the visible spectral bands of remote-sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating-point Digital Signal Processors. This processor module was supplied under ARPA contract by the Space Computer Corporation to demonstrate its use in space.
Kernel-aligned multi-view canonical correlation analysis for image recognition
NASA Astrophysics Data System (ADS)
Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao
2016-09-01
Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve the problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that the correlation features learned by KAMCCA have strong discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on the datasets have manifested the effectiveness of our proposed method.
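KAMCCA builds on classical canonical correlation analysis. For orientation, here is a minimal two-view linear CCA via whitening and SVD; the kernel-alignment and multi-view extensions of the paper are not shown, and the regularization term is an assumption added for numerical stability.

```python
import numpy as np

def cca(X, Y, k, reg=1e-8):
    """Classical linear CCA: projection directions maximizing the
    correlation between views X (n, p) and Y (n, q)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxx = Xc.T @ Xc / len(X) + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / len(Y) + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / len(X)
    # Whiten each view, then SVD the whitened cross-covariance
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx)).T
    Wy = np.linalg.inv(np.linalg.cholesky(Syy)).T
    U, s, Vt = np.linalg.svd(Wx.T @ Sxy @ Wy)
    # Map singular vectors back to the original feature spaces
    return Wx @ U[:, :k], Wy @ Vt[:k].T, s[:k]
```

The singular values are the canonical correlations; kernel variants replace the raw features with (here, kernel-aligned) nonlinear representations before the same correlation-maximizing step.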
NASA Astrophysics Data System (ADS)
Pattke, Marco; Martin, Manuel; Voit, Michael
2017-05-01
Tracking people with cameras in public areas is common today. However, with an increasing number of cameras it becomes harder and harder to review the data manually. Especially in safety-critical areas, automatic image exploitation could help to solve this problem. Setting up such a system can, however, be difficult because of its complexity: sensor placement is critical to ensure that people are detected and tracked reliably. We approach this problem with a simulation framework that can simulate different camera setups in the desired environment, including animated characters. We combine this framework with our self-developed distributed and scalable system for people tracking to test its effectiveness, and we can show the results of the tracking system in real time in the simulated environment.
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong
2005-01-01
Two efficient workflows are developed for the reconstruction of a 3D full-color building model. One uses a point-wise sensing device to sample an unknown object densely and attaches color textures from a separate digital camera. The other uses an image-based approach in which the color texture is attached automatically. The point-wise sensing device reconstructs the CAD model using a modified best-view algorithm that collects the maximum number of construction faces in one view. The partial views of the point-cloud data are then glued together using a common face between two consecutive views. Typical overlapping-mesh removal and coarsening procedures are adapted to generate a unified 3D mesh shell structure. A post-processing step then combines the digital image content from a separate camera with the 3D mesh shell surfaces. An indirect uv-mapping procedure first divides the model faces into groups within which every face shares the same normal direction. The corresponding images of the faces in a group are then adjusted using the uv map as guidance. The final assembled image is then glued back to the 3D mesh to present a fully colored building model. The result is a virtual building that reflects the true dimensions and surface material conditions of a real-world campus building. The image-based modeling procedure uses a commercial photogrammetry package to reconstruct the 3D model. A novel view-planning algorithm is developed to guide the photo-taking procedure. This algorithm generates a minimum set of view angles; the set of pictures taken at these view angles guarantees that each model face shows up in at least two of the pictures and no more than three. The 3D model can then be reconstructed with a minimum amount of labor spent correlating picture pairs. The finished model is compared with the original object in both topological and dimensional aspects. All the test cases show the exact same topology and a reasonably low dimension-error ratio, again proving the applicability of the algorithm.
Contextual view of Treasure Island from Yerba Buena Island, showing ...
Contextual view of Treasure Island from Yerba Buena Island, showing Palace of Fine and Decorative Arts (Building 3), far right, Hall of Transportation (Building 2), middle, and The Administration Building (Building 1), far left, Port of Trade Winds is in foreground, camera facing northwest - Golden Gate International Exposition, Treasure Island, San Francisco, San Francisco County, CA
PBF (PER620) interior. Detail view across top of reactor tank. ...
PBF (PER-620) interior. Detail view across top of reactor tank. Camera facing northeast. Air tubing is cleanup equipment. Note projections from reactor structure above water level in tank. Date: May 2004. INEEL negative no. HD-41-5-1 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
79. ARAIII. Early construction view of GCRE reactor building (ARA608) ...
79. ARA-III. Early construction view of GCRE reactor building (ARA-608) showing deep excavation, reinforcing steel, and forms for concrete placement for reactor and other pits. Camera facing southeast. July 22, 1958. Ineel photo no. 58-3466. Photographer: Ken Mansfield. - Idaho National Engineering Laboratory, Army Reactors Experimental Area, Scoville, Butte County, ID
Detail view of southeast corner of Signal Corps Radar (S.C.R.) ...
Detail view of southeast corner of Signal Corps Radar (S.C.R.) 296 Station 5 Transmitter Building foundation, showing Signal Corps Radar (S.C.R.) 296 Station 5 Tower concrete pier in background, camera facing north - Fort Barry, Signal Corps Radar 296, Station 5, Transmitter Building Foundation, Point Bonita, Marin Headlands, Sausalito, Marin County, CA
Contextual view of Point Bonita Ridge, showing Bonita Ridge access ...
Contextual view of Point Bonita Ridge, showing Bonita Ridge access road retaining wall and location of Signal Corps Radar (S.C.R.) 296 Station 5 Transmitter Building foundation (see stake at center left), camera facing north - Fort Barry, Signal Corps Radar 296, Station 5, Transmitter Building Foundation, Point Bonita, Marin Headlands, Sausalito, Marin County, CA
Height and Motion of the Chikurachki Eruption Plume
NASA Technical Reports Server (NTRS)
2003-01-01
The height and motion of the ash and gas plume from the April 22, 2003, eruption of the Chikurachki volcano are portrayed in these views from the Multi-angle Imaging SpectroRadiometer (MISR). Situated within the northern portion of the volcanically active Kuril Island group, the Chikurachki volcano is an active stratovolcano on Russia's Paramushir Island (just south of the Kamchatka Peninsula). In the upper panel of the still image pair, this scene is displayed as a natural-color view from MISR's vertical-viewing (nadir) camera. The white and brownish-grey plume streaks several hundred kilometers from the eastern edge of Paramushir Island toward the southeast. The darker areas of the plume typically indicate volcanic ash, while the white portions of the plume indicate entrained water droplets and ice. According to the Kamchatkan Volcanic Eruptions Response Team (KVERT), the temperature of the plume near the volcano on April 22 was -12 °C. The lower panel shows heights derived from automated stereoscopic processing of MISR's multi-angle imagery, in which the plume is determined to reach heights of about 2.5 kilometers above sea level. Heights for clouds above and below the eruption plume were also retrieved, including the high-altitude cirrus clouds in the lower left (orange pixels). The distinctive patterns of these features provide sufficient spatial contrast for MISR's stereo height retrieval to perform automated feature matching between the images acquired at different view angles. Places where clouds or other factors precluded a height retrieval are shown in dark gray. The multi-angle 'fly-over' animation (below) allows the motion of the plume and of the surrounding clouds to be directly observed.
The frames of the animation consist of data acquired by the 70-degree, 60-degree, 46-degree and 26-degree forward-viewing cameras in sequence, followed by the images from the nadir camera and each of the four backward-viewing cameras, ending with the view from the 70-degree backward camera. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17776. The panels cover an area of approximately 296 kilometers x 216 kilometers (still images) and 185 kilometers x 154 kilometers (animation), and utilize data from blocks 50 to 51 within World Reference System-2 path 100. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Evidence for view-invariant face recognition units in unfamiliar face learning.
Etchells, David B; Brooks, Joseph L; Johnston, Robert A
2017-05-01
Many models of face recognition incorporate the idea of a face recognition unit (FRU), an abstracted representation formed from each experience of a face which aids recognition under novel viewing conditions. Some previous studies have failed to find evidence of this FRU representation. Here, we report three experiments which investigated this theoretical construct by modifying the face learning procedure from that in previous work. During learning, one or two views of previously unfamiliar faces were shown to participants in a serial matching task. Later, participants attempted to recognize both seen and novel views of the learned faces (recognition phase). Experiment 1 tested participants' recognition of a novel view, a day after learning. Experiment 2 was identical, but tested participants on the same day as learning. Experiment 3 repeated Experiment 1, but tested participants on a novel view that was outside the rotation of those views learned. Results revealed a significant advantage, across all experiments, for recognizing a novel view when two views had been learned compared to single view learning. The observed view invariance supports the notion that an FRU representation is established during multi-view face learning under particular learning conditions.
2.5D Multi-View Gait Recognition Based on Point Cloud Registration
Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan
2014-01-01
This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM. PMID:24686727
3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand
NASA Astrophysics Data System (ADS)
Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.
2015-08-01
In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter, so it is available to public users and convenient for accessing narrow areas. The acquired images cover various sculptures and architectures in Wat-Pho, a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the pipeline is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching, and point cloud processing. For this initial work, small heritage objects less than 3 meters in height are considered in the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained with the MICMAC open-source software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, and .obj. To obtain efficient 3D models, post-processing techniques such as noise reduction, surface simplification, and reconstruction are required for the final results. The reconstructed 3D models can be provided for public access via websites, DVDs, and printed materials. The highly accurate 3D models can also be used as reference data for heritage objects that must be restored due to deterioration over their lifetime, natural disasters, etc.
PBF Reactor Building (PER620). Camera is in cab of electricpowered ...
PBF Reactor Building (PER-620). Camera is in cab of electric-powered rail crane and facing east. Reactor pit and storage canal have been shaped. Floors for wings on east and west side are above and below reactor in view. Photographer: Larry Page. Date: August 23, 1967. INEEL negative no. 67-4403 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
Intermediate view synthesis for eye-gazing
NASA Astrophysics Data System (ADS)
Baek, Eu-Ttuem; Ho, Yo-Sung
2015-01-01
Nonverbal communication, also known as body language, is an important form of communication. Nonverbal behaviors such as posture, eye contact, and gestures send strong messages. In nonverbal communication, eye contact is one of the most important forms an individual can use. However, eye contact is often lost when we use a video conferencing system: the disparity between the locations of the eyes and the camera gets in the way of eye contact, and the lack of eye gaze can give an unapproachable and unpleasant impression. In this paper, we propose an eye-gaze correction method for video conferencing. We use two cameras installed at the top and the bottom of the television. The two captured images are rendered with 2D warping at a virtual position. We apply view morphing to the detected face, then synthesize the face and the warped image. Experimental results verify that the proposed system is effective in generating natural gaze-corrected images.
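The final synthesis step, blending two warped views into one virtual eye-level view, can be illustrated with a simple linear cross-dissolve. This is only a sketch of the idea; the paper's actual view morphing operates on the detected face region, and the uniform arrays below are stand-ins for already-warped, aligned camera images.

```python
import numpy as np

def morph(top_view, bottom_view, alpha):
    # Linear cross-dissolve between two warped/aligned views;
    # alpha = 0 gives the top-camera view, alpha = 1 the bottom-camera view.
    return (1 - alpha) * top_view + alpha * bottom_view

top = np.full((4, 4), 100.0)          # stand-in for the warped top-camera image
bottom = np.full((4, 4), 200.0)       # stand-in for the warped bottom-camera image
eye_level = morph(top, bottom, 0.5)   # virtual viewpoint midway between cameras
print(eye_level[0, 0])                # 150.0
```

Choosing alpha so the virtual camera lands at the on-screen eye position is what restores the appearance of eye contact.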
Optimal design and critical analysis of a high resolution video plenoptic demonstrator
NASA Astrophysics Data System (ADS)
Drazic, Valter; Sacré, Jean-Jacques; Bertrand, Jérôme; Schubert, Arno; Blondé, Etienne
2011-03-01
A plenoptic camera is a natural multi-view acquisition device, also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single-lens, single-sensor architecture has two downsides: limited resolution and limited depth sensitivity. As a first step, in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered five video views of 820×410 pixels. The main limitation of our prototype is view cross-talk due to optical aberrations, which reduces the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern, together with analysis programs that investigate the view mapping and the amount of parallax cross-talk on the sensor on a per-pixel basis. These developments enabled us to adjust the lenslet array with sub-micrometer precision and to mark the pixels of the sensor where the views do not register properly.
Object tracking using multiple camera video streams
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford
2010-05-01
Two synchronized cameras are utilized to obtain independent video streams that detect moving objects from two different viewing angles, with the video frames directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, which could leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features of the moving object across simultaneous frames, and perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in the direction of motion; fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that the scenes from at least two cameras in proximity overlap. An object can then be tracked continuously over long distances or across multiple cameras, applicable, for example, in wireless sensor networks for surveillance or navigation.
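A minimal sketch of the detection-and-fusion idea, assuming simple frame differencing as the per-camera detector (the abstract does not specify one); the 5×5 frames are synthetic, and OR-ing the two cameras' masks shows how a detection survives occlusion in one view.

```python
import numpy as np

def moving_mask(prev_frame, frame, thresh=25):
    # Flag pixels whose intensity changed by more than `thresh` between frames.
    return np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh

f0 = np.zeros((5, 5), dtype=np.uint8)
f1 = f0.copy()
f1[2, 2] = 200                          # object appears in camera A's view
mask_a = moving_mask(f0, f1)
mask_b = np.zeros_like(mask_a)          # same object fully occluded in camera B
combined = mask_a | mask_b              # fusing views preserves the detection
print(int(combined.sum()))              # 1
```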
NASA Astrophysics Data System (ADS)
López, Leticia; Gastélum, Alfonso; Chan, Yuk Hin; Delmas, Patrice; Escorcia, Lilia; Márquez, Jorge
2014-11-01
Our goal is to obtain three-dimensional measurements of craniofacial morphology in a healthy population, using standard landmarks established by a physical-anthropology specialist and picked from computer reconstructions of the face of each subject. To do this, we designed a multi-stereo vision system that will be used to create a database of human face surfaces from a healthy population, for eventual applications in medicine, forensic sciences, and anthropology. The acquisition process obtains depth map information from three points of view, with each depth map computed from a calibrated pair of cameras. The depth maps are used to build a complete frontal triangular-surface representation of the subject's face. The triangular surface is used to locate the landmarks, and the measurements are analyzed with a MATLAB script. Classification of the subjects was done with the aid of a specialist anthropologist, who defines specific subject indices according to the lengths, areas, ratios, etc., of the different structures and the relationships among facial features. We studied a healthy population, and the indices from this population will be used to obtain representative averages that later help with the study and classification of possible pathologies.
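Once landmarks are located on the triangular surface, index computation reduces to distances and ratios between 3-D points. A sketch with hypothetical landmark coordinates; the index shown (100 × face height / bizygomatic width) is one classic anthropometric example, and the specialist's actual indices may differ.

```python
import numpy as np

# Hypothetical 3-D landmark coordinates (metres) picked on a face surface.
landmarks = {
    "nasion":   np.array([0.000, 0.00, 0.00]),
    "gnathion": np.array([0.000, -0.11, 0.01]),
    "zygion_l": np.array([-0.065, -0.04, -0.02]),
    "zygion_r": np.array([0.065, -0.04, -0.02]),
}

def dist(a, b):
    # Euclidean distance between two named landmarks.
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

face_height = dist("nasion", "gnathion")
face_width = dist("zygion_l", "zygion_r")
facial_index = 100.0 * face_height / face_width
```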
Endeavour on the Horizon False Color
2010-04-30
NASA's Mars Exploration Rover Opportunity used its panoramic camera (Pancam) to capture this false-color view of the rim of Endeavour crater, the rover's destination in a multi-year traverse across the sandy Martian landscape.
Intelligent viewing control for robotic and automation systems
NASA Astrophysics Data System (ADS)
Schenker, Paul S.; Peters, Stephen F.; Paljug, Eric D.; Kim, Won S.
1994-10-01
We present a new system for supervisory automated control of multiple remote cameras. Our primary purpose in developing this system has been to provide capability for knowledge-based, `hands-off' viewing during execution of teleoperation/telerobotic tasks. The reported technology has broader applicability to remote surveillance, telescience observation, automated manufacturing workcells, etc. We refer to this new capability as `Intelligent Viewing Control (IVC),' distinguishing it from simple programmed camera motion control. In the IVC system, camera viewing assignment, sequencing, positioning, panning, and parameter adjustment (zoom, focus, aperture, etc.) are invoked and interactively executed in real-time by a knowledge-based controller, drawing on a priori known task models and constraints, including operator preferences. This multi-camera control is integrated with a real-time, high-fidelity 3D graphics simulation, which is correctly calibrated in perspective to the actual cameras and their platform kinematics (translation/pan-tilt). Such a merged graphics-with-video design allows the system user to preview and modify the planned (`choreographed') viewing sequences. Further, during actual task execution, the system operator has available both the resulting optimized video sequence and supplementary graphics views from arbitrary perspectives. IVC, including operator-interactive designation of robot task actions, is presented to the user as a well-integrated video-graphic single-screen user interface allowing easy access to all relevant telerobot communication/command/control resources. We describe and show pictorial results of a preliminary IVC system implementation for telerobotic servicing of a satellite.
Concave Surround Optics for Rapid Multi-View Imaging
2006-11-01
...flexibility, large camera arrays are typically expensive and require significant effort to calibrate temporally, geometrically and chromatically... hard to assemble and calibrate. In this paper we present an optical system capable of rapidly moving the viewpoint around a scene. Our system... thus is amenable to capturing dynamic events, avoiding the need to construct and calibrate an array of cameras. We demonstrate the system with a high...
The ideal subject distance for passport pictures.
Verhoff, Marcel A; Witzel, Carsten; Kreutz, Kerstin; Ramsthaler, Frank
2008-07-04
In an age of global combat against terrorism, the recognition and identification of people on document images is of increasing significance. Experiments and calculations have shown that the camera-to-subject distance - not the focal length of the lens - can have a significant effect on facial proportions. Modern passport pictures should be able to function as reference images for automatic and manual picture comparisons. This requires a defined subject distance. It is completely unclear which subject distance, in the taking of passport photographs, is ideal for the recognition of the actual person. We show here that the camera-to-subject distance perceived as ideal depends on the face being photographed, even though a distance of 2 m was most frequently preferred. So far, the problem of the ideal camera-to-subject distance for faces has only been approached through technical calculations. We have, for the first time, answered this question experimentally with a double-blind experiment. Even if there is apparently no ideal camera-to-subject distance valid for every face, 2 m can be proposed as ideal for the taking of passport pictures. A first step would be to fix a camera-to-subject distance for the taking of passport pictures within the standards. From an anthropological point of view, it would be interesting to find out which facial features lead to the preference of a shorter camera-to-subject distance and which lead to the preference of a longer one.
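The geometric reason subject distance changes facial proportions follows from the pinhole model: a feature sitting closer to the camera than the main face plane is magnified relative to it by D / (D − d). A toy calculation with an assumed 10 cm depth offset (e.g. nose tip versus ear plane):

```python
def apparent_magnification(subject_dist_m, depth_offset_m=0.10):
    # Pinhole model: a feature `depth_offset_m` in front of the face plane
    # projects larger than same-size features on the plane by this factor.
    return subject_dist_m / (subject_dist_m - depth_offset_m)

close = apparent_magnification(0.5)   # 1.25: near features look enlarged
far = apparent_magnification(2.0)     # ~1.053: proportions nearly preserved
print(round(close, 3), round(far, 3))
```

This is why the effect shrinks with distance and why the preferred 2 m distance yields nearly orthographic proportions; the 10 cm offset is an illustrative assumption, not a measured value.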
Thoma, Patrizia; Soria Bauser, Denise; Suchan, Boris
2013-08-30
This article introduces the freely available Bochum Emotional Stimulus Set (BESST), which contains pictures of bodies and faces depicting either a neutral expression or one of the six basic emotions (happiness, sadness, fear, anger, disgust, and surprise), presented from two different perspectives (0° frontal view vs. camera averted by 45° to the left). The set comprises 565 frontal-view and 564 averted-view pictures of real-life bodies with masked facial expressions, and 560 frontal and 560 averted-view faces which were synthetically created using the FaceGen 3.5 Modeller. All stimuli were validated in terms of categorization accuracy and the perceived naturalness of the expression. Additionally, each facial stimulus was morphed into three age versions (20/40/60 years). The results show high recognition of the intended facial expressions, even under speeded forced-choice conditions corresponding to common experimental settings. The average naturalness ratings for the stimuli range between medium and high. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Graafland, Maurits; Bok, Kiki; Schreuder, Henk W R; Schijven, Marlies P
2014-06-01
Untrained laparoscopic camera assistants in minimally invasive surgery (MIS) may cause a suboptimal view of the operating field, thereby increasing the risk of errors. Camera navigation is often performed by the least experienced member of the operating team, such as inexperienced surgical residents, operating room nurses, and medical students. Operating room nurses and medical students are currently not included as key user groups in structured laparoscopic training programs. A new virtual reality laparoscopic camera navigation (LCN) module was specifically developed for these key user groups. This multicenter prospective cohort study assesses the face validity and construct validity of the LCN module on the Simendo virtual reality simulator. Face validity was assessed through a questionnaire on resemblance to reality and perceived usability of the instrument among experts and trainees. Construct validity was assessed by comparing scores of groups with different levels of experience on outcome parameters of speed and movement proficiency. The results show a uniform and positive evaluation of the LCN module among expert users and trainees, signifying face validity. The expert and intermediate-experience groups performed significantly better in task time and camera stability over three repetitions than the less experienced user groups (P < .007). Comparison of learning curves showed significant improvement of proficiency in time and camera stability for all groups over three repetitions (P < .007). The results of this study show face validity and construct validity of the LCN module. The module is suitable for use in training curricula for operating room nurses and novice surgical trainees, aimed at improving team performance in minimally invasive surgery. © The Author(s) 2013.
ETR BUILDING, TRA642, INTERIOR. FIRST FLOOR. REACTOR IS IN CENTER ...
ETR BUILDING, TRA-642, INTERIOR. FIRST FLOOR. REACTOR IS IN CENTER OF VIEW. CAMERA FACES NORTHWEST. NOTE CRANE RAILS AND DANGLING ELECTRICAL CABLE AT UPPER PART OF VIEW FOR "MOFFETT 2 TON" CRANE. INL NEGATIVE NO. HD46-14-4. Mike Crane, Photographer, 2/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
A&M. Outdoor turntable. Aerial view of trackage as of 1954. ...
A&M. Outdoor turntable. Aerial view of trackage as of 1954. Camera faces northeast along line of track heading for the IET. Upper set of east/west tracks head for the hot shop; the other, for the cold shop. Date: November 24, 1954. INEEL negative no. 13203 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
37. View of the control house on the north tower ...
37. View of the control house on the north tower from the north span facing north. Note mirror and video camera used by bridge operator to check for vessel traffic prior to operating the bridge, loudspeaker and sirens to warn pedestrians and boaters. - Henry Ford Bridge, Spanning Cerritos Channel, Los Angeles-Long Beach Harbor, Los Angeles, Los Angeles County, CA
1. GENERAL VIEW OF SLC3W SHOWING SOUTH FACE AND EAST ...
1. GENERAL VIEW OF SLC-3W SHOWING SOUTH FACE AND EAST SIDE OF A-FRAME MOBILE SERVICE TOWER (MST). MST IN SERVICE POSITION OVER LAUNCHER AND FLAME BUCKET. CABLE TRAYS BETWEEN LAUNCH OPERATIONS BUILDING (BLDG. 763) AND SLC-3W IN FOREGROUND. LIQUID OXYGEN APRON VISIBLE IMMEDIATELY EAST (RIGHT) OF MST; FUEL APRON VISIBLE IMMEDIATELY WEST (LEFT) OF MST. A PORTION OF THE FLAME BUCKET VISIBLE BELOW THE SOUTH FACE OF THE MST. CAMERA TOWERS VISIBLE EAST OF MST BETWEEN ROAD AND CABLE TRAY, AND SOUTH OF MST NEAR LEFT MARGIN OF PHOTOGRAPH. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Pad 3 West, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
Mars Orbiter Camera Views the 'Face on Mars' - Best View from Viking
NASA Technical Reports Server (NTRS)
1998-01-01
Shortly after midnight Sunday morning (5 April 1998 12:39 AM PST), the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft successfully acquired a high resolution image of the 'Face on Mars' feature in the Cydonia region. The image was transmitted to Earth on Sunday, and retrieved from the mission computer data base Monday morning (6 April 1998). The image was processed at the Malin Space Science Systems (MSSS) facility 9:15 AM and the raw image immediately transferred to the Jet Propulsion Laboratory (JPL) for release to the Internet. The images shown here were subsequently processed at MSSS.
The picture was acquired 375 seconds after the spacecraft's 220th close approach to Mars. At that time, the 'Face', located at approximately 40.8° N, 9.6° W, was 275 miles (444 km) from the spacecraft. The 'morning' sun was 25° above the horizon. The picture has a resolution of 14.1 feet (4.3 meters) per pixel, making it ten times higher resolution than the best previous image of the feature, which was taken by the Viking Mission in the mid-1970s. The full image covers an area 2.7 miles (4.4 km) wide and 25.7 miles (41.5 km) long. This Viking Orbiter image is one of the best Viking pictures of the Cydonia area where the 'Face' is located. Marked on the image are the 'footprint' of the high resolution (narrow angle) Mars Orbiter Camera image and the area seen in enlarged views (dashed box). See PIA01440-1442 for these images in raw and processed form. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
A&M. TAN607. Detail of fuel storage pool under construction. Camera ...
A&M. TAN-607. Detail of fuel storage pool under construction. Camera is on berm and facing northwest. Note depth of excavation. Formwork underway for floor and concrete walls of pool; wall between pool and vestibule. At center left of view, foundation for liquid waste treatment plant is poured. Date: August 25, 1953. INEEL negative no. 8541 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
ETR BUILDING, TRA642, INTERIOR. CONSOLE FLOOR, NORTH HALF. CAMERA IS ...
ETR BUILDING, TRA-642, INTERIOR. CONSOLE FLOOR, NORTH HALF. CAMERA IS NEAR NORTHWEST CORNER AND FACING SOUTH ALONG WEST CORRIDOR. STORAGE CANAL IS ALONG LEFT OF VIEW; PERIMETER WALL, ALONG RIGHT. CORRIDOR WAS ONE MEANS OF WALKING FROM NORTH TO SOUTH SIDE OF CONSOLE FLOOR. INL NEGATIVE NO. HD46-18-1. Mike Crane, Photographer, 2/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
NASA Astrophysics Data System (ADS)
Hayakawa, Yuichi S.; Obanawa, Hiroyuki; Yoshida, Hidetsugu; Naruhashi, Ryutaro; Okumura, Koji; Zaiki, Masumi
2016-04-01
Debris avalanches caused by sector collapse of a volcanic mountain often form depositional landforms with characteristic surface morphology comprising hummocks. Geomorphological and sedimentological analyses of debris avalanche deposits (DAD) on the northeastern face of Mt. Erciyes in central Turkey have been performed to investigate the mechanisms and processes of the debris avalanche. The morphometry of the hummocks provides an opportunity to examine the volumetric and kinematic characteristics of the DAD. Although the exact age is unknown, the sector collapse producing this DAD is thought to have occurred in the late Pleistocene (sometime during 90-20 ka), and subsequent sediment supply from the DAD could have affected ancient human activities in the downstream basin areas. In order to measure the detailed surface morphology and depositional structures of the DAD, we apply structure-from-motion multi-view stereo (SfM-MVS) photogrammetry using an unmanned aerial system (UAS) and a handheld camera. The UAS, comprising a small unmanned aerial vehicle (sUAV) and a digital camera, provides low-altitude aerial photographs that capture surface morphology over an area of several square kilometers. High-resolution topographic data, as well as an orthorectified image, of the hummocks were then obtained from the digital elevation model (DEM), and the geometric features of the hummocks were examined. A handheld camera was also used to photograph an outcrop face of the DAD along a road to support the sedimentological investigation. Three-dimensional topographic models of the outcrop, with a panoramic orthorectified image projected onto a vertical plane, were obtained. These data make it possible to describe the sedimentological structure of the hummocks in the DAD effectively. The detailed map of the DAD is further examined alongside a regional geomorphological map for comparison with other geomorphological features including fluvial valleys, terraces, lakes, and active faults.
The So-Called 'Face on Mars' in Infrared
NASA Technical Reports Server (NTRS)
2002-01-01
[figure removed for brevity, see original site] (Released 24 July 2002) This set of THEMIS infrared images shows the so-called 'face on Mars' landform located in the northern plains of Mars near 40° N, 10° W (350° E). The 'face' is located near the center of the image approximately 1/6 of the way down from the top, and is one of a large number of knobs, mesas, hills, and buttes that are visible in this THEMIS image. The THEMIS infrared camera has ten different filters between 6.2 and 15 micrometers - nine view the surface and one views the CO2 atmosphere. The calibrated and geometrically projected data from all nine surface-viewing filters are shown in this figure. The major differences seen in this region are due to temperature effects -- sunlit slopes are warm (bright), whereas those in shadow are cold (dark). The temperature in this scene ranges from -50°C (darkest) to -15°C (brightest). The major differences between the filters are due to the expected variation in the amount of energy emitted from the surface at different wavelengths. Minor spectral differences (infrared 'color') also exist between the filters, but these differences are small in this region due to the uniform composition of the rocks and soils exposed at the surface. The THEMIS infrared camera provides an excellent regional view of Mars - this image covers an area 32 kilometers (20 miles) by approximately 200 kilometers (125 miles) at a resolution of 100 meters per picture element ('pixel'). This image provides a broad perspective of the landscape and geology of the Cydonia region, showing numerous knobs and hills that have been eroded into a remarkable array of different shapes. In this 'big picture' view the Cydonia region is seen to be covered with dozens of interesting knobs and mesas that are similar in many ways to the knob named the 'face' - so many in fact that it requires care to discover the 'face' among this jumble of knobs and hills.
The 3-km long 'face' knob was first imaged by the Viking spacecraft in the 1970s and was seen by some to resemble a face carved into the rocks of Mars. Since that time the Mars Orbiter Camera on the Mars Global Surveyor spacecraft has provided detailed views of this hill that clearly show that it is a normal geologic feature with slopes and ridges carved by eons of wind and downslope motion due to gravity. Many of the knobs in Cydonia, including the 'face', have several flat ledges partway up the hill slopes. These ledges are made of more resistant layers of rock and are the last remnants of layers that once were continuous across this entire region. Erosion has completely removed these layers in most places, leaving behind only the small isolated hills and knobs seen today.
Topview stereo: combining vehicle-mounted wide-angle cameras to a distance sensor array
NASA Astrophysics Data System (ADS)
Houben, Sebastian
2015-03-01
The growing variety of vehicle-mounted sensors required to fulfill driver assistance tasks has become a substantial factor in automobile manufacturing cost. We present a stereo distance method exploiting the overlapping fields of view of a multi-camera fisheye surround view system of the kind used for near-range vehicle surveillance tasks, e.g. in parking maneuvers. Hence, we aim at creating a new input signal from sensors that are already installed. Particular properties of wide-angle cameras (e.g. strongly varying resolution across the image) demand an adaptation of the image processing pipeline to several problems that do not arise in classical stereo vision performed with cameras carefully designed for that purpose. We introduce the algorithms for rectification, correspondence analysis, and regularization of the disparity image, discuss reasons for and avoidance of the observed caveats, and present first results on a prototype topview setup.
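After rectification, converting a measured disparity into a distance uses the standard stereo relation Z = f·B / d. A sketch with illustrative numbers (not from the paper); for fisheye cameras this only holds once the views are rectified onto a common image plane, which is exactly why the adapted rectification step above matters.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Classic rectified-stereo relation: Z = f * B / d.
    return focal_px * baseline_m / disparity_px

# Illustrative values: 800 px focal length, 1.5 m inter-camera baseline,
# 40 px measured disparity for some matched feature.
z = depth_from_disparity(focal_px=800.0, baseline_m=1.5, disparity_px=40.0)
print(z)  # 30.0 (metres)
```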
Integrated multi sensors and camera video sequence application for performance monitoring in archery
NASA Astrophysics Data System (ADS)
Taha, Zahari; Arif Mat-Jizat, Jessnor; Amirul Abdullah, Muhammad; Muazu Musa, Rabiu; Razali Abdullah, Mohamad; Fauzi Ibrahim, Mohamad; Hanafiah Shaharudin, Mohd Ali
2018-03-01
This paper explains the development of a comprehensive archery performance monitoring software system consisting of three camera views and five body sensors. The five body sensors evaluate biomechanics-related variables: flexor and extensor muscle activity, heart rate, postural sway, and bow movement during archery performance. The three camera views and the five body sensors are integrated into a single computer application, which enables the user to view all the data in a single user interface. The five body sensors' data are displayed in numerical and graphical form in real-time. The information transmitted by the body sensors is processed by an embedded algorithm that automatically computes a summary of the athlete's biomechanical performance and displays it in the application interface. This performance is later compared to the psycho-fitness performance pre-computed from data previously entered into the application. All the data (camera views, body sensors, and performance computations) are recorded for further analysis by a sports scientist. Our application serves as a powerful tool for assisting the coach and athletes to observe and identify any wrong technique employed during training, giving room for correction and re-evaluation to improve overall performance in the sport of archery.
Photogrammetry Toolbox Reference Manual
NASA Technical Reports Server (NTRS)
Liu, Tianshu; Burner, Alpheus W.
2014-01-01
Specialized photogrammetric and image processing MATLAB functions useful for wind tunnel and other ground-based testing of aerospace structures are described. These functions include single view and multi-view photogrammetric solutions, basic image processing to determine image coordinates, 2D and 3D coordinate transformations and least squares solutions, spatial and radiometric camera calibration, epipolar relations, and various supporting utility functions.
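One of the multi-view photogrammetric solutions such a toolbox needs is triangulating a 3-D point from its image coordinates in two calibrated views. The toolbox itself is MATLAB; the sketch below shows the standard linear (DLT) least-squares triangulation in Python with synthetic cameras, not the toolbox's actual code.

```python
import numpy as np

def project(P, X):
    # Pinhole projection of a 3-D point to normalized image coordinates.
    p = P @ np.append(X, 1.0)
    return p[:2] / p[2]

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation: each view contributes two rows of the
    # homogeneous system A X = 0; solve for X via SVD (least squares).
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]   # right singular vector of smallest sigma
    return X[:3] / X[3]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # 1-unit baseline
X_true = np.array([0.5, 0.2, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noisy image coordinates from more than two views, the same construction simply gains two rows per extra view, which is where the least-squares framing pays off.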
Efficient view based 3-D object retrieval using Hidden Markov Model
NASA Astrophysics Data System (ADS)
Jain, Yogendra Kumar; Singh, Roshan Kumar
2013-12-01
Recent research effort has been dedicated to view-based 3-D object retrieval, because 3-D objects are highly discriminative and have multi-view representations. State-of-the-art methods depend heavily on their own camera array settings for capturing views of a 3-D object and use a complex Zernike descriptor and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. To move toward a general framework for efficient 3-D object retrieval that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View-Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, meaning views can be captured from any direction without any camera array restriction. Views (including query views) are clustered to generate view clusters, which are then used to build the query model with an HMM. The HMM is used in two ways: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained using these view clusters, and retrieval is performed by combining the query model with the HMM. The proposed approach removes the static camera array setting for view capturing and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme performs better than existing methods.
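The retrieval-side HMM step amounts to scoring an observation sequence against a trained model: the object model giving the query's view-cluster sequence the highest likelihood wins. Below is a minimal scaled forward algorithm in numpy with a toy two-state model; the state and observation spaces in the paper come from its view clusters, and the numbers here are purely illustrative.

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    # Scaled HMM forward algorithm: returns log P(obs | model).
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict step, then emission weight
        c = alpha.sum()
        log_p += np.log(c)
        alpha = alpha / c
    return log_p

pi = np.array([0.5, 0.5])               # toy two-state model
A = np.array([[0.9, 0.1], [0.1, 0.9]])  # sticky state transitions
B = np.array([[0.8, 0.2], [0.2, 0.8]])  # emission probabilities
consistent = forward_loglik(pi, A, B, [0, 0, 0])
alternating = forward_loglik(pi, A, B, [0, 1, 0])
```

As expected, the sequence whose observations match the sticky state structure scores higher, which is the property retrieval exploits.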
ETR BUILDING, TRA642. SOUTH SIDE VIEW INCLUDES SOUTH SIDES OF ...
ETR BUILDING, TRA-642. SOUTH SIDE VIEW INCLUDES SOUTH SIDES OF ETR BUILDING (HIGH ROOF LINE); ELECTRICAL BUILDING (ONE-STORY, MADE OF PUMICE BLOCKS), TRA-648; AND HEAT EXCHANGER BUILDING (WITH BUILDING NUMBERS), TRA-644. NOTE PROJECTION OF ELECTRICAL BUILDING AT LEFT EDGE OF VIEW. CAMERA FACES NORTH. INL NEGATIVE NO. HD46-37-3. Mike Crane, Photographer, 4/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Snaptran2 experiment mounted on dolly being hauled by shielded locomotive ...
Snaptran-2 experiment mounted on dolly being hauled by shielded locomotive from IET towards A&M turntable. Note leads from experiment gathered at coupling bar in lower right of view. Another dolly in view at left. Camera facing southeast. Photographer: Page Comiskey. Date: August 25, 1965. INEEL negative no. 65-4503 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Gong, Mali; Guo, Rui; He, Sifeng; Wang, Wei
2016-11-01
The security threats caused by multi-rotor unmanned aerial vehicles (UAVs) are serious, especially in public places. To detect and control multi-rotor UAVs, knowledge of their IR characteristics is necessary. The IR characteristics of a typical commercial quad-rotor UAV are investigated in this paper through thermal imaging with an IR camera. Combining the 3D geometry and IR images of the UAV, a 3D IR characteristics model is established so that the radiant power from different views can be obtained. An estimate of the operating range for detecting the UAV is calculated theoretically using the signal-to-noise ratio as the criterion. Field experiments were implemented with an uncooled IR camera at an environment temperature of 12°C against a uniform background. For the front view, the operating range is about 150 m, which is close to the simulation result of 170 m.
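The shape of such an operating-range estimate follows an inverse-square argument: for a point target the received signal falls as 1/R², so the range at which SNR drops to the required threshold scales as the square root of the signal term. A toy version with made-up parameter values, ignoring atmospheric transmission, detector aperture, and other real radiometric terms:

```python
import math

def max_detection_range(signal_at_1m, noise, snr_required):
    # Toy point-target model: SNR(R) = signal_at_1m / (noise * R^2);
    # solving SNR(R) = snr_required gives the maximum range.
    return math.sqrt(signal_at_1m / (noise * snr_required))

# Hypothetical numbers chosen only to show the scaling behaviour.
r = max_detection_range(signal_at_1m=900.0, noise=0.1, snr_required=5.0)
print(round(r))  # 42
```

Quadrupling the target's radiant signal doubles the range, which matches the intuition behind comparing front-view and other-view radiant power in the paper.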
LOFT. Reactor support apparatus inside containment building (TAN650). Camera is ...
LOFT. Reactor support apparatus inside containment building (TAN-650). Camera is on crane rail level and facing northerly. View shows top two banks of round conduit openings on wall for electrical and other connections to control room. Ladders and platforms provide access to reactor instrumentation. Note hatch in floor and drain at edge of floor near wall. Date: 1974. INEEL negative no. 74-219 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
A Fast Visible Camera Divertor-Imaging Diagnostic on DIII-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roquemore, A; Maingi, R; Lasnier, C
2007-06-19
In recent campaigns, the Photron Ultima SE fast framing camera has proven to be a powerful diagnostic when applied to imaging divertor phenomena on the National Spherical Torus Experiment (NSTX). Active areas of NSTX divertor research addressed with the fast camera include identification of types of Edge Localized Modes (ELMs) [1], dust migration, impurity behavior, and a number of phenomena related to turbulence. To compare such edge and divertor phenomena in low and high aspect ratio plasmas, a multi-institutional collaboration was developed for fast visible imaging on NSTX and DIII-D. More specifically, the collaboration was proposed to compare the NSTX small type V ELM regime [2] and the residual ELMs observed during Type I ELM suppression with external magnetic perturbations on DIII-D [3]. As part of the collaboration effort, the Photron camera was recently installed on DIII-D with a tangential view similar to the view implemented on NSTX, enabling a direct comparison between the two machines. The rapid implementation was facilitated by utilizing the existing optics that coupled the visible spectral output from the divertor vacuum ultraviolet UVTV system, which has a view similar to the view developed for the divertor tangential TV camera [4]. A remote-controlled filter wheel was implemented, as was the radiation shield required for the DIII-D installation. The installation and initial operation of the camera are described in this paper, and the first images from the DIII-D divertor are presented.
Tightly-Coupled GNSS/Vision Using a Sky-Pointing Camera for Vehicle Navigation in Urban Areas.
Gakne, Paul Verlaine; O'Keefe, Kyle
2018-04-17
This paper presents a method of fusing the ego-motion of a robot or a land vehicle estimated from an upward-facing camera with Global Navigation Satellite System (GNSS) signals for navigation purposes in urban environments. A sky-pointing camera is mounted on the top of a car and synchronized with a GNSS receiver. The advantages of this configuration are two-fold: firstly, for the GNSS signals, the upward-facing camera will be used to classify the acquired images into sky and non-sky (also known as segmentation). A satellite falling into the non-sky areas (e.g., buildings, trees) will be rejected and not considered for the final position solution computation. Secondly, the sky-pointing camera (with a field of view of about 90 degrees) is helpful for urban area ego-motion estimation in the sense that it does not see most of the moving objects (e.g., pedestrians, cars) and thus is able to estimate the ego-motion with fewer outliers than is typical with a forward-facing camera. The GNSS and visual information systems are tightly-coupled in a Kalman filter for the final position solution. Experimental results demonstrate the ability of the system to provide satisfactory navigation solutions and better accuracy than the GNSS-only and the loosely-coupled GNSS/vision, 20 percent and 82 percent (in the worst case) respectively, in a deep urban canyon, even in conditions with fewer than four GNSS satellites.
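The satellite-gating step described above (classify upward-camera pixels as sky or non-sky, then reject satellites whose projections fall on buildings or trees) can be sketched in a few lines. The luminance threshold and function names below are illustrative stand-ins for the paper's actual segmentation, which would use a learned or adaptive classifier:

```python
import numpy as np

def sky_mask(image, brightness_thresh=0.6):
    """Classify pixels of an upward-facing camera image as sky / non-sky.

    A deliberately simple luminance threshold stands in for the paper's
    segmentation step. image: HxWx3 float array with values in [0, 1].
    """
    luminance = image.mean(axis=2)
    return luminance > brightness_thresh

def accept_satellite(mask, pixel_uv):
    """Keep a GNSS satellite only if its projected pixel falls on open sky."""
    u, v = pixel_uv
    if not (0 <= v < mask.shape[0] and 0 <= u < mask.shape[1]):
        return False
    return bool(mask[v, u])

# Toy image: bright sky in the top half, dark building in the bottom half.
img = np.zeros((4, 4, 3))
img[:2] = 0.9
m = sky_mask(img)
print(accept_satellite(m, (1, 0)))  # satellite over sky -> True
print(accept_satellite(m, (1, 3)))  # satellite behind building -> False
```

Accepted satellites would then feed the tightly-coupled Kalman filter together with the vision-derived ego-motion.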
Multi-view video segmentation and tracking for video surveillance
NASA Astrophysics Data System (ADS)
Mohammadi, Gelareh; Dufaux, Frederic; Minh, Thien Ha; Ebrahimi, Touradj
2009-05-01
Tracking moving objects is a critical step for smart video surveillance systems. Despite the complexity increase, multiple camera systems exhibit the undoubted advantages of covering wide areas and handling the occurrence of occlusions by exploiting the different viewpoints. The technical problems in multiple camera systems are several: installation, calibration, objects matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple un-calibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. Basically, the proposed technique consists of the following steps. We first perform a single-view tracking algorithm on each camera view, and then apply a consistent object labeling algorithm on all views. In the next step, we verify objects in each view separately for inconsistencies. Corresponding objects are extracted through a homography transform from one view to the other and vice versa. Having found the corresponding objects of different views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and mapped frame) a set of descriptors is extracted to find the best match between two views based on region descriptor similarity. This method is able to deal with multiple objects. Track management issues such as occlusion, appearance and disappearance of objects are resolved using information from all views. This method is capable of tracking rigid and deformable objects, and this versatility makes it suitable for different application scenarios.
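The view-to-view transfer step above relies on mapping points through a homography. A minimal sketch of that point-mapping operation follows; the matrix `H` here is a toy translation, whereas a real system would estimate it from matched features (e.g. with RANSAC):

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 image points through a 3x3 homography (planar view-to-view).

    Assumes points lie on a common plane (e.g. the ground); H would be
    estimated from feature correspondences in a real tracker.
    """
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # back to Euclidean

# Identity-plus-translation homography: shifts every point by (10, 5).
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0,  5.0],
              [0.0, 0.0,  1.0]])
print(apply_homography(H, [[0, 0], [2, 3]]))
```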
NASA Astrophysics Data System (ADS)
Chen, Chung-Hao; Yao, Yi; Chang, Hong; Koschan, Andreas; Abidi, Mongi
2013-06-01
Due to increasing security concerns, a complete security system should consist of two major components, a computer-based face-recognition system and a real-time automated video surveillance system. A computer-based face-recognition system can be used in gate access control for identity authentication. In recent studies, multispectral imaging and fusion of multispectral narrow-band images in the visible spectrum have been employed and proven to enhance the recognition performance over conventional broad-band images, especially when the illumination changes. Thus, we present an automated method that specifies the optimal spectral ranges under the given illumination. Experimental results verify the consistent performance of our algorithm via the observation that an identical set of spectral band images is selected under all tested conditions. Our discovery can be practically used for a new customized sensor design associated with given illuminations for an improved face recognition performance over conventional broad-band images. In addition, once a person is authorized to enter a restricted area, we still need to continuously monitor his/her activities for the sake of security. Because pan-tilt-zoom (PTZ) cameras are capable of covering a panoramic area and maintaining high resolution imagery for real-time behavior understanding, research in automated surveillance systems with multiple PTZ cameras has become increasingly important. Most existing algorithms require the prior knowledge of intrinsic parameters of the PTZ camera to infer the relative positioning and orientation among multiple PTZ cameras. To overcome this limitation, we propose a novel mapping algorithm that derives the relative positioning and orientation between two PTZ cameras based on a unified polynomial model. This reduces the dependence on the knowledge of intrinsic parameters of the PTZ cameras and their relative positions.
Experimental results demonstrate that our proposed algorithm presents substantially reduced computational complexity and improved flexibility at the cost of slightly decreased pixel accuracy as compared to Chen and Wang's method [18].
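The idea of a polynomial mapping between two PTZ cameras, trained from joint observations rather than intrinsic calibration, can be illustrated with a least-squares fit. This is a one-dimensional toy (pan angle only) with synthetic data, not the paper's unified polynomial model:

```python
import numpy as np

def fit_ptz_mapping(pan_a, pan_b, degree=2):
    """Fit a polynomial mapping camera A's pan angle to camera B's.

    Corresponding (pan_a, pan_b) observations of the same target train
    the mapping directly, sidestepping intrinsic parameter calibration.
    """
    return np.polyfit(pan_a, pan_b, degree)

# Synthetic ground truth: pan_b = 0.5 * pan_a + 10 (a linear relation
# the degree-2 fit should recover with a near-zero quadratic term).
pan_a = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
pan_b = 0.5 * pan_a + 10.0
coeffs = fit_ptz_mapping(pan_a, pan_b)
print(round(float(np.polyval(coeffs, 24.0)), 3))  # ~22.0
```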
WATER PUMP HOUSE, TRA619. VIEW OF PUMP HOUSE UNDER CONSTRUCTION. ...
WATER PUMP HOUSE, TRA-619. VIEW OF PUMP HOUSE UNDER CONSTRUCTION. CAMERA IS ON WATER TOWER AND FACES NORTHWEST. TWO RESERVOIR TANKS ALREADY ARE COMPLETED. NOTE EXCAVATIONS FOR PIPE LINES EXITING FROM BELOW GROUND ON SOUTH SIDE OF PUMP HOUSE. BUILDING AT LOWER RIGHT IS ELECTRICAL CONTROL BUILDING, TRA-623. SWITCHYARD IS IN LOWER RIGHT CORNER OF VIEW. INL NEGATIVE NO. 2753. Unknown Photographer, ca. 6/1951 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
HOT CELL BUILDING, TRA632, INTERIOR. CONTEXTUAL VIEW OF HOT CELL ...
HOT CELL BUILDING, TRA-632, INTERIOR. CONTEXTUAL VIEW OF HOT CELL NO. 2 FROM STAIRWAY ALONG NORTH WALL. OBSERVATION WINDOW ALONG WEST SIDE BENEATH "CELL 2" SIGN. DOORWAY IN LEFT OF VIEW LEADS TO CELL 1 WORK AREA OR TO EXIT OUTDOORS TO NORTH. RADIATION DETECTION MONITOR TO RIGHT OF DOOR. CAMERA FACING SOUTHWEST. INL NEGATIVE NO. HD46-28-3. Mike Crane, Photographer, 2/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras
NASA Astrophysics Data System (ADS)
El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid
2015-03-01
In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system consisting of two unattached cameras to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated by using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental. At each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the result of the matching of interest points between the inserted image and the previous image. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. Once all projection matrices are estimated and the matches between consecutive images are detected, a sparse Euclidean 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, which is well suited to this type of camera motion, is applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
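The triangulation step that recovers a 3D point from two projection matrices can be sketched with the standard linear (DLT) method. The camera matrices below are toy values chosen so the result can be checked by hand; they are not from the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2: 3x4 projection matrices; x1, x2: matched pixel coordinates.
    Builds the homogeneous system A X = 0 and solves it via SVD, as in
    standard two-view reconstruction.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize

# Two simple cameras: one at the origin, one translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # ~[0.5, 0.2, 4.0]
```

With noisy matches, the same linear estimate would typically be refined by the local bundle adjustment the paper describes.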
Multi-energy SXR cameras for magnetically confined fusion plasmas (invited)
NASA Astrophysics Data System (ADS)
Delgado-Aparicio, L. F.; Maddox, J.; Pablant, N.; Hill, K.; Bitter, M.; Rice, J. E.; Granetz, R.; Hubbard, A.; Irby, J.; Greenwald, M.; Marmar, E.; Tritz, K.; Stutman, D.; Stratton, B.; Efthimion, P.
2016-11-01
A compact multi-energy soft x-ray camera has been developed for time-, energy- and space-resolved measurements of the soft x-ray emissivity in magnetically confined fusion plasmas. Multi-energy soft x-ray imaging provides a unique opportunity for measuring, simultaneously, a variety of important plasma properties (Te, nZ, ΔZeff, and ne,fast). The electron temperature can be obtained by modeling the slope of the continuum radiation from ratios of the available brightness and inverted radial emissivity profiles over multiple energy ranges. Impurity density measurements are also possible using the line-emission from medium- to high-Z impurities to separate the background as well as transient levels of metal contributions. This technique should also be explored as a burning plasma diagnostic in view of its simplicity and robustness.
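The continuum-slope idea above can be illustrated with a deliberately simplified model. Assuming the bremsstrahlung continuum scales as exp(-E/Te), the brightness ratio of two narrow energy bands fixes Te; the band energies and function below are illustrative, not the diagnostic's full analysis chain:

```python
import math

def electron_temperature(b_low, b_high, e_low_kev, e_high_kev):
    """Estimate Te (keV) from the continuum slope between two SXR bands.

    Simplified model: continuum ~ exp(-E/Te), so the ratio of two narrow
    bands gives Te = (E2 - E1) / ln(b1 / b2). Real analysis also handles
    filter response, line emission, and profile inversion.
    """
    return (e_high_kev - e_low_kev) / math.log(b_low / b_high)

# Synthetic brightnesses generated from Te = 2 keV.
te = 2.0
b1 = math.exp(-3.0 / te)   # band centred at 3 keV
b2 = math.exp(-5.0 / te)   # band centred at 5 keV
print(round(electron_temperature(b1, b2, 3.0, 5.0), 6))  # 2.0
```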
NASA Technical Reports Server (NTRS)
2003-01-01
Dark smoke from oil fires extends for about 60 kilometers south of Iraq's capital city of Baghdad in these images acquired by the Multi-angle Imaging SpectroRadiometer (MISR) on April 2, 2003. The thick, almost black smoke is apparent near image center and contains chemical and particulate components hazardous to human health and the environment. The top panel is from MISR's vertical-viewing (nadir) camera. Vegetated areas appear red here because this display is constructed using near-infrared, red and blue band data, displayed as red, green and blue, respectively, to produce a false-color image. The bottom panel is a combination of two camera views of the same area and is a 3-D stereo anaglyph in which red band nadir camera data are displayed as red, and red band data from the 60-degree backward-viewing camera are displayed as green and blue. Both panels are oriented with north to the left in order to facilitate stereo viewing. Viewing the 3-D anaglyph with red/blue glasses (with the red filter placed over the left eye and the blue filter over the right) makes it possible to see the rising smoke against the surface terrain. This technique helps to distinguish features in the atmosphere from those on the surface. In addition to the smoke, several high, thin cirrus clouds (barely visible in the nadir view) are readily observed using the stereo image. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17489. The panels cover an area of about 187 kilometers x 123 kilometers, and use data from blocks 63 to 65 within World Reference System-2 path 168. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC.
The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
ETR CONTROL BUILDING, TRA647, INTERIOR. CONTROL ROOM, CONTEXTUAL VIEW. INSTRUMENT ...
ETR CONTROL BUILDING, TRA-647, INTERIOR. CONTROL ROOM, CONTEXTUAL VIEW. INSTRUMENT PANELS AT REAR OF OPERATOR'S CONSOLE GAVE OPERATOR STATUS OF REACTOR PERFORMANCE, COOLANT-WATER CHARACTERISTICS AND OTHER INDICATORS. WINDOWS AT RIGHT LOOKED INTO ETR BUILDING FIRST FLOOR. CAMERA FACING EAST. INL NEGATIVE NO. HD42-6. Mike Crane, Photographer, 3/2004 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Contextual view of Warner's Ranch (ranch house in center and ...
Contextual view of Warner's Ranch (ranch house in center and trading post/barn on right), showing San Felipe Road and orientation of buildings in San Jose Valley. Note approximate locations of Overland Trail (now paved highway) in front of house and San Diego cutoff (dirt road) on left. Camera facing northwest. - Warner Ranch, Ranch House, San Felipe Road (State Highway S2), Warner Springs, San Diego County, CA
LOFT complex, aerial view taken on the same day ...
LOFT complex, aerial view taken on the same day as HAER photo ID-33-E-376. Camera facing south. Note curve of rail track toward hot shop (TAN-607). Earth shielding on control building (TAN-630) is partly removed, showing edge of concrete structure. Great southern butte on horizon. Date: 1975. INEEL negative no. 75-3693 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Views of the extravehicular activity of Astronaut Stewart during STS 41-B
NASA Technical Reports Server (NTRS)
1984-01-01
Close-up frontal view of Astronaut Robert L. Stewart, mission specialist, as he participates in an extravehicular activity (EVA), a few meters away from the cabin of the shuttle Challenger. The open payload bay is reflected in his helmet visor as he faces the camera. Stewart is wearing the extravehicular mobility unit (EMU) and one of the manned maneuvering units (MMU) developed for this mission.
Implementation of a Multi-Robot Coverage Algorithm on a Two-Dimensional, Grid-Based Environment
2017-06-01
two planar laser range finders with a 180-degree field of view, color camera, vision beacons, and wireless communicator. In their system, the robots... Master's thesis. ...path planning coverage algorithm for a multi-robot system in a two-dimensional, grid-based environment. We assess the applicability of a topology
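A complete-coverage pass over a 2-D occupancy grid, the setting this thesis works in, can be sketched with a breadth-first traversal. A single robot is shown for brevity; this is a stand-in illustration, not the thesis's multi-robot algorithm:

```python
from collections import deque

def coverage_order(grid, start):
    """Visit every reachable free cell of a 2-D occupancy grid (1 = obstacle).

    BFS guarantees each free cell connected to `start` is enumerated once,
    which is the completeness property coverage algorithms aim for.
    """
    rows, cols = len(grid), len(grid[0])
    seen, order = {start}, []
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        order.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return order

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(len(coverage_order(grid, (0, 0))))  # 8 free cells covered
```

A multi-robot version would partition the grid and run one such traversal per robot over its region.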
ETR BUILDING, TRA642, INTERIOR. BASEMENT. CAMERA IS AT MIDPOINT OF ...
ETR BUILDING, TRA-642, INTERIOR. BASEMENT. CAMERA IS AT MIDPOINT OF SOUTH CORRIDOR AND FACES EAST, OPPOSITE DIRECTION FROM VIEWS ID-33-G-98 AND ID-33-G-99. STEEL DOOR AT LEFT OPENS BY ROLLING IT INTO CORRIDOR ON RAILS. TANK AT FAR END OF CORRIDOR IS EMERGENCY CORE COOLING CATCH TANK FOR A TEST LOOP. INL NEGATIVE NO. HD46-30-4. Mike Crane, Photographer, 2/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Deep neural network features for horses identity recognition using multiview horses' face pattern
NASA Astrophysics Data System (ADS)
Jarraya, Islem; Ouarda, Wael; Alimi, Adel M.
2017-03-01
To control the state of horses in the barn, breeders need a monitoring system with a surveillance camera that can identify and distinguish between horses. We proposed in [5] a method for horse identification at a distance using the frontal facial biometric modality. Due to the change of views, face recognition becomes more difficult. In this paper, the number of images used in our THoDBRL'2015 database (Tunisian Horses DataBase of Regim Lab) is augmented by adding images of other views. Thus, we used front, right, and left profile views of the face. Moreover, we suggest an approach for multiview face recognition. First, we propose to use the Gabor filter for face characterization. Next, due to the augmented number of images and the large number of Gabor features, we propose to use a Deep Neural Network with an auto-encoder to obtain more pertinent features and to reduce the size of the feature vector. Finally, we evaluate the proposed approach on our THoDBRL'2015 database, using a linear SVM for classification.
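The Gabor characterization step can be sketched as a small filter bank whose pooled responses form a feature vector. Kernel size, wavelength, and the mean-pooling choice below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def gabor_kernel(size, theta, wavelength, sigma=None):
    """Real part of a Gabor kernel for one orientation and scale."""
    sigma = sigma if sigma is not None else 0.5 * wavelength
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return gauss * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(image, orientations=4, wavelength=4.0, size=7):
    """Mean absolute filter response per orientation -> compact feature vector."""
    feats = []
    for k in range(orientations):
        kern = gabor_kernel(size, np.pi * k / orientations, wavelength)
        # Valid-mode correlation via sliding windows (no SciPy dependency).
        windows = np.lib.stride_tricks.sliding_window_view(image, (size, size))
        resp = (windows * kern).sum(axis=(-2, -1))
        feats.append(np.abs(resp).mean())
    return np.array(feats)

img = np.random.default_rng(0).random((16, 16))
print(gabor_features(img).shape)  # (4,)
```

In the paper's pipeline, such vectors (far larger, over many scales and orientations) would then be compressed by the auto-encoder before SVM classification.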
Mixing Waters and Moving Ships off the North Carolina Coast
NASA Technical Reports Server (NTRS)
2000-01-01
The estuarine and marine environments of the United States' eastern seaboard provide the setting for a variety of natural and human activities associated with the flow of water. This set of Multi-angle Imaging SpectroRadiometer images from October 11, 2000 (Terra orbit 4344) captures the intricate system of barrier islands, wetlands, and estuaries comprising the coastal environments of North Carolina and southern Virginia. On the right-hand side of the images, a thin line of land provides a tenuous separation between the Albemarle and Pamlico Sounds and the Atlantic Ocean. The wetland communities of this area are vital to productive fisheries and water quality. The top image covers an area of about 350 kilometers x 260 kilometers and is a true-color view from MISR's 46-degree backward-looking camera. Looking away from the Sun suppresses glint from the reflective water surface and enables mapping the color of suspended sediments and plant life near the coast. Out in the open sea, the dark blue waters indicate the Gulf Stream. As it flows toward the northeast, this ocean current presses close to Cape Hatteras (the pointed cape in the lower portion of the images), and brings warm, nutrient-poor waters northward from equatorial latitudes. North Carolina's Outer Banks are often subjected to powerful currents and storms which cause erosion along the east-facing shorelines. In an effort to save the historic Cape Hatteras lighthouse from the encroaching sea, it was jacked out of the ground and moved about 350 meters in 1999. The bottom image was created with red band data from the 46-degree backward, 70-degree forward, and 26-degree forward cameras displayed as red, green, and blue, respectively. The color variations in this multi-angle composite indicate different angular (rather than spectral) signatures. Here, the increased reflection of land vegetation at the angle viewing away from the Sun causes a reddish tint.
Water, on the other hand, appears predominantly in shades of blue and green due to the bright sunglint captured by the forward-viewing cameras. Contrasting angular signatures, most likely associated with variations in the orientation and slope of wind-driven surface waves, are apparent in the sunglint patterns. Details of human activities are visible in these images. Near the top center, the Chesapeake Bay Bridge-Tunnel complex, which links Norfolk with Virginia's eastern shore, can be seen. The locations of two tunnels which route automobiles below the water appear as gaps in the visible roadway. In the top image, the small white specks in the open waters of the Atlantic Ocean are ship wakes. The movements of the ships have been visualized by displaying the views from MISR's four backward-viewing cameras in an animated sequence (below). These cameras successively observe the same surface locations over a time interval of about 160 seconds. The large version of the animation covers an area of 135 kilometers x 130 kilometers. The land area on the left-hand side includes the birthplace of aviation, Kitty Hawk, where the Wright Brothers made their first sustained, powered flight in 1903. [figure removed for brevity, see original site] MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
EVA 2 activity on Flight Day 5 to service the Hubble Space Telescope
1997-02-15
S82-E-5429 (15 Feb. 1997) --- Astronauts Gregory J. Harbaugh (left) and Joseph R. Tanner (right) during Multi Layer Insulation (MLI) inspection in Bay 10. This view was taken with an Electronic Still Camera (ESC).
Aerial multi-camera systems: Accuracy and block triangulation issues
NASA Astrophysics Data System (ADS)
Rupnik, Ewelina; Nex, Francesco; Toschi, Isabella; Remondino, Fabio
2015-03-01
Oblique photography has reached its maturity and has now been adopted for several applications. The number and variety of multi-camera oblique platforms available on the market is continuously growing. So far, few attempts have been made to study the influence of the additional cameras on the behaviour of the image block and comprehensive revisions to existing flight patterns are yet to be formulated. This paper looks into the precision and accuracy of 3D points triangulated from diverse multi-camera oblique platforms. Its coverage is divided into simulated and real case studies. Within the simulations, different imaging platform parameters and flight patterns are varied, reflecting both current market offerings and common flight practices. Attention is paid to the aspect of completeness in terms of dense matching algorithms and 3D city modelling - the most promising application of such systems. The experimental part demonstrates the behaviour of two oblique imaging platforms in real-world conditions. A number of Ground Control Point (GCP) configurations are adopted in order to point out the sensitivity of tested imaging networks and arising block deformations. To stress the contribution of slanted views, all scenarios are compared against a scenario in which exclusively nadir images are used for evaluation.
Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera
NASA Astrophysics Data System (ADS)
Jhan, Jyun-Ping; Rau, Jiann-Yeou; Haala, Norbert
2018-03-01
Utilizing miniature multispectral (MS) or hyperspectral (HS) cameras by mounting them on an Unmanned Aerial System (UAS) has the benefits of convenience and flexibility to collect remote sensing imagery for precision agriculture, vegetation monitoring, and environment investigation applications. Most miniature MS cameras adopt a multi-lens structure to record discrete MS bands of visible and invisible information. The differences in lens distortion, mounting positions, and viewing angles among lenses mean that the acquired original MS images have significant band misregistration errors. We have developed a Robust and Adaptive Band-to-Band Image Transform (RABBIT) method for dealing with the band co-registration of various types of miniature multi-lens multispectral cameras (Mini-MSCs) to obtain band co-registered MS imagery for remote sensing applications. The RABBIT utilizes modified projective transformation (MPT) to transfer the multiple image geometry of a multi-lens imaging system to one sensor geometry, and combines this with a robust and adaptive correction (RAC) procedure to correct several systematic errors and to obtain sub-pixel accuracy. This study applies three state-of-the-art Mini-MSCs to evaluate the RABBIT method's performance, specifically the Tetracam Miniature Multiple Camera Array (MiniMCA), Micasense RedEdge, and Parrot Sequoia. Six MS datasets acquired at different target distances, dates, and locations are also used to prove its reliability and applicability. Results prove that RABBIT is feasible for different types of Mini-MSCs with accurate, robust, and rapid image processing.
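The core of band-to-band co-registration is resampling one band into another band's geometry through a projective transform. The sketch below uses nearest-neighbour sampling and a toy shift matrix; the full RABBIT method additionally models per-lens distortion and refines the transform adaptively:

```python
import numpy as np

def warp_band(band, H):
    """Resample one spectral band into a reference band's pixel geometry.

    H maps destination pixel coordinates (x, y, 1) to source coordinates.
    Nearest-neighbour sampling keeps the sketch short; out-of-bounds
    destination pixels are filled with zeros.
    """
    h, w = band.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dest = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = H @ dest
    sx = np.rint(src[0] / src[2]).astype(int)
    sy = np.rint(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(band)
    out.ravel()[valid] = band[sy[valid], sx[valid]]
    return out

# A pure 1-pixel horizontal shift expressed as a projective transform.
H = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
band = np.arange(16.0).reshape(4, 4)
print(warp_band(band, H))
```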
A multi-criteria approach to camera motion design for volume data animation.
Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu
2013-12-01
We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision-free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data, the collision-free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.
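The multi-criteria idea above can be illustrated as a weighted sum of per-path scoring functions, so the influence of each criterion on path selection is explicit. The criterion below (a smoothness penalty) and all names are illustrative assumptions, not the paper's solver:

```python
def path_score(path, criteria, weights):
    """Score a candidate camera path as a weighted sum of named criteria.

    `criteria` maps a name to a per-path scoring function; adjusting
    `weights` shows how each criterion shifts which path wins.
    """
    return sum(weights[name] * fn(path) for name, fn in criteria.items())

def smoothness(path):
    """Penalize sharp direction changes between consecutive 2-D segments."""
    penalty = 0.0
    for a, b, c in zip(path, path[1:], path[2:]):
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        penalty += abs(v1[0] * v2[1] - v1[1] * v2[0])  # turning magnitude
    return -penalty  # smoother paths score higher

criteria = {"smooth": smoothness}
weights = {"smooth": 1.0}
straight = [(0, 0), (1, 0), (2, 0)]
bent = [(0, 0), (1, 0), (1, 1)]
print(path_score(straight, criteria, weights) >
      path_score(bent, criteria, weights))  # True
```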
Huveneers, Charlie; Fairweather, Peter G.
2018-01-01
Counting errors can bias assessments of species abundance and richness, which can affect assessments of stock structure, population structure and monitoring programmes. Many methods for studying ecology use fixed viewpoints (e.g. camera traps, underwater video), but there is little known about how this biases the data obtained. In the marine realm, most studies using baited underwater video, a common method for monitoring fish and nekton, have previously only assessed fishes using a single bait-facing viewpoint. To investigate the biases stemming from using fixed viewpoints, we added cameras to cover 360° views around the units. We found similar species richness for all observed viewpoints but the bait-facing viewpoint recorded the highest fish abundance. Sightings of infrequently seen and shy species increased with the additional cameras and the extra viewpoints allowed the abundance estimates of highly abundant schooling species to be up to 60% higher. We specifically recommend the use of additional cameras for studies focusing on shyer species or those particularly interested in increasing the sensitivity of the method by avoiding saturation in highly abundant species. Studies may also benefit from using additional cameras to focus observation on the downstream viewpoint. PMID:29892386
ETR AND MTR COMPLEXES IN CONTEXT. CAMERA FACING NORTHERLY. FROM ...
ETR AND MTR COMPLEXES IN CONTEXT. CAMERA FACING NORTHERLY. FROM BOTTOM TO TOP: ETR COOLING TOWER, ELECTRICAL BUILDING AND LOW-BAY SECTION OF ETR BUILDING, HEAT EXCHANGER BUILDING (WITH U SHAPED YARD), COMPRESSOR BUILDING. MTR REACTOR SERVICES BUILDING IS ATTACHED TO SOUTH WALL OF MTR. WING A IS ATTACHED TO BALCONY FLOOR OF MTR. NEAR UPPER RIGHT CORNER OF VIEW IS MTR PROCESS WATER BUILDING. WING B IS AT FAR WEST END OF COMPLEX. NEAR MAIN GATE IS GAMMA FACILITY, WITH "COLD" BUILDINGS BEYOND: RAW WATER STORAGE TANKS, STEAM PLANT, MTR COOLING TOWER PUMP HOUSE AND COOLING TOWER. INL NEGATIVE NO. 56-4101. - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Clouds and Ice of the Lambert-Amery System, East Antarctica
NASA Technical Reports Server (NTRS)
2002-01-01
These views from the Multi-angle Imaging SpectroRadiometer (MISR) illustrate ice surface textures and cloud-top heights over the Amery Ice Shelf/Lambert Glacier system in East Antarctica on October 25, 2002. The left-hand panel is a natural-color view from MISR's downward-looking (nadir) camera. The center panel is a multi-angular composite from three MISR cameras, in which color acts as a proxy for angular reflectance variations related to texture. Here, data from the red band of MISR's 60° forward-viewing, nadir and 60° backward-viewing cameras are displayed as red, green and blue, respectively. With this display technique, surfaces which predominantly exhibit backward-scattering (generally rough surfaces) appear red/orange, while surfaces which predominantly exhibit forward-scattering (generally smooth surfaces) appear blue. Textural variation for both the grounded and sea ice are apparent. The red/orange pixels in the lower portion of the image correspond with a rough and crevassed region near the grounding zone, that is, the area where the Lambert and four other smaller glaciers merge and the ice starts to float as it forms the Amery Ice Shelf. In the natural-color view, this rough ice is spectrally blue in color. Clouds exhibit both forward and backward-scattering properties in the middle panel and thus appear purple, in distinct contrast with the underlying ice and snow. An additional multi-angular technique for differentiating clouds from ice is shown in the right-hand panel, which is a stereoscopically derived height field retrieved using automated pattern recognition involving data from multiple MISR cameras. Areas exhibiting insufficient spatial contrast for stereoscopic retrieval are shown in dark gray. Clouds are apparent as a result of their heights above the surface terrain. Polar clouds are an important factor in weather and climate.
Inadequate characterization of cloud properties is currently responsible for large uncertainties in climate prediction models. Identification of polar clouds, mapping of their distributions, and retrieval of their heights provide information that will help to reduce this uncertainty. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire Earth between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 15171. The panels cover an area of 380 kilometers x 984 kilometers, and utilize data from blocks 145 to 151 within World Reference System-2 path 127. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Two Perspectives on Forest Fire
NASA Technical Reports Server (NTRS)
2002-01-01
Multi-angle Imaging Spectroradiometer (MISR) images of smoke plumes from wildfires in western Montana acquired on August 14, 2000. A portion of Flathead Lake is visible at the top, and the Bitterroot Range traverses the images. The left view is from MISR's vertical-viewing (nadir) camera. The right view is from the camera that looks forward at a steep angle (60 degrees). The smoke location and extent are far more visible when seen at this highly oblique angle. However, vegetation is much darker in the forward view. A brown burn scar is located nearly in the exact center of the nadir image, while in the high-angle view it is shrouded in smoke. Also visible in the center and upper right of the images, and more obvious in the clearer nadir view, are checkerboard patterns on the surface associated with land ownership boundaries and logging. Compare these images with the high resolution infrared imagery captured nearby by Landsat 7 half an hour earlier. Images by NASA/GSFC/JPL, MISR Science Team.
HIGH SPEED KERR CELL FRAMING CAMERA
Goss, W.C.; Gilley, L.F.
1964-01-01
The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 x 10^-8 seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)
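The delay-line arithmetic implied by the abstract is simple: with 6 frames spread evenly over 9 x 10^-8 seconds (equal inter-frame spacing is an assumption here, for illustration), each successive channel must add one light-travel increment of extra path. A minimal sketch:

```python
# Timing sketch for the optical delay relay (numbers from the abstract;
# equal inter-frame spacing is an illustrative assumption).
C = 2.998e8            # speed of light in vacuum, m/s
N_FRAMES = 6
TOTAL_WINDOW = 9e-8    # seconds: all 6 frames fall within this interval

frame_interval = TOTAL_WINDOW / N_FRAMES   # time between successive frames
delta_path = C * frame_interval            # extra optical path per channel, metres

# Channel k travels k times the first channel's extra path length,
# so its image arrives k * frame_interval later at the Kerr cell.
delays = [k * frame_interval for k in range(N_FRAMES)]
path_lengths = [C * d for d in delays]
```

With these figures each channel adds roughly 4.5 m of optical path, which explains why a folded multi-channel relay (rather than moving parts) is the natural design.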
SPERT1. Contextual aerial view of SPERTI Reactor Pit Building (PER605) ...
SPERT-1. Contextual aerial view of SPERT-I Reactor Pit Building (PER-605) at top of view, and its accessories: the earth-shielded instrument cell (PER-606) immediately adjacent to it; the Guard House (PER-607) to its right; and the Terminal Building in lower center of view (PER-604). Camera faces west. Road and buried line leaving view at right lead to Control Building (PER-601) out of view. Sagebrush vegetation has been scraped from around buildings. Photographer: R.G. Larsen. Date: June 6, 1955. INEEL negative no. 55-1477. - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
2016-09-05
Saturn's rings appear to bend as they pass behind the planet's darkened limb due to refraction by Saturn's upper atmosphere. The effect is the same as that seen in an earlier Cassini view (see PIA20491), except this view looks toward the unlit face of the rings, while the earlier image viewed the rings' sunlit side. The difference in illumination brings out some noticeable differences. The A ring is much darker here, on the rings' unlit face, since its larger particles primarily reflect light back toward the sun (and away from Cassini's cameras in this view). The narrow F ring (at bottom), which was faint in the earlier image, appears brighter than all of the other rings here, thanks to the microscopic dust that is prevalent within that ring. Small dust tends to scatter light forward (meaning close to its original direction of travel), making it appear bright when backlit. (A similar effect has plagued many a driver with a dusty windshield when driving toward the sun.) This view looks toward the unilluminated side of the rings from about 19 degrees below the ring plane. The image was taken in red light with the Cassini spacecraft narrow-angle camera on July 24, 2016. The view was acquired at a distance of approximately 527,000 miles (848,000 kilometers) from Saturn and at a sun-Saturn-spacecraft, or phase, angle of 169 degrees. Image scale is 3 miles (5 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20497
From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth
2015-08-05
This animation still image shows the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth - one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).
Cross-modal face recognition using multi-matcher face scores
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2015-05-01
The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies were implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with three cross-matched face scores from the aforementioned three algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84% and FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces using three face scores and the BLR classifier.
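The score-level fusion step can be sketched in a few lines: each sample is a 3-element score vector (one cross-matched score per algorithm) fed to a classifier. The data below is synthetic and the k-NN classifier and leave-one-out evaluation stand in for the paper's classifiers and 10-fold cross-validation; everything here is illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: genuine cross-modal matches score high, impostor
# matches score low, on each of the three algorithms' score axes.
genuine = rng.normal(0.8, 0.1, size=(50, 3))
impostor = rng.normal(0.4, 0.1, size=(50, 3))
X = np.vstack([genuine, impostor])
y = np.array([1] * 50 + [0] * 50)

def knn_predict(X_train, y_train, x, k=5):
    """Classify one score vector by majority vote of its k nearest neighbours."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return int(nearest.sum() * 2 > k)

# Leave-one-out evaluation (the paper uses 10-fold cross-validation).
correct = sum(
    knn_predict(np.delete(X, i, axis=0), np.delete(y, i), X[i]) == y[i]
    for i in range(len(y))
)
accuracy = correct / len(y)
```

The point of fusing three algorithm scores into one vector is that a classifier can learn a decision boundary jointly over all three, rather than thresholding each score independently.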
2013-09-01
Ground testing of prototype hardware and processing algorithms for a Wide Area Space Surveillance System (WASSS)
Neil Goldstein, Rainer A...
...at Magdalena Ridge Observatory using the prototype Wide Area Space Surveillance System (WASSS) camera, which has a 4 x 60 field-of-view, < 0.05... objects with larger-aperture cameras. The sensitivity of the system depends on multi-frame averaging and a Principal Component Analysis based image...
NASA Astrophysics Data System (ADS)
de Villiers, Jason P.; Bachoo, Asheer K.; Nicolls, Fred C.; le Roux, Francois P. J.
2011-05-01
Tracking targets in a panoramic image is in many senses the inverse of tracking targets with a narrow field of view camera on a pan-tilt pedestal. In a narrow field of view camera tracking a moving target, the object is constant and the background is changing. A panoramic camera is able to model the entire scene, or background, and those areas it cannot model well are the potential targets, which typically subtend far fewer pixels in the panoramic view than in the narrow field of view. The outputs of an outward-staring array of calibrated machine vision cameras are stitched into a single omnidirectional panorama and used to observe False Bay near Simon's Town, South Africa. A ground-truth data-set was created by geo-aligning the camera array and placing a differential global positioning system receiver on a small target boat, thus allowing its position in the array's field of view to be determined. Common tracking techniques including level-sets, Kalman filters and particle filters were implemented to run on the central processing unit of the tracking computer. Image enhancement techniques including multi-scale tone mapping, interpolated local histogram equalisation and several sharpening techniques were implemented on the graphics processing unit. An objective measurement of each tracking algorithm's robustness in the presence of sea-glint, low-contrast visibility and sea clutter (such as white caps) is performed on the raw recorded video data. These results are then compared to those obtained with the enhanced video data.
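The "model the background, flag what the model cannot explain" idea can be sketched with a running-average background model (a much simpler model than the paper's; the data and thresholds below are invented for illustration):

```python
import numpy as np

def detect_anomalies(frames, alpha=0.05, thresh=30.0):
    """Flag pixels a running-average background model cannot explain.

    frames : iterable of 2-D grayscale arrays (panorama tiles)
    alpha  : background update rate
    thresh : absolute-difference threshold for 'not background'
    Returns the final foreground mask (True = potential target).
    """
    frames = iter(frames)
    background = next(frames).astype(float)
    mask = np.zeros_like(background, dtype=bool)
    for frame in frames:
        frame = frame.astype(float)
        mask = np.abs(frame - background) > thresh
        # Update the model only where the scene looks like background,
        # so a small target is not absorbed into the model.
        background[~mask] += alpha * (frame - background)[~mask]
    return mask

# Tiny synthetic check: static sea background with a bright "boat" patch.
rng = np.random.default_rng(1)
base = rng.normal(100, 2, size=(40, 40))
seq = [base + rng.normal(0, 1, size=base.shape) for _ in range(10)]
seq[-1] = seq[-1].copy()
seq[-1][20:24, 20:24] += 80.0   # target appears in the last frame
fg = detect_anomalies(seq)
```

A target subtending only a few pixels still produces a clean detection here because the background is stable; the paper's glint, clutter and contrast conditions are exactly what breaks this naive version.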
Multi-viewer tracking integral imaging system and its viewing zone analysis.
Park, Gilbae; Jung, Jae-Hyun; Hong, Keehoon; Kim, Yunhee; Kim, Young-Hoon; Min, Sung-Wook; Lee, Byoungho
2009-09-28
We propose a multi-viewer tracking integral imaging system for viewing angle and viewing zone improvement. In the tracking integral imaging system, the pickup angles of each elemental lens in the lens array are decided by the positions of the viewers, which means the elemental image can be made for each viewer to provide a wider viewing angle and a larger viewing zone. Our tracking integral imaging system is implemented with an infrared camera and infrared light emitting diodes, which can track the viewers' exact positions robustly. For multiple viewers to watch integrated three-dimensional images in the tracking integral imaging system, it is necessary to formulate the relationship between the multiple viewers' positions and the elemental images. We analyzed the relationship and the conditions for multiple viewers, and verified them by the implementation of a two-viewer tracking integral imaging system.
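The core viewer-to-elemental-image relationship is similar-triangle geometry. The sketch below is an idealized 1-D version under assumptions not taken from the paper (a thin-lens array with the display a fixed gap behind it, and a levelled coordinate frame); the function name and parameters are hypothetical:

```python
def elemental_image_center(lens_x, viewer_x, viewer_z, gap):
    """Centre of the elemental image behind one lens for a tracked viewer.

    Similar triangles: the ray from the viewer through the lens centre
    hits the display plane (a distance `gap` behind the lens array) at
    the point the elemental image should be centred on. Lengths in mm.
    """
    return lens_x + (lens_x - viewer_x) * gap / viewer_z

# A viewer straight in front of a lens sees an unshifted elemental image.
centre_on_axis = elemental_image_center(lens_x=0.0, viewer_x=0.0,
                                        viewer_z=500.0, gap=3.0)
# Moving the viewer to the right shifts the elemental image to the left.
centre_off_axis = elemental_image_center(lens_x=0.0, viewer_x=100.0,
                                         viewer_z=500.0, gap=3.0)
```

Generating one such shifted elemental image per tracked viewer is what lets the system widen the effective viewing angle without a denser lens array.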
Center of parcel with picture tube wall along walkway. Leaning ...
Center of parcel with picture tube wall along walkway. Leaning Tower of Bottle Village at frame right; oblique view of Rumpus Room, remnants of Little Hut destroyed by Northridge earthquake at frame left. Camera facing northeast. - Grandma Prisbrey's Bottle Village, 4595 Cochran Street, Simi Valley, Ventura County, CA
7. WASTE CALCINING FACILITY, LOOKING AT NORTH END OF BUILDING. ...
7. WASTE CALCINING FACILITY, LOOKING AT NORTH END OF BUILDING. CAMERA FACING SOUTH. TENT-ROOFED COVER IN RIGHT OF VIEW IS A TEMPORARY WEATHER-PROOFING SHELTER OVER THE BLOWER PIT IN CONNECTION WITH DEMOLITION PROCEDURES. SMALL BUILDING CPP-667 IN CENTER OF VIEW WAS USED FOR SUPPLEMENTARY OFFICE SPACE BY HEALTH PHYSICISTS AND OTHERS. INEEL PROOF SHEET NOT NUMBERED. - Idaho National Engineering Laboratory, Old Waste Calcining Facility, Scoville, Butte County, ID
Snowstorm Along the China-Mongolia-Russia Borders
NASA Technical Reports Server (NTRS)
2004-01-01
Heavy snowfall on March 12, 2004, across north China's Inner Mongolia Autonomous Region, Mongolia and Russia, caused train and highway traffic to stop for several days along the Russia-China border. This pair of images from the Multi-angle Imaging SpectroRadiometer (MISR) highlights the snow and surface properties across the region on March 13. The left-hand image is a multi-spectral false-color view made from the near-infrared, red, and green bands of MISR's vertical-viewing (nadir) camera. The right-hand image is a multi-angle false-color view made from the red band data of the 46-degree aftward camera, the nadir camera, and the 46-degree forward camera. About midway between the frozen expanse of China's Hulun Nur Lake (along the right-hand edge of the images) and Russia's Torey Lakes (above image center) is a dark linear feature that corresponds with the China-Mongolia border. In the upper portion of the images, many small plumes of black smoke rise from coal and wood fires and blow toward the southeast over the frozen lakes and snow-covered grasslands. Along the upper left-hand portion of the images, in Russia's Yablonovyy mountain range and the Onon River Valley, the terrain becomes more hilly and forested. In the nadir image, vegetation appears in shades of red, owing to its high near-infrared reflectivity. In the multi-angle composite, open-canopy forested areas are indicated by green hues. Since this is a multi-angle composite, the green color arises not from the color of the leaves but from the architecture of the surface cover. The green areas appear brighter at the nadir angle than at the oblique angles because more of the snow-covered surface in the gaps between the trees is visible. Color variations in the multi-angle composite also indicate angular reflectance properties for areas covered by snow and ice. 
The light blue color of the frozen lakes is due to the increased forward scattering of smooth ice, and light orange colors indicate rougher ice or snow, which scatters more light in the backward direction. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire Earth between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 22525. The panels cover an area of about 355 kilometers x 380 kilometers, and utilize data from blocks 50 to 52 within World Reference System-2 path 126. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
DEMINERALIZER BUILDING, TRA608. CAMERA IS ON RAW WATER TOWER AND ...
DEMINERALIZER BUILDING, TRA-608. CAMERA IS ON RAW WATER TOWER AND FACES WEST. STEAM PLANT, TRA-609, AT UPPER EDGE OF VIEW. ABSENCE OF ROOF EXPOSES FIVE-BAY STRUCTURE AND INTERIOR DIVISION OF SPACE. CORRIDOR AT WEST END OF BUILDING WILL SEPARATE LABORATORY AND OFFICE SPACE FROM POTABLE WATER TANKS. ALONG NORTH WALL ARE SPACES FOR CATION AND ANION EXCHANGE UNITS. PENTHOUSE WILL ENCLOSE DEGASSIFIER. TANK AT LEFT (SOUTH) OF BUILDING STORES DEMINERALIZED WATER. NOTE BRINE STORAGE PIT, TRA-631, AT RIGHT OF VIEW, ABOVE PAIR OF CAUSTIC STORAGE TANKS. NOTE TRENCHES FOR BURIED WATER PIPES. INL NEGATIVE NO. 2732. Unknown Photographer, 6/29/1951 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Multifunctional microcontrollable interface module
NASA Astrophysics Data System (ADS)
Spitzer, Mark B.; Zavracky, Paul M.; Rensing, Noa M.; Crawford, J.; Hockman, Angela H.; Aquilino, P. D.; Girolamo, Henry J.
2001-08-01
This paper reports the development of a complete eyeglass-mounted computer interface system including display, camera and audio subsystems. The display system provides an SVGA image with a 20 degree horizontal field of view. The camera system has been optimized for face recognition and provides a 19 degree horizontal field of view. A microphone and built-in pre-amp optimized for voice recognition and a speaker on an articulated arm are included for audio. An important feature of the system is a high degree of adjustability and reconfigurability. The system has been developed for testing by the Military Police, in a complete system comprising the eyeglass-mounted interface, a wearable computer, and an RF link. Details of the design, construction, and performance of the eyeglass-based system are discussed.
Multi-energy SXR cameras for magnetically confined fusion plasmas (invited).
Delgado-Aparicio, L F; Maddox, J; Pablant, N; Hill, K; Bitter, M; Rice, J E; Granetz, R; Hubbard, A; Irby, J; Greenwald, M; Marmar, E; Tritz, K; Stutman, D; Stratton, B; Efthimion, P
2016-11-01
A compact multi-energy soft x-ray camera has been developed for time-, energy- and space-resolved measurements of the soft-x-ray emissivity in magnetically confined fusion plasmas. Multi-energy soft x-ray imaging provides a unique opportunity for measuring, simultaneously, a variety of important plasma properties (T_e, n_Z, ΔZ_eff, and n_e,fast). The electron temperature can be obtained by modeling the slope of the continuum radiation from ratios of the available brightness and inverted radial emissivity profiles over multiple energy ranges. Impurity density measurements are also possible using the line emission from medium- to high-Z impurities to separate the background as well as transient levels of metal contributions. This technique should also be explored as a burning plasma diagnostic in view of its simplicity and robustness.
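The slope-of-the-continuum idea reduces, in its most idealized form, to two-band arithmetic: for bremsstrahlung the emissivity scales roughly as exp(-E/T_e), so the ratio of two narrow bands yields T_e directly. The sketch below uses that idealized relation only (real analysis folds in detector response and inverted emissivity profiles, as the abstract notes):

```python
import math

def electron_temperature_keV(b1, b2, e1_keV, e2_keV):
    """Estimate T_e from the brightness ratio of two soft-x-ray bands.

    Idealized bremsstrahlung continuum: B(E) ~ exp(-E / T_e), so for two
    narrow bands at mean energies E1 < E2,
        T_e = (E2 - E1) / ln(B1 / B2).
    """
    return (e2_keV - e1_keV) / math.log(b1 / b2)

# Round-trip check with synthetic brightnesses generated at T_e = 2 keV.
te_true = 2.0
b_low = math.exp(-3.0 / te_true)   # band centred at 3 keV
b_high = math.exp(-5.0 / te_true)  # band centred at 5 keV
te_est = electron_temperature_keV(b_low, b_high, 3.0, 5.0)
```

Because the method needs only ratios, absolute detector calibration partially cancels, which is part of the robustness the abstract claims.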
Multi-pinhole collimator design for small-object imaging with SiliSPECT: a high-resolution SPECT
NASA Astrophysics Data System (ADS)
Shokouhi, S.; Metzler, S. D.; Wilson, D. W.; Peterson, T. E.
2009-01-01
We have designed a multi-pinhole collimator for a dual-headed, stationary SPECT system that incorporates high-resolution silicon double-sided strip detectors. The compact camera design of our system enables imaging at source-collimator distances between 20 and 30 mm. Our analytical calculations show that using knife-edge pinholes with small-opening angles or cylindrically shaped pinholes in a focused, multi-pinhole configuration in combination with this camera geometry can generate narrow sensitivity profiles across the field of view that can be useful for imaging small objects at high sensitivity and resolution. The current prototype system uses two collimators each containing 127 cylindrically shaped pinholes that are focused toward a target volume. Our goal is imaging objects such as a mouse brain, which could find potential applications in molecular imaging.
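The sensitivity argument behind the short 20-30 mm source-collimator distance can be sketched with the standard geometric pinhole-sensitivity approximation. Note the formula below is the textbook knife-edge point-source relation, not the paper's analytical model (which also treats cylindrical bores and small opening angles):

```python
import math

def pinhole_sensitivity(d_mm, h_mm, theta_rad):
    """Geometric sensitivity of an ideal knife-edge pinhole.

    Standard approximation g = d^2 * cos^3(theta) / (16 h^2), with d the
    pinhole diameter, h the perpendicular source-to-pinhole distance and
    theta the off-axis angle. Penetration through the collimator material
    is ignored.
    """
    return d_mm**2 * math.cos(theta_rad)**3 / (16.0 * h_mm**2)

# Sensitivity falls ~4x when the source distance doubles (20 -> 40 mm),
# which is why the compact camera geometry pays off for small objects.
g_near = pinhole_sensitivity(d_mm=0.5, h_mm=20.0, theta_rad=0.0)
g_far = pinhole_sensitivity(d_mm=0.5, h_mm=40.0, theta_rad=0.0)
```

The cos^3(theta) falloff is also what makes a focused multi-pinhole arrangement attractive: many pinholes aimed at the same target volume keep each pinhole near its own axis.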
CONTEXTUAL AERIAL VIEW OF "EXCLUSION" MTR AREA WITH IDAHO CHEMICAL ...
CONTEXTUAL AERIAL VIEW OF "EXCLUSION" MTR AREA WITH IDAHO CHEMICAL PROCESSING PLANT IN BACKGROUND AT CENTER TOP OF VIEW. CAMERA FACING EAST. EXCLUSION GATE HOUSE AT LEFT OF VIEW. BEYOND MTR BUILDING AND ITS WING, THE PROCESS WATER BUILDING AND WORKING RESERVOIR ARE LEFT-MOST. FAN HOUSE AND STACK ARE TO ITS RIGHT. PLUG STORAGE BUILDING IS RIGHT-MOST STRUCTURE. NOTE FAN LOFT ABOVE MTR BUILDING'S ONE-STORY WING. THIS WAS LATER CONVERTED FOR OFFICES. INL NEGATIVE NO. 3610. Unknown Photographer, 10/30/1951 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
MATERIALS TESTING REACTOR (MTR) BUILDING, TRA603. CONTEXTUAL VIEW OF MTR ...
MATERIALS TESTING REACTOR (MTR) BUILDING, TRA-603. CONTEXTUAL VIEW OF MTR BUILDING SHOWING NORTH SIDES OF THE HIGH-BAY REACTOR BUILDING, ITS SECOND/THIRD FLOOR BALCONY LEVEL, AND THE ATTACHED ONE-STORY OFFICE/LABORATORY BUILDING, TRA-604. CAMERA FACING SOUTHEAST. VERTICAL CONCRETE-SHROUDED BEAMS SUPPORT PRECAST CONCRETE PANELS. CONCRETE PROJECTION FORMED AS A BUNKER AT LEFT OF VIEW IS TRA-657, PLUG STORAGE BUILDING. INL NEGATIVE NO. HD46-42-1. Mike Crane, Photographer, 4/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum †
Kresnaraman, Brahmastro; Deguchi, Daisuke; Takahashi, Tomokazu; Mekada, Yoshito; Ide, Ichiro; Murase, Hiroshi
2016-01-01
During the night or in poorly lit areas, thermal cameras are a better choice than normal cameras for security surveillance because they do not rely on illumination. A thermal camera is able to detect a person within its view, but identification from thermal information alone is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA). The reconstruction is done by utilizing the relationship between images in the thermal infrared and visible spectra obtained by CCA. The whole image is processed in the first step, while the second step processes patches in an image. Results show that the proposed method gives satisfying results with the two-step approach and outperforms comparative methods in both quality and recognition evaluations. PMID:27110781
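The CCA machinery that links the two spectra can be sketched in plain NumPy: whiten each view, take the SVD of the cross-covariance, and read off paired projection directions. Everything below (dimensions, noise levels, the one-step structure) is synthetic and illustrative; the paper applies CCA to face images in two steps, whole-image then patch-wise:

```python
import numpy as np

def cca_fit(X, Y, reg=1e-6):
    """Fit CCA between paired feature sets X ('thermal') and Y ('visible').

    Returns projection matrices Wx, Wy and the canonical correlations.
    Whiten each view via Cholesky, then SVD the whitened cross-covariance.
    """
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    Wxw = np.linalg.inv(np.linalg.cholesky(Cxx)).T   # whitening for X
    Wyw = np.linalg.inv(np.linalg.cholesky(Cyy)).T   # whitening for Y
    U, s, Vt = np.linalg.svd(Wxw.T @ Cxy @ Wyw)
    return Wxw @ U, Wyw @ Vt.T, s

# Synthetic paired data sharing a 2-D latent structure between "spectra".
rng = np.random.default_rng(2)
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 5)) + 0.05 * rng.normal(size=(200, 5))
Y = latent @ rng.normal(size=(2, 4)) + 0.05 * rng.normal(size=(200, 4))
Wx, Wy, corrs = cca_fit(X, Y)
```

Reconstruction then amounts to projecting a thermal sample into the shared canonical space and mapping back out through the visible-side directions.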
PROCESS WATER BUILDING, TRA605, INTERIOR. FIRST FLOOR. CAMERA IS IN ...
PROCESS WATER BUILDING, TRA-605, INTERIOR. FIRST FLOOR. CAMERA IS IN SOUTHEAST CORNER AND FACES NORTHWEST. CONTROL ROOM AT RIGHT. CRANE MONORAIL IS OVER FLOOR HATCHES AND FLOOR OPENINGS. SIX VALVE HANDWHEELS ALONG FAR WALL IN LEFT CENTER VIEW. SEAL TANK IS ON OTHER SIDE OF WALL; PROCESS WATER PIPES ARE BELOW VALVE WHEELS. NOTE CURBS AROUND FLOOR OPENINGS. INL NEGATIVE NO. HD46-26-3. Mike Crane, Photographer, 2/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Multi-stream face recognition for crime-fighting
NASA Astrophysics Data System (ADS)
Jassim, Sabah A.; Sellahewa, Harin
2007-04-01
Automatic face recognition (AFR) is a challenging task that is increasingly becoming the preferred biometric trait for identification and has the potential of becoming an essential tool in the fight against crime and terrorism. Closed-circuit television (CCTV) cameras have increasingly been used over the last few years for surveillance in public places such as airports, train stations and shopping centers. They are used to detect and prevent crime, shoplifting, public disorder and terrorism. The work of law-enforcement and intelligence agencies is becoming more reliant on the use of databases of biometric data for large sections of the population. The face is one of the most natural biometric traits that can be used for identification and surveillance. However, variations in lighting conditions, facial expressions, face size and pose are a great obstacle to AFR. This paper is concerned with using wavelet-based face recognition schemes in the presence of variations of expression and illumination. In particular, we will investigate the use of a combination of wavelet frequency channels for multi-stream face recognition, using various wavelet subbands as different face signal streams. The proposed schemes extend our recently developed face verification scheme for implementation on mobile devices. We shall present experimental results on the performance of our proposed schemes for a number of face databases, including a new AV database recorded on a PDA. By analyzing the various experimental data, we shall demonstrate that the multi-stream approach is more robust against variations in illumination and facial expression than the previous single-stream approach.
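The subband decomposition that feeds the multiple streams can be sketched with a hand-rolled one-level 2-D Haar transform (the simplest wavelet; the paper does not commit to Haar, so treat this as a stand-in):

```python
import numpy as np

def haar_subbands(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands.

    Each subband is one candidate 'stream' for a multi-stream matcher:
    the low-pass LL band is comparatively stable under illumination
    change, while the detail bands carry edge structure.
    """
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0
    lh = (a + b - c - d) / 4.0
    hl = (a - b + c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

# Each subband of a 64x64 "face" is 32x32; a constant image puts all of
# its energy in LL and none in the detail bands.
face = np.full((64, 64), 128.0)
ll, lh, hl, hh = haar_subbands(face)
```

A multi-stream matcher would score each subband independently and fuse the per-stream scores, so that a stream corrupted by lighting or expression change does not dominate the decision.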
1991-04-03
The USML-1 Glovebox (GBX) is a multi-user facility supporting 16 experiments in fluid dynamics, combustion sciences, crystal growth, and technology demonstration. The GBX has an enclosed working space which minimizes the contamination risks to both Spacelab and experiment samples. The GBX supports four charge-coupled device (CCD) cameras (two of which may be operated simultaneously) with three black-and-white and three color camera CCD heads available. The GBX also has a backlight panel, a 35 mm camera, and a stereomicroscope that offers high-magnification viewing of experiment samples. Video data can also be downlinked in real-time. The GBX also provides electrical power for experiment hardware, a time-temperature display, and cleaning supplies.
1995-08-29
The USML-1 Glovebox (GBX) is a multi-user facility supporting 16 experiments in fluid dynamics, combustion sciences, crystal growth, and technology demonstration. The GBX has an enclosed working space which minimizes the contamination risks to both Spacelab and experiment samples. The GBX supports four charge-coupled device (CCD) cameras (two of which may be operated simultaneously) with three black-and-white and three color camera CCD heads available. The GBX also has a backlight panel, a 35 mm camera, and a stereomicroscope that offers high-magnification viewing of experiment samples. Video data can also be downlinked in real-time. The GBX also provides electrical power for experiment hardware, a time-temperature display, and cleaning supplies.
Context View from 11' on ladder from southeast corner of ...
Context View from 11' on ladder from southeast corner of Bottle Village parcel, just inside fence. Doll Head Shrine at far left frame, Living Trailer (c.1960 "Spartanette") in center frame. Little Wishing Well at far right frame. Some shrines and small buildings were destroyed in the January 1994 Northridge earthquake, and only their perimeter walls and foundations exist. Camera facing north northwest. - Grandma Prisbrey's Bottle Village, 4595 Cochran Street, Simi Valley, Ventura County, CA
Mars Exploration Rover engineering cameras
Maki, J.N.; Bell, J.F.; Herkenhoff, K. E.; Squyres, S. W.; Kiely, A.; Klimesh, M.; Schwochert, M.; Litwin, T.; Willson, R.; Johnson, Aaron H.; Maimone, M.; Baumgartner, E.; Collins, A.; Wadsworth, M.; Elliot, S.T.; Dingizian, A.; Brown, D.; Hagerott, E.C.; Scherr, L.; Deen, R.; Alexander, D.; Lorre, J.
2003-01-01
NASA's Mars Exploration Rover (MER) Mission will place a total of 20 cameras (10 per rover) onto the surface of Mars in early 2004. Fourteen of the 20 cameras are designated as engineering cameras and will support the operation of the vehicles on the Martian surface. Images returned from the engineering cameras will also be of significant importance to the scientific community for investigative studies of rock and soil morphology. The Navigation cameras (Navcams, two per rover) are a mast-mounted stereo pair, each with a 45° square field of view (FOV) and an angular resolution of 0.82 milliradians per pixel (mrad/pixel). The Hazard Avoidance cameras (Hazcams, four per rover) are a body-mounted, front- and rear-facing set of stereo pairs, each with a 124° square FOV and an angular resolution of 2.1 mrad/pixel. The Descent camera (one per rover), mounted to the lander, has a 45° square FOV and will return images with spatial resolutions of ~4 m/pixel. All of the engineering cameras utilize broadband visible filters and 1024 x 1024 pixel detectors. Copyright 2003 by the American Geophysical Union.
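The angular resolutions quoted above translate directly into ground footprint per pixel via the small-angle approximation. A quick check using the abstract's own figures (the 10 m range is an arbitrary example distance):

```python
def ground_sample_mm(range_m, ifov_mrad):
    """Spatial footprint of one pixel at a given range.

    A pixel spanning ifov_mrad milliradians covers approximately
    range * ifov (small-angle approximation); metres times
    milliradians conveniently gives millimetres.
    """
    return range_m * ifov_mrad

# Figures from the abstract: Navcam 0.82 mrad/pixel, Hazcam 2.1 mrad/pixel.
navcam_at_10m = ground_sample_mm(10.0, 0.82)  # ~8.2 mm per pixel
hazcam_at_10m = ground_sample_mm(10.0, 2.1)   # ~21 mm per pixel
```

This is why the wide-FOV Hazcams, despite coarser angular resolution, are adequate for near-field hazard detection while the Navcams serve longer-range traverse planning.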
Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid
2016-06-13
Proposed and experimentally demonstrated is the CAOS-CMOS camera design, which combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
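The dynamic-range figures quoted (51.3 dB and 82.06 dB) follow the 20·log10 convention for optical intensity ratios. A small worked example with an assumed 10,000:1 contrast scene:

```python
import math

def dynamic_range_db(i_max, i_min):
    """Dynamic range in decibels: 20 * log10(brightest / dimmest).

    The 20x convention matches the dB scale on which the paper's
    51.3 dB and 82.06 dB figures are quoted.
    """
    return 20.0 * math.log10(i_max / i_min)

# A 10,000:1 contrast scene spans 80 dB on this scale -- beyond the CMOS
# sensor's rated 51.3 dB, but within the demonstrated 82.06 dB of the
# hybrid CAOS-CMOS camera.
scene_db = dynamic_range_db(10_000.0, 1.0)
```

Equivalently, each extra 20 dB of dynamic range buys one more decade of simultaneously resolvable brightness.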
PROCESS WATER BUILDING, TRA605, INTERIOR. FIRST FLOOR. ELECTRICAL EQUIPMENT IN ...
PROCESS WATER BUILDING, TRA-605, INTERIOR. FIRST FLOOR. ELECTRICAL EQUIPMENT IN LEFT HALF OF VIEW. CAMERA IS IN NORTHWEST CORNER FACING SOUTHEAST. INL NEGATIVE NO. HD46-27-1. Mike Crane, Photographer, 2/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Oblique Aerial Photography Tool for Building Inspection and Damage Assessment
NASA Astrophysics Data System (ADS)
Murtiyoso, A.; Remondino, F.; Rupnik, E.; Nex, F.; Grussenmeyer, P.
2014-11-01
Aerial photography has a long history of being employed for mapping purposes due to some of its main advantages, including large-area imaging from above and minimization of field work. In recent years, multi-camera aerial systems have become a practical sensor technology across a growing geospatial market, complementary to traditional vertical views. Multi-camera aerial systems capture not only the conventional nadir views, but also tilted images at the same time. In this paper, a particular use of such imagery in the field of building inspection as well as disaster assessment is addressed. The main idea is to inspect a building from four cardinal directions by using monoplotting functionalities. The developed application allows the user to measure building heights and distances and to digitize man-made structures, creating 3D surfaces and building models. The realized GUI is capable of identifying a building from several oblique points of view, as well as calculating the approximate height of buildings and ground distances and performing basic vectorization. The geometric accuracy of the results remains a function of several parameters, namely image resolution, quality of available parameters (DEM, calibration and orientation values), user expertise and measuring capability.
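The height-measurement idea can be sketched with planar ray geometry. This is a deliberately simplified version (levelled camera, flat ground, known horizontal distance); real monoplotting intersects each image ray with a DEM using the camera's full orientation:

```python
import math

def building_height(ground_distance_m, angle_base_deg, angle_top_deg):
    """Approximate building height from two elevation angles.

    With a levelled camera at known horizontal distance d from the
    facade, the base and roofline are seen at elevation angles a_base
    and a_top (negative = below the horizon), giving
        height = d * (tan(a_top) - tan(a_base)).
    """
    return ground_distance_m * (
        math.tan(math.radians(angle_top_deg))
        - math.tan(math.radians(angle_base_deg))
    )

# 100 m from the facade, base 5.7 degrees below the horizon and roofline
# 11.3 degrees above it: roughly a 30 m building.
h = building_height(100.0, -5.7, 11.3)
```

The error budget in the abstract (image resolution, DEM quality, orientation accuracy) enters precisely through the distance and the two measured angles.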
Real-time vehicle matching for multi-camera tunnel surveillance
NASA Astrophysics Data System (ADS)
Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried
2011-03-01
Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras, each observing dozens of vehicles, computational efficiency is essential for real-time performance. In this paper, we propose a low-complexity, yet highly accurate method for vehicle matching using vehicle signatures composed of Radon-transform-like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm by the camera software itself and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
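Projection-profile signatures are cheap to compute and compare: row sums and column sums of the vehicle patch are exactly the Radon projections at 0 and 90 degrees. A minimal sketch on synthetic patches (normalisation and correlation matching are reasonable choices, not necessarily the paper's exact scheme):

```python
import numpy as np

def signature(img):
    """Vehicle signature: horizontal and vertical projection profiles.

    Row sums and column sums of the grayscale vehicle patch, normalised
    so patches from cameras with different gain remain comparable.
    """
    h = img.sum(axis=1).astype(float)
    v = img.sum(axis=0).astype(float)
    return h / np.linalg.norm(h), v / np.linalg.norm(v)

def match_score(sig_a, sig_b):
    """Similarity of two signatures: mean correlation of their profiles."""
    return 0.5 * sum(float(np.corrcoef(a, b)[0, 1])
                     for a, b in zip(sig_a, sig_b))

rng = np.random.default_rng(3)
car = rng.uniform(0, 255, size=(32, 48))
same_car_other_camera = 0.8 * car + rng.normal(0, 5, size=car.shape)
different_car = rng.uniform(0, 255, size=(32, 48))

score_same = match_score(signature(car), signature(same_car_other_camera))
score_diff = match_score(signature(car), signature(different_car))
```

A 32x48 patch collapses to an 80-element signature, which is the data-link saving the abstract points to.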
SPERTI/PBF. Contextual aerial view after PBF had begun operating, but ...
SPERT-I/PBF. Contextual aerial view after PBF had begun operating, but prior to expansion of southwest corner of Reactor Building (PER-620). Camera facing northeast. Reactor Building in center of view. Cooling Tower (PER-720) to its left. Warehouse (PER-625) at lower left was built in 1966. SPERT-I Reactor Building (PER-605) and Instrument Cell Building (PER-604) at right of view. Buried cables and piping proceed from PBF toward lower edge of view to Control Building further south and out of view. Photographer: Farmer. Date: March 26, 1976. INEEL negative no. 76-1344 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
NASA Astrophysics Data System (ADS)
Freeman, S. E.; Freeman, L. A.
2016-02-01
Coral reef ecosystems face many anthropogenic threats. There are urgent requirements for improved monitoring and management. Conventional assessment methods using SCUBA are costly and prone to bias and under-sampling. Here, three approaches to understanding coral reef ecology are combined to aid the goal of enhanced passive monitoring in the future: statistical analysis of oceanographic habitats, remote cameras for nocturnal surveys of benthic fauna, and soundscape analysis in the context of oceanographic setting and ecological metrics collected in-situ. Hawaiian reefs from Kure Atoll to the island of Hawaii, an area spanning two oceanographic habitats, are assessed. Multivariate analysis of acoustic, remote camera, and in-situ observational data showed significant differences in more than 20 percent of ecological and acoustic variables when grouped by oceanic regime, suggesting that large-scale oceanography substantially influences local ecological states and associated soundscapes. Acoustic variables further delineated sites by island, suggesting local conditions influence the soundscape to a greater degree. While the number of invertebrates (with an emphasis on crustaceans and echinoderms) imaged using remote cameras correlated with a number of acoustic metrics, an increasingly higher correlation between invertebrate density and spectral level was observed as acoustic bands increased in frequency from 2 to 20 kHz. In turn, correlation was also observed between the number of predatory fish and sound levels above 2 kHz, suggesting a connection between the number of invertebrates, sound levels at higher frequencies, and the presence of their predators. Comparisons between sound recordings and diversity indices calculated from observational and remote camera data indicate that greater diversity in fishes and benthic invertebrates is associated with a larger change in sound levels between day and night. 
Interdisciplinary analyses provide a novel view of underwater ecology and can reveal new quantitative metrics that may be sampled more efficiently. These techniques may be used to detect subtle yet important shifts in ecosystem function, critical for effective marine resource management in the face of environmental changes that occur over multi-year timescales.
Preliminary Evaluation of a Commercial 360 Multi-Camera Rig for Photogrammetric Purposes
NASA Astrophysics Data System (ADS)
Teppati Losè, L.; Chiabrando, F.; Spanò, A.
2018-05-01
The research presented in this paper focuses on a preliminary evaluation of a 360 multi-camera rig: the possibilities of using the images acquired by the system in a photogrammetric workflow and for the creation of spherical images are investigated, and different tests and analyses are reported. Particular attention is dedicated to different operative approaches for the estimation of the interior orientation parameters of the cameras, from both an operative and a theoretical point of view. The consistency of the six cameras that compose the 360 system was analysed in depth, adopting a self-calibration approach in a commercial photogrammetric software solution. A 3D calibration field was designed and created, and several topographic measurements were performed in order to obtain a set of control points to enhance and control the photogrammetric process. The influence of the interior parameters of the six cameras was analysed both in the different phases of the photogrammetric workflow (reprojection errors on the single tie points, dense cloud generation, geometrical description of the surveyed object, etc.) and in the stitching of the different images into a single spherical panorama (some considerations on the influence of the camera parameters on the overall quality of the spherical image are also reported in this section).
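As a rough illustration of how interior orientation quality shows up as tie-point reprojection error, here is a minimal pinhole-model sketch; the focal length, principal point, and 3-D point below are illustrative values, not the paper's calibration results:

```python
import numpy as np

def project(point_3d, R, t, f, cx, cy):
    """Project a 3D point to pixel coordinates with a simple pinhole model.

    f, cx, cy are interior orientation parameters (focal length and
    principal point, in pixels); R, t are the exterior orientation."""
    p_cam = R @ point_3d + t                          # world -> camera frame
    x, y = p_cam[0] / p_cam[2], p_cam[1] / p_cam[2]   # perspective division
    return np.array([f * x + cx, f * y + cy])

def reprojection_error(observed_px, point_3d, R, t, f, cx, cy):
    """Euclidean distance (pixels) between observed and reprojected point."""
    return float(np.linalg.norm(observed_px - project(point_3d, R, t, f, cx, cy)))

# Sanity check: a point on the optical axis reprojects to the principal point.
R, t = np.eye(3), np.zeros(3)
err = reprojection_error(np.array([640.0, 480.0]),
                         np.array([0.0, 0.0, 5.0]), R, t, 1200.0, 640.0, 480.0)
```

In a self-calibration bundle adjustment, errors of this kind are minimized over all tie points while the interior parameters are refined.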
Dynamic Geometry Capture with a Multi-View Structured-Light System
2014-12-19
funding was never a problem during my studies. One of the best parts of my time at UC Berkeley has been working with colleagues within the Video and...scientific and medical applications such as quantifying improvement in physical therapy and measuring unnatural poses in ergonomic studies. Specifically... cases with limited scene texture. This direct generation of surface geometry provides us with a distinct advantage over multi-camera based systems. For
Multi-energy SXR cameras for magnetically confined fusion plasmas (invited)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delgado-Aparicio, L. F.; Maddox, J.; Pablant, N.
A compact multi-energy soft x-ray camera has been developed for time-, energy- and space-resolved measurements of the soft x-ray emissivity in magnetically confined fusion plasmas. Multi-energy soft x-ray imaging provides a unique opportunity for measuring, simultaneously, a variety of important plasma properties (Te, nZ, ΔZeff, and ne,fast). The electron temperature can be obtained by modeling the slope of the continuum radiation from ratios of the available brightness and inverted radial emissivity profiles over multiple energy ranges. Impurity density measurements are also possible using the line emission from medium- to high-Z impurities to separate the background as well as transient levels of metal contributions. As a result, this technique should also be explored as a burning plasma diagnostic in view of its simplicity and robustness.
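The slope method mentioned above can be sketched in a few lines, assuming a purely exponential bremsstrahlung continuum B(E) ∝ exp(-E/Te); the band energies and brightness values below are illustrative, not measurements from this instrument:

```python
import math

def electron_temperature(e1_kev, e2_kev, b1, b2):
    """Infer Te (keV) from the continuum brightness ratio of two SXR bands.

    Under the simplifying assumption B(E) ∝ exp(-E/Te), the ratio of two
    narrow bands centered at E1 < E2 satisfies ln(b1/b2) = (E2 - E1)/Te,
    so the slope of the continuum yields the electron temperature."""
    return (e2_kev - e1_kev) / math.log(b1 / b2)

# Example: bands at 2 and 4 keV whose brightness ratio is e correspond
# to Te = (4 - 2)/ln(e) = 2 keV.
te = electron_temperature(2.0, 4.0, math.e, 1.0)
```

The real diagnostic works with inverted radial emissivity profiles over several energy ranges rather than a single two-band ratio.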
Fisheye camera around view monitoring system
NASA Astrophysics Data System (ADS)
Feng, Cong; Ma, Xinjun; Li, Yuanyuan; Wu, Chenchen
2018-04-01
A 360-degree around view monitoring system is a key technology of advanced driver assistance systems; it helps the driver cover blind areas and has high application value. In this paper, we study the transformation relationships among multiple coordinate systems to generate a panoramic image in a unified car coordinate system. First, the panoramic image is divided into four regions. Using the parameters obtained by calibration, the pixels of the four fisheye images corresponding to the four sub-regions are mapped onto the constructed panoramic image. On the basis of the 2D around view monitoring system, a 3D version is realized by reconstructing the projection surface. We then compare the 2D and 3D around view schemes in the unified coordinate system; the 3D scheme overcomes the shortcomings of the traditional 2D scheme, such as a small visual field and prominent deformation of ground objects. Finally, the images collected by the fisheye cameras installed around the car body can be stitched into a 360-degree panoramic image.
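A minimal sketch of the per-pixel mapping step, assuming an equidistant fisheye model (r = f·θ) and illustrative intrinsic/extrinsic values rather than the paper's calibrated parameters:

```python
import numpy as np

def fisheye_pixel(p_car, R, t, f, cx, cy):
    """Map a point in the car coordinate system to a fisheye image pixel.

    R, t: car-frame -> camera-frame extrinsics (from calibration);
    f, cx, cy: fisheye intrinsics, equidistant projection r = f * theta.
    All values here are illustrative placeholders."""
    p = R @ p_car + t
    theta = np.arctan2(np.hypot(p[0], p[1]), p[2])   # angle off optical axis
    phi = np.arctan2(p[1], p[0])                     # azimuth in image plane
    r = f * theta                                    # equidistant model
    return np.array([cx + r * np.cos(phi), cy + r * np.sin(phi)])

# Sanity check: a point straight along the optical axis lands on the
# image centre (the principal point).
R, t = np.eye(3), np.zeros(3)
px = fisheye_pixel(np.array([0.0, 0.0, 1.0]), R, t, 300.0, 640.0, 480.0)
```

In the full system this mapping is evaluated once per panoramic pixel and cached as a lookup table, so stitching reduces to table lookups at runtime.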
A higher-speed compressive sensing camera through multi-diode design
NASA Astrophysics Data System (ADS)
Herman, Matthew A.; Tidman, James; Hewitt, Donna; Weston, Tyler; McMackin, Lenore
2013-05-01
Obtaining high frame rates is a challenge with compressive sensing (CS) systems that gather measurements in a sequential manner, such as the single-pixel CS camera. One strategy for increasing the frame rate is to divide the FOV into smaller areas that are sampled and reconstructed in parallel. Following this strategy, InView has developed a multi-aperture CS camera using an 8×4 array of photodiodes that essentially act as 32 individual simultaneously operating single-pixel cameras. Images reconstructed from each of the photodiode measurements are stitched together to form the full FOV. To account for crosstalk between the sub-apertures, novel modulation patterns have been developed to allow neighboring sub-apertures to share energy. Regions of overlap not only account for crosstalk energy that would otherwise be reconstructed as noise, but they also allow for tolerance in the alignment of the DMD to the lenslet array. Currently, the multi-aperture camera is built into a computational imaging workstation configuration useful for research and development purposes. In this configuration, modulation patterns are generated in a CPU and sent to the DMD via PCI express, which allows the operator to develop and change the patterns used in the data acquisition step. The sensor data is collected and then streamed to the workstation via an Ethernet or USB connection for the reconstruction step. Depending on the amount of data taken and the amount of overlap between sub-apertures, frame rates of 2-5 frames per second can be achieved. In a stand-alone camera platform, currently in development, pattern generation and reconstruction will be implemented on-board.
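The single-pixel measurement model behind this architecture can be sketched as follows; the 8x8 sub-aperture size and the use of a plain least-squares inversion (rather than a true sparse CS reconstruction) are simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64     # pixels in one sub-aperture (8x8, illustrative)
M = 64     # sequential measurements; fully sampled here so the demo inverts

x = rng.random(N)                            # unknown sub-aperture image
Phi = rng.choice([-1.0, 1.0], size=(M, N))   # DMD-style modulation patterns
y = Phi @ x                                  # one photodiode reading per pattern

# Recover the image from the measurement sequence. A real CS system would
# use M << N and a sparsity-promoting solver; least squares stands in here.
x_hat = np.linalg.lstsq(Phi, y, rcond=None)[0]
```

With 32 photodiodes running in parallel, each sub-aperture performs such a measurement sequence simultaneously, which is what raises the overall frame rate.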
Multi-scale auroral observations in Apatity: winter 2010-2011
NASA Astrophysics Data System (ADS)
Kozelov, B. V.; Pilgaev, S. V.; Borovkov, L. P.; Yurov, V. E.
2012-03-01
Routine observations of the aurora are conducted in Apatity by a set of five cameras: (i) an all-sky TV camera, Watec WAT-902K (1/2"CCD), with a Fujinon YV2.2 × 1.4A-SA2 lens; (ii) two monochromatic cameras, Guppy F-044B NIR (1/2"CCD), with Fujinon HF25HA-1B (1:1.4/25 mm) lenses for an 18° field of view and a 558 nm glass filter; (iii) two color cameras, Guppy F-044C NIR (1/2"CCD), with Fujinon DF6HA-1B (1:1.2/6 mm) lenses for a 67° field of view. The observational complex is aimed at investigating the spatial structure of the aurora, its scaling properties, and the vertical distribution in rayed forms. The cameras were installed on the main building of the Apatity division of the Polar Geophysical Institute and at the Apatity stratospheric range. The distance between these sites is nearly 4 km, so the identical monochromatic cameras can be used as a stereoscopic system. All cameras are accessible and operated remotely via the Internet. For the 2010-2011 winter season, the equipment was upgraded with special blocks for GPS time triggering, temperature control, and motorized pan-tilt rotation mounts. This paper presents the equipment, samples of observed events, and the web site with access to available data previews.
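As a rough illustration of how the two identical monochromatic cameras can act as a stereoscopic pair, the height of an auroral feature seen from both sites can be triangulated from its elevation angles; this assumes a simplified geometry with the feature in the vertical plane of the baseline, and the numbers are illustrative:

```python
import math

def feature_height(baseline_m, elev1_deg, elev2_deg):
    """Triangulate the height of a feature observed from two stations.

    Simplified two-station geometry: the feature lies in the vertical plane
    containing the baseline, so h = B / (cot(e1) - cot(e2)) for elevation
    angles e1 < e2 measured at the near and far stations."""
    cot = lambda a_deg: 1.0 / math.tan(math.radians(a_deg))
    return baseline_m / (cot(elev1_deg) - cot(elev2_deg))

# Example: a 4 km baseline and elevation angles whose cotangents differ
# by 0.04 place the feature at 100 km, a typical auroral emission altitude.
e2 = math.degrees(math.atan(1.0 / 0.96))
h = feature_height(4000.0, 45.0, e2)
```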
A Summer View of Russia's Lena Delta and Olenek
NASA Technical Reports Server (NTRS)
2004-01-01
These views of the Russian Arctic were acquired by NASA's Multi-angle Imaging SpectroRadiometer (MISR) instrument on July 11, 2004, when the brief arctic summer had transformed the frozen tundra and the thousands of lakes, channels, and rivers of the Lena Delta into a fertile wetland, and when the usual blanket of thick snow had melted from the vast plains and taiga forests. This set of three images covers an area in the northern part of the Eastern Siberian Sakha Republic. The Olenek River wends northeast from the bottom of the images to the upper left, and the top portions of the images are dominated by the delta into which the mighty Lena River empties when it reaches the Laptev Sea. At left is a natural color image from MISR's nadir (vertical-viewing) camera, in which the rivers appear murky due to the presence of sediment, and photosynthetically-active vegetation appears green. The center image is also from MISR's nadir camera, but is a false color view in which the predominant red color is due to the brightness of vegetation at near-infrared wavelengths. The most photosynthetically active parts of this area are the Lena Delta, in the lower half of the image, and throughout the great stretch of land that curves across the Olenek River and extends northeast beyond the relatively barren ranges of the Volyoi mountains (the pale tan-colored area to the right of image center). The right-hand image is a multi-angle false-color view made from the red band data of the 60° backward, nadir, and 60° forward cameras, displayed as red, green and blue, respectively. Water appears blue in this image because sun glitter makes smooth, wet surfaces look brighter at the forward camera's view angle. Much of the landscape and many low clouds appear purple since these surfaces are both forward and backward scattering, and clouds that are further from the surface appear in a different spot for each view angle, creating a rainbow-like appearance.
However, the vegetated region that is darker green in the natural color nadir image also appears to exhibit a faint greenish hue in the multi-angle composite. A possible explanation for this subtle green effect is that the taiga forest trees (or dwarf shrubs) are not too dense here. Since the nadir camera is more likely to observe any gaps between the trees or shrubs, and since the vegetation is not as bright (in the red band) as the underlying soil or surface, the brighter underlying surface results in an area that is relatively brighter at the nadir view angle. Accurate maps of vegetation structural units are an essential part of understanding the seasonal exchanges of energy and water at the Earth's surface, and of preserving the biodiversity in these regions. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82° north and 82° south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 24273. The panels cover an area of about 230 kilometers x 420 kilometers, and utilize data from blocks 30 to 34 within World Reference System-2 path 134. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Multi-layer Clouds Over the South Indian Ocean
NASA Technical Reports Server (NTRS)
2003-01-01
The complex structure and beauty of polar clouds are highlighted by these images acquired by the Multi-angle Imaging SpectroRadiometer (MISR) on April 23, 2003. These clouds occur at multiple altitudes and exhibit a noticeable cyclonic circulation over the Southern Indian Ocean, to the north of Enderby Land, East Antarctica. The image at left was created by overlaying a natural-color view from MISR's downward-pointing (nadir) camera with a color-coded stereo height field. MISR retrieves heights by a pattern recognition algorithm that utilizes multiple view angles to derive cloud height and motion. The opacity of the height field was then reduced until the field appears as a translucent wash over the natural-color image. The resulting purple, cyan and green hues of this aesthetic display indicate low, medium or high altitudes, respectively, with heights ranging from less than 2 kilometers (purple) to about 8 kilometers (green). In the lower right corner, the edge of the Antarctic coastline and some sea ice can be seen through some thin, high cirrus clouds. The right-hand panel is a natural-color image from MISR's 70-degree backward viewing camera. This camera looks backwards along the path of Terra's flight, and in the southern hemisphere the Sun is in front of this camera. This perspective causes the cloud-tops to be brightly outlined by the sun behind them, and enhances the shadows cast by clouds with significant vertical structure. An oblique observation angle also enhances the reflection of light by atmospheric particles, and accentuates the appearance of polar clouds. The dark ocean and sea ice that were apparent through the cirrus clouds at the bottom right corner of the nadir image are overwhelmed by the brightness of these clouds at the oblique view. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude.
These data products were generated from a portion of the imagery acquired during Terra orbit 17794. The panels cover an area of 335 kilometers x 605 kilometers, and utilize data from blocks 142 to 145 within World Reference System-2 path 155. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
ETR BUILDING, TRA642, INTERIOR. BASEMENT. CORRIDOR ALONG WEST WALL OF ...
ETR BUILDING, TRA-642, INTERIOR. BASEMENT. CORRIDOR ALONG WEST WALL OF BUILDING, WHICH IS AT RIGHT OF VIEW. AUDIO ALARM IS ALONG WALL AT RIGHT. CAMERA FACES SOUTH. INL NEGATIVE NO. HD46-30-1. Mike Crane, Photographer, 2/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
ADM. Tanks: from left to right: fuel oil tank, fuel ...
ADM. Tanks: from left to right: fuel oil tank, fuel pump house (TAN-611), engine fuel tank, water pump house, water storage tank. Camera facing northwest. Note edge of shielding berm at left of view. Date: November 25, 1953. INEEL negative no. 9217 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
LOFT. Containment building entry, an adapted use of TAN624, which ...
LOFT. Containment building entry, an adapted use of TAN-624, which originated as the mobile test building for the ANP program. Camera facing north. Note four-rail track entered building stack at right of view. Date: March 2004. INEEL negative no. HD-39-4-1 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Camera Control and Geo-Registration for Video Sensor Networks
NASA Astrophysics Data System (ADS)
Davis, James W.
With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
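A minimal sketch of the kind of mapping such a PTZ control model provides, converting pan/tilt commands to directions on the camera's view sphere and back; this simplified model ignores lens distortion, zoom, and mounting offsets, which the full control model accounts for:

```python
import math

def ptz_to_direction(pan_deg, tilt_deg):
    """Convert pan/tilt angles to a unit viewing direction on the view
    sphere (pan about the vertical axis, tilt measured from the horizon)."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    return (math.cos(tilt) * math.cos(pan),
            math.cos(tilt) * math.sin(pan),
            math.sin(tilt))

def direction_to_ptz(v):
    """Inverse mapping: recover the pan/tilt command that centers
    the unit direction v in the camera's field of view."""
    x, y, z = v
    return (math.degrees(math.atan2(y, x)),
            math.degrees(math.asin(z)))

# Round trip: commanding (30, 10) and reading back the direction
# recovers the same pan/tilt pair.
pan, tilt = direction_to_ptz(ptz_to_direction(30.0, 10.0))
```

Given such a bijection per camera, each camera's spherical panoramic viewspace can be registered to the aerial orthophotograph, yielding the unified geo-referenced representation described above.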
MISR Global Images See the Light of Day
NASA Technical Reports Server (NTRS)
2002-01-01
As of July 31, 2002, global multi-angle, multi-spectral radiance products are available from the MISR instrument aboard the Terra satellite. Measuring the radiative properties of different types of surfaces, clouds and atmospheric particulates is an important step toward understanding the Earth's climate system. These images are among the first planet-wide summary views to be publicly released from the Multi-angle Imaging SpectroRadiometer experiment. Data for these images were collected during the month of March 2002, and each pixel represents monthly-averaged daylight radiances from an area measuring 1/2 degree in latitude by 1/2 degree in longitude. The top panel is from MISR's nadir (vertical-viewing) camera and combines data from the red, green and blue spectral bands to create a natural color image. The central view combines near-infrared, red, and green spectral data to create a false-color rendition that enhances highly vegetated terrain. It takes 9 days for MISR to view the entire globe, and only areas within 8 degrees of latitude of the north and south poles are not observed due to the Terra orbit inclination. Because a single pole-to-pole swath of MISR data is just 400 kilometers wide, multiple swaths must be mosaiced to create these global views. Discontinuities appear in some cloud patterns as a consequence of changes in cloud cover from one day to another. The lower panel is a composite in which red, green, and blue radiances from MISR's 70-degree forward-viewing camera are displayed in the northern hemisphere, and radiances from the 70-degree backward-viewing camera are displayed in the southern hemisphere. At the March equinox (spring in the northern hemisphere, autumn in the southern hemisphere), the Sun is near the equator. Therefore, both oblique angles are observing the Earth in 'forward scattering', particularly at high latitudes.
Forward scattering occurs when you (or MISR) observe an object with the Sun at a point in the sky that is in front of you. Relative to the nadir view, this geometry accentuates the appearance of polar clouds, and can even reveal clouds that are invisible in the nadir direction. In relatively clear ocean areas, the oblique-angle composite is generally brighter than its nadir counterpart due to enhanced reflection of light by atmospheric particulates. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
An affordable wearable video system for emergency response training
NASA Astrophysics Data System (ADS)
King-Smith, Deen; Mikkilineni, Aravind; Ebert, David; Collins, Timothy; Delp, Edward J.
2009-02-01
Many emergency response units are currently faced with restrictive budgets that prohibit their use of advanced technology-based training solutions. Our work focuses on creating an affordable, mobile, state-of-the-art emergency response training solution through the integration of low-cost, commercially available products. The system we have developed consists of tracking, audio, and video capability, coupled with other sensors that can all be viewed through a unified visualization system. In this paper we focus on the video sub-system which helps provide real time tracking and video feeds from the training environment through a system of wearable and stationary cameras. These two camera systems interface with a management system that handles storage and indexing of the video during and after training exercises. The wearable systems enable the command center to have live video and tracking information for each trainee in the exercise. The stationary camera systems provide a fixed point of reference for viewing action during the exercise and consist of a small Linux based portable computer and mountable camera. The video management system consists of a server and database which work in tandem with a visualization application to provide real-time and after action review capability to the training system.
Improved iris localization by using wide and narrow field of view cameras for iris recognition
NASA Astrophysics Data System (ADS)
Kim, Yeong Gon; Shin, Kwang Yong; Park, Kang Ryoung
2013-10-01
Biometrics is a method of identifying individuals by their physiological or behavioral characteristics. Among other biometric identifiers, iris recognition has been widely used for various applications that require a high level of security. When a conventional iris recognition camera is used, the size and position of the iris region in a captured image vary according to the X, Y positions of a user's eye and the Z distance between a user and the camera. Therefore, the searching area of the iris detection algorithm is increased, which can inevitably decrease both the detection speed and accuracy. To solve these problems, we propose a new method of iris localization that uses wide field of view (WFOV) and narrow field of view (NFOV) cameras. Our study is new as compared to previous studies in the following four ways. First, the device used in our research acquires three images, one each of the face and both irises, using one WFOV and two NFOV cameras simultaneously. The relation between the WFOV and NFOV cameras is determined by simple geometric transformation without complex calibration. Second, the Z distance (between a user's eye and the iris camera) is estimated based on the iris size in the WFOV image and anthropometric data of the size of the human iris. Third, the accuracy of the geometric transformation between the WFOV and NFOV cameras is enhanced by using multiple matrices of the transformation according to the Z distance. Fourth, the searching region for iris localization in the NFOV image is significantly reduced based on the detected iris region in the WFOV image and the matrix of geometric transformation corresponding to the estimated Z distance. Experimental results showed that the performance of the proposed iris localization method is better than that of conventional methods in terms of accuracy and processing time.
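The second step, estimating Z from the iris size in the WFOV image, follows the pinhole relation Z = f·D/d; a minimal sketch, where the focal length and the anthropometric iris diameter are illustrative assumptions rather than the paper's calibrated values:

```python
def estimate_z_distance(iris_diameter_px, focal_px, iris_diameter_mm=11.7):
    """Estimate camera-to-eye distance (mm) from the iris diameter in a
    WFOV image via the pinhole relation Z = f * D / d.

    The default ~11.7 mm is an assumed anthropometric average for the
    visible human iris diameter; the focal length must be in pixels."""
    return focal_px * iris_diameter_mm / iris_diameter_px

# Example: a 50-pixel iris seen through a 2000-pixel focal length
# places the eye roughly 468 mm from the camera.
z = estimate_z_distance(50.0, 2000.0)
```

The estimated Z then selects which of the multiple geometric transformation matrices maps the detected WFOV iris region into the NFOV search window.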
A multiple camera tongue switch for a child with severe spastic quadriplegic cerebral palsy.
Leung, Brian; Chau, Tom
2010-01-01
The present study proposed a video-based access technology that facilitated a non-contact tongue protrusion access modality for a 7-year-old boy with severe spastic quadriplegic cerebral palsy (GMFCS level 5). The proposed system featured a centre camera and two peripheral cameras to extend coverage of the frontal face view of this user for longer durations. The child participated in a descriptive case study. The participant underwent 3 months of tongue protrusion training while the multiple camera tongue switch prototype was being prepared. Later, the participant was brought back for five experiment sessions where he worked on a single-switch picture matching activity, using the multiple camera tongue switch prototype in a controlled environment. The multiple camera tongue switch achieved an average sensitivity of 82% and specificity of 80%. In three of the experiment sessions, the peripheral cameras were associated with most of the true positive switch activations. These activations would have been missed by a centre-camera-only setup. The study demonstrated proof-of-concept of a non-contact tongue access modality implemented by a video-based system involving three cameras and colour video processing.
Smoke from Fires in Southern Mexico
NASA Technical Reports Server (NTRS)
2002-01-01
On May 2, 2002, numerous fires in southern Mexico sent smoke drifting northward over the Gulf of Mexico. These views from the Multi-angle Imaging SpectroRadiometer illustrate the smoke extent over parts of the Gulf and the southern Mexican states of Tabasco, Campeche and Chiapas. At the same time, dozens of other fires were also burning in the Yucatan Peninsula and across Central America. A similar situation occurred in May and June of 1998, when Central American fires resulted in air quality warnings for several U.S. states. The image on the left is a natural color view acquired by MISR's vertical-viewing (nadir) camera. Smoke is visible, but sunglint in some ocean areas makes detection difficult. The middle image, on the other hand, is a natural color view acquired by MISR's 70-degree backward-viewing camera; its oblique view angle simultaneously suppresses sunglint and enhances the smoke. A map of aerosol optical depth, a measurement of the abundance of atmospheric particulates, is provided on the right. This quantity is retrieved using an automated computer algorithm that takes advantage of MISR's multi-angle capability. Areas where no retrieval occurred are shown in black. The images each represent an area of about 380 kilometers x 1550 kilometers and were captured during Terra orbit 12616. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
The NIKA2 Large Field-of-View Millimeter Continuum Camera for the 30-M IRAM Telescope
NASA Astrophysics Data System (ADS)
Monfardini, Alessandro
2018-01-01
We have constructed and deployed a multi-thousand-pixel dual-band camera (150 and 260 GHz, corresponding to 2 mm and 1.15 mm wavelengths) that images an instantaneous field of view of 6.5 arcmin and is configurable to map linear polarization at 260 GHz. We provide a detailed description of this instrument, named NIKA2 (New IRAM KID Arrays 2), focusing in particular on the cryogenics, the optics, the focal plane arrays based on Kinetic Inductance Detectors (KID), and the readout electronics. We present the performance measured on the sky during the commissioning runs that took place between October 2015 and April 2017 at the 30-meter IRAM (Institute of Millimetric Radio Astronomy) telescope at Pico Veleta, and preliminary science-grade results.
The Faces in Infant-Perspective Scenes Change over the First Year of Life
Jayaraman, Swapnaa; Fausey, Caitlin M.; Smith, Linda B.
2015-01-01
Mature face perception has its origins in the face experiences of infants. However, little is known about the basic statistics of faces in early visual environments. We used head cameras to capture and analyze over 72,000 infant-perspective scenes from 22 infants aged 1-11 months as they engaged in daily activities. The frequency of faces in these scenes declined markedly with age: for the youngest infants, faces were present 15 minutes in every waking hour but only 5 minutes for the oldest infants. In general, the available faces were well characterized by three properties: (1) they belonged to relatively few individuals; (2) they were close and visually large; and (3) they presented views showing both eyes. These three properties most strongly characterized the face corpora of our youngest infants and constitute environmental constraints on the early development of the visual system. PMID:26016988
Semi-automatic 2D-to-3D conversion of human-centered videos enhanced by age and gender estimation
NASA Astrophysics Data System (ADS)
Fard, Mani B.; Bayazit, Ulug
2014-01-01
In this work, we propose a feasible 3D video generation method to enable high quality visual perception using a monocular uncalibrated camera. Anthropometric distances between face standard landmarks are approximated based on the person's age and gender. These measurements are used in a 2-stage approach to facilitate the construction of binocular stereo images. Specifically, one view of the background is registered in initial stage of video shooting. It is followed by an automatically guided displacement of the camera toward its secondary position. At the secondary position the real-time capturing is started and the foreground (viewed person) region is extracted for each frame. After an accurate parallax estimation the extracted foreground is placed in front of the background image that was captured at the initial position. So the constructed full view of the initial position combined with the view of the secondary (current) position, form the complete binocular pairs during real-time video shooting. The subjective evaluation results present a competent depth perception quality through the proposed system.
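The parallax estimation step rests on the standard stereo relation d = f·B/Z: the extracted foreground, being nearer, must be shifted more than the registered background. A minimal sketch with illustrative focal length, baseline, and depths (not values from the paper):

```python
def horizontal_parallax(focal_px, baseline_mm, depth_mm):
    """Pixel disparity between the two synthesized views for a point at
    depth Z, from the standard stereo relation d = f * B / Z."""
    return focal_px * baseline_mm / depth_mm

# The viewed person (~1.5 m away) gets a larger horizontal shift than the
# background (~5 m away), assuming a ~65 mm interocular baseline.
fg_shift = horizontal_parallax(1000.0, 65.0, 1500.0)
bg_shift = horizontal_parallax(1000.0, 65.0, 5000.0)
```

Compositing the foreground over the background with these depth-dependent shifts is what produces the binocular pair for each frame.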
Applied learning-based color tone mapping for face recognition in video surveillance system
NASA Astrophysics Data System (ADS)
Yew, Chuu Tian; Suandi, Shahrel Azmin
2012-04-01
In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. The technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and to remap the color or intensity of the input image so that its statistics match those of the training dataset. It is well known that differences among commercial surveillance camera models and the signal processing chipsets used by different manufacturers cause the color and intensity of the images to differ from one another, creating additional challenges for face recognition in video surveillance systems. Using multi-class Support Vector Machines as the classifier on a publicly available video surveillance camera database, the SCface database, this approach is validated and compared with the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance systems for face recognition.
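A simple stand-in for the remapping idea is global moment matching, shifting an image's mean and standard deviation to those learned from the training set; the published method learns richer statistics, so this sketch only illustrates the first- and second-order case:

```python
import numpy as np

def match_intensity_stats(img, ref_mean, ref_std):
    """Remap image intensities so their mean and standard deviation match
    reference statistics learned from a training dataset.

    A global moment-matching sketch of the learning-based tone mapping
    idea: normalize, rescale to the reference moments, clip to [0, 255]."""
    mu, sigma = img.mean(), img.std()
    out = (img - mu) / max(sigma, 1e-8) * ref_std + ref_mean
    return np.clip(out, 0, 255)

# Example: remap a small grayscale patch to reference statistics.
img = np.array([[10.0, 20.0], [30.0, 40.0]])
out = match_intensity_stats(img, 128.0, 1.0)
```

Matching each camera's output to common reference statistics reduces the inter-camera color and intensity variation that hampers the downstream face classifier.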
PLUG STORAGE BUILDING, TRA611, AWAITS SHIELDING SOIL TO BE PLACED ...
PLUG STORAGE BUILDING, TRA-611, AWAITS SHIELDING SOIL TO BE PLACED OVER PLUG STORAGE TUBES. WING WALLS WILL SUPPORT EARTH FILL. MTR, PROCESS WATER BUILDING, AND WORKING RESERVOIR IN VIEW BEYOND PLUG STORAGE. CAMERA FACES NORTHEAST. INL NEGATIVE NO. 2949. Unknown Photographer, 7/30/1951 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
PBF detail of metal pedestrian bridge over exposed control cables, ...
PBF detail of metal pedestrian bridge over exposed control cables, which run between Control (PER-619) and Reactor Buildings (PER-620). Camera facing northwest. Southwest corner of PER-620 at upper right of view. Date: May 2004. INEEL negative no. HD-41-6-3 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
REACTOR SERVICES BUILDING, TRA635, INTERIOR. ALSO KNOWN AS MATERIAL RECEIVING ...
REACTOR SERVICES BUILDING, TRA-635, INTERIOR. ALSO KNOWN AS MATERIAL RECEIVING AREA AND LABORATORY. CAMERA ON FIRST FLOOR FACING NORTH TOWARD MTR BUILDING. MOCK-UP AREA WAS TO THE RIGHT OF VIEW. INL NEGATIVE NO. HD46-10-1. Mike Crane, Photographer, 2/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
CONTEXTUAL AERIAL VIEW OF "COLD" NORTH HALF OF MTR COMPLEX. ...
CONTEXTUAL AERIAL VIEW OF "COLD" NORTH HALF OF MTR COMPLEX. CAMERA FACING EASTERLY. FOREGROUND CORNER CONTAINS OIL STORAGE TANKS. WATER TANKS AND WELL HOUSES ARE BEYOND THEM TO THE LEFT. LARGE LIGHT-COLORED BUILDING IN CENTER OF VIEW IS STEAM PLANT. DEMINERALIZER AND WATER STORAGE TANK ARE BEYOND. SIX-CELL COOLING TOWER AND ITS PUMP HOUSE ARE ABOVE IT IN VIEW. SERVICE BUILDINGS INCLUDING CANTEEN ARE ON NORTH SIDE OF ROAD. "EXCLUSION" AREA IS BEYOND ROAD. COMPARE LOCATION OF EXCLUSION-AREA GATE WITH PHOTO ID-33-G-202. INL NEGATIVE NO. 3608. Unknown Photographer, 10/30/1951 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Overview of the Multi-Spectral Imager on the NEAR spacecraft
NASA Astrophysics Data System (ADS)
Hawkins, S. E., III
1996-07-01
The Multi-Spectral Imager on the Near Earth Asteroid Rendezvous (NEAR) spacecraft is a 1 Hz frame rate CCD camera sensitive in the visible and near infrared bands (~400-1100 nm). MSI is the primary instrument on the spacecraft to determine morphology and composition of the surface of asteroid 433 Eros. In addition, the camera will be used to assist in navigation to the asteroid. The instrument uses refractive optics and has an eight-position spectral filter wheel to select different wavelength bands. The MSI optical focal length of 168 mm gives a 2.9° × 2.25° field of view. The CCD is passively cooled and the 537×244 pixel array output is digitized to 12 bits. Electronic shuttering increases the effective dynamic range of the instrument by more than a factor of 100. A one-time deployable cover protects the instrument during ground testing operations and launch. A reduced aperture viewport permits full field of view imaging while the cover is in place. A Data Processing Unit (DPU) provides the digital interface between the spacecraft and the Camera Head and uses an RTX2010 processor. The DPU provides an eight-frame image buffer, lossy and lossless data compression routines, and automatic exposure control. An overview of the instrument is presented and design parameters and trade-offs are discussed.
Ultra-compact imaging system based on multi-aperture architecture
NASA Astrophysics Data System (ADS)
Meyer, Julia; Brückner, Andreas; Leitel, Robert; Dannberg, Peter; Bräuer, Andreas; Tünnermann, Andreas
2011-03-01
Cameras are now routinely integrated into information and communication technology, and the trend is toward cameras that are smaller and at the same time cheaper. Because single-aperture optics face a miniaturization limit if they are to keep the same space-bandwidth product while transmitting a wide field of view, new approaches such as multi-aperture optical systems are needed. In the proposed camera system the image is formed by many separate channels, each consisting of four microlenses arranged one after another in different microlens arrays. Each channel forms a partial image that fits together with those of its neighbours, so that a real erect image is generated and a conventional image sensor can be used. The microoptical fabrication process and the assembly are well established and can be carried out at wafer level. Laser writing is used to fabricate the masks; UV lithography, a reflow process, and UV molding are used to fabricate the apertures and the lenses. The developed system is very small in both length and lateral dimensions, has VGA resolution, and offers a diagonal field of view of 65 degrees. This microoptical vision system is well suited for integration into electronic devices such as webcams in notebook displays.
A flexible new method for 3D measurement based on multi-view image sequences
NASA Astrophysics Data System (ADS)
Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu
2016-11-01
Three-dimensional measurement is the basis of reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry. First, feature points are detected and matched with an improved SIFT algorithm; the Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, which makes matching robust for weakly textured images. Then a new three-principle filter for the calculation of the essential matrix is designed, and the essential matrix is computed using an improved a contrario RANSAC method. A single-view point cloud is constructed accurately from two view images. After this, the overlapping features are used to eliminate the accumulated errors introduced by added view images, which improves the precision of the camera positions. Finally, the method is verified in a dental restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate, and flexible for 3D tooth measurement.
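The Hellinger distance between descriptor histograms mentioned above has a simple closed form for L1-normalized histograms; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def hellinger_distance(h1, h2):
    """Hellinger distance between two non-negative histograms.

    Both histograms are L1-normalized first; the result lies in [0, 1],
    with 0 for identical distributions and 1 for disjoint support.
    """
    p = h1 / h1.sum()
    q = h2 / h2.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
```

Compared with Euclidean distance, this metric down-weights large bins, which is one reason it behaves better on low-contrast, weakly textured descriptors.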
Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V.; Alvarez-Santos, Victor; Pardo, Xose Manuel
2013-01-01
To bring cutting-edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, the deployment of robots and the adaptation of their services to new environments are currently tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras enhance the robots' perception and allow them to react to situations that require their services. Additionally, the cameras support the movement of the robots, enabling them to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed, and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real-world experiments, which show the good performance of our proposal. PMID:23271604
Variable field-of-view visible and near-infrared polarization compound-eye endoscope.
Kagawa, K; Shogenji, R; Tanaka, E; Yamada, K; Kawahito, S; Tanida, J
2012-01-01
A multi-functional compound-eye endoscope enabling variable field-of-view and polarization imaging as well as extremely deep focus is presented, based on a compact compound-eye camera called TOMBO (thin observation module by bound optics). Fixed and movable mirrors are introduced to control the field of view. A metal-wire-grid polarizer thin film, applicable to both visible and near-infrared light, is attached to the lenses in TOMBO and to the light sources. Control of the field of view, polarization, and wavelength of the illumination realizes several observation modes, such as three-dimensional shape measurement, wide field-of-view imaging, and close-up observation of superficial tissues and structures beneath the skin.
Low power multi-camera system and algorithms for automated threat detection
NASA Astrophysics Data System (ADS)
Huber, David J.; Khosla, Deepak; Chen, Yang; Van Buer, Darrel J.; Martin, Kevin
2013-05-01
A key to any robust automated surveillance system is continuous, wide field-of-view sensor coverage and high-accuracy target detection algorithms. Newer systems typically employ an array of multiple fixed cameras that provide individual data streams, each of which is managed by its own processor. This array can continuously capture the entire field of view, but collecting all of the data and running the back-end detection algorithm consume additional power and increase the size, weight, and power (SWaP) of the package. This is often unacceptable, as many potential surveillance applications have strict system SWaP requirements. This paper describes a wide field-of-view video system that employs multiple fixed cameras and exhibits low SWaP without compromising the target detection rate. We cycle through the sensors, fetch a fixed number of frames, and process them through a modified target detection algorithm. During this time, the other sensors remain powered down, which reduces the required hardware and power consumption of the system. We show that the resulting gaps in coverage and irregular frame rate do not affect the detection accuracy of the underlying algorithms. This reduces the power of an N-camera system by up to approximately a factor of N compared to baseline operation. This work was applied to Phase 2 of the DARPA Cognitive Technology Threat Warning System (CT2WS) program and was used during field testing.
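The sensor-cycling scheme can be sketched as a round-robin schedule in which exactly one camera is powered at a time and captures a fixed burst of frames. This toy scheduler only illustrates why power drops roughly N-fold; its names and parameters are assumptions, not the CT2WS implementation.

```python
from itertools import cycle

def round_robin_schedule(n_cameras, frames_per_burst, total_frames):
    """Return (camera_id, frame_index) pairs with one camera active at a time.

    Cameras take turns capturing bursts of frames_per_burst frames; all other
    cameras stay powered down, so average power is roughly 1/n_cameras of an
    always-on array.
    """
    schedule = []
    cam_cycle = cycle(range(n_cameras))
    frame = 0
    while frame < total_frames:
        cam = next(cam_cycle)  # power up the next camera in the rotation
        for _ in range(frames_per_burst):
            if frame >= total_frames:
                break
            schedule.append((cam, frame))
            frame += 1
    return schedule
```

Each camera sees the scene only intermittently, which is why the paper has to verify that the detection algorithm tolerates coverage gaps and an irregular per-camera frame rate.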
92. ARAIII. Overall view of GCRE area in 1959. From ...
92. ARA-III. Overall view of GCRE area in 1959. From left to right: ARA-607 (control building), ARA-608 (with high-bay, reactor building), ARA-610 (service building), ARA-609 (guard house), ARA-709 (water storage tank) ARA-710 in front of ARA-709 (fuel oil tank), ARA-611 (well pumphouse), and the cooling tower. Note petro-chem stack and other stacks emerging from reactor building. Camera facing northeast. August 1959. Ineel photo no. 59-4444. - Idaho National Engineering Laboratory, Army Reactors Experimental Area, Scoville, Butte County, ID
Skylab beverage container filled with orange juice held by Astronaut Conrad
NASA Technical Reports Server (NTRS)
1973-01-01
An accordion-style beverage dispenser filled with orange juice is held by Astronaut Charles Conrad Jr., Skylab 2 commander, in this close-up view which is a reproduction taken from a color television transmission made by a TV camera aboard the Skylab 1 and 2 space station cluster in Earth orbit. Conrad (head and face not in view) is seated at the wardroom table in the crew quarters of the Orbital Workshop. The dispenser contained beverage crystals, and Conrad has just added the prescribed amount of water to make the orange drink.
Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria
2016-04-01
The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological conditions. Participants viewed camera-engaged (i.e., human figure facing camera) and task-engaged (i.e., human figure looking at and touching an object) visual scenes. Participants with aphasia responded to engagement cues by focusing on objects of interest more for task-engaged scenes than camera-engaged scenes; however, the difference in their responses to these scenes was not as pronounced as that observed in adults without neurological conditions. In addition, people with aphasia spent more time looking at background areas of interest and less time looking at person areas of interest for camera-engaged scenes than did control participants. Results indicate people with aphasia visually attend to scenes differently than adults without neurological conditions. As a consequence, augmentative and alternative communication (AAC) facilitators may have different visual attention behaviors than the people with aphasia for whom they are constructing or selecting visual scenes. Further examination of the visual attention of people with aphasia may help optimize visual scene selection.
Watching elderly and disabled person's physical condition by remotely controlled monorail robot
NASA Astrophysics Data System (ADS)
Nagasaka, Yasunori; Matsumoto, Yoshinori; Fukaya, Yasutoshi; Takahashi, Tomoichi; Takeshita, Toru
2001-10-01
We are developing a nursing-care system using robots and cameras. The cameras are mounted on a remote-controlled monorail robot that moves inside a room and watches the elderly. Elderly people at home or in nursing homes require attention at all times, which places a heavy burden on care staff; the purpose of our system is to assist them. A host computer directs the monorail robot to move in front of the elderly person using images taken by cameras on the ceiling. A CCD camera mounted on the monorail robot takes pictures of the person's facial expression and movements, and the robot sends these images to the host computer, which checks whether something unusual has happened. We propose a simple calibration method for positioning the monorail robot to track the person's movements and keep their face at the center of the camera view. We built a small experimental system and evaluated our camera calibration method and image processing algorithm.
Multi-camera digital image correlation method with distributed fields of view
NASA Astrophysics Data System (ADS)
Malowany, Krzysztof; Malesa, Marcin; Kowaluk, Tomasz; Kujawinska, Malgorzata
2017-11-01
A multi-camera digital image correlation (DIC) method and system for measurements of large engineering objects with distributed, non-overlapping areas of interest are described. The data obtained with individual 3D DIC systems are stitched by an algorithm that utilizes the positions of fiducial markers determined simultaneously by Stereo-DIC units and a laser tracker. The proposed calibration method enables reliable determination of the transformations between local (3D DIC) and global coordinate systems. The applicability of the method was proven during in-situ measurements of a hall made of arch-shaped (18 m span) self-supporting metal plates. The proposed method is highly recommended for 3D measurements of the shape and displacements of large and complex engineering objects observed from multiple directions, and it provides data of suitable accuracy for further advanced structural integrity analysis of such objects.
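Stitching local 3D DIC data into the global (laser-tracker) frame amounts to estimating a rigid transformation from corresponding fiducial-marker positions. A minimal least-squares sketch using the standard Kabsch/Procrustes solution, assuming at least three non-collinear markers measured in both frames (the function name is illustrative):

```python
import numpy as np

def rigid_transform(local_pts, global_pts):
    """Least-squares rotation R and translation t with global ≈ R @ local + t.

    local_pts, global_pts: (N, 3) arrays of corresponding marker positions.
    Uses the SVD-based Kabsch solution; the sign correction via D guards
    against returning a reflection instead of a proper rotation.
    """
    lc = local_pts.mean(axis=0)
    gc = global_pts.mean(axis=0)
    H = (local_pts - lc).T @ (global_pts - gc)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = gc - R @ lc
    return R, t
```

With the transformation known, every point measured by a Stereo-DIC unit can be mapped into the global coordinate system before the stitched fields are analyzed together.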
The effects of spatially displaced visual feedback on remote manipulator performance
NASA Technical Reports Server (NTRS)
Smith, Randy L.; Stuart, Mark A.
1989-01-01
The effects of spatially displaced visual feedback on the operation of a camera-viewed remote manipulation task are analyzed. A remote manipulation task is performed by operators exposed to the following viewing conditions: direct view of the work site; normal camera view; reversed camera view; inverted/reversed camera view; and inverted camera view. The task completion times are statistically analyzed with a repeated-measures analysis of variance, and a Newman-Keuls pairwise comparison test is applied to the data. The reversed camera view is ranked third out of the four camera viewing conditions, while the normal camera viewing condition is found to be significantly slower than the direct viewing condition. It is shown that generalizations to remote manipulation applications based upon the results of direct manipulation studies are quite useful, but they should be made cautiously.
Summer Harvest in Saratov, Russia
NASA Technical Reports Server (NTRS)
2002-01-01
Russia's Saratov Oblast (province) is located in the southeastern portion of the East-European plain, in the Lower Volga River Valley. Southern Russia produces roughly 40 percent of the country's total agricultural output, and Saratov Oblast is the largest producer of grain in the Volga region. Vegetation changes in the province's agricultural lands between spring and summer are apparent in these images acquired on May 31 and July 18, 2002 (upper and lower image panels, respectively) by the Multi-angle Imaging SpectroRadiometer (MISR).The left-hand panels are natural color views acquired by MISR's vertical-viewing (nadir) camera. Less vegetation and more earth tones (indicative of bare soils) are apparent in the summer image (lower left). Farmers in the region utilize staggered sowing to help stabilize yields, and a number of different stages of crop maturity can be observed. The main crop is spring wheat, cultivated under non-irrigated conditions. A short growing season and relatively low and variable rainfall are the major limitations to production. Saratov city is apparent as the light gray pixels on the left (west) bank of the Volga River. Riparian vegetation along the Volga exhibits dark green hues, with some new growth appearing in summer.The right-hand panels are multi-angle composites created with red band data from MISR's 60-degree backward, nadir and 60-degree forward-viewing cameras displayed as red, green and blue respectively. In these images, color variations serve as a proxy for changes in angular reflectance, and the spring and summer views were processed identically to preserve relative variations in brightness between the two dates. Urban areas and vegetation along the Volga banks look similar in the two seasonal multi-angle composites. The agricultural areas, on the other hand, look strikingly different. This can be attributed to differences in brightness and texture between bare soil and vegetated land. 
The chestnut-colored soils in this region are brighter in MISR's red band than the vegetation. Because plants have vertical structure, the oblique cameras observe a greater proportion of vegetation relative to the nadir camera, which sees more soil. In spring, therefore, the scene is brightest in the vertical view and thus appears with an overall greenish hue. In summer, the soil characteristics play a greater role in governing the appearance of the scene, and the angular reflectance is now brighter at the oblique view angles (displayed as red and blue), thus imparting a pink color to much of the farmland and a purple color to areas along the banks of several narrow rivers. The unusual appearance of the clouds is due to geometric parallax which splits the imagery into spatially separated components as a consequence of their elevation above the surface. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and views almost the entire globe every 9 days. These images are a portion of the data acquired during Terra orbits 13033 and 13732, and cover an area of about 173 kilometers x 171 kilometers. They utilize data from blocks 49 to 50 within World Reference System-2 path 170. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Homography-based multiple-camera person-tracking
NASA Astrophysics Data System (ADS)
Turk, Matthew R.
2009-01-01
Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. 
Aside from that, the algorithm is fully automatic and uses the natural movement of live targets for training. No calibration is required. Testing shows that the algorithm performs very well in real-world sequences. The consistent labelling problem is solved, even for targets that appear via in-scene entrances. Full occlusions are handled. Although implemented in Matlab, the multiple-camera tracking system runs at eight frames per second. A faster implementation would be suitable for real-world use at typical video frame rates.
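The plane-induced homography at the core of the tracker is found from corresponding ground-plane (feet) point pairs. A minimal direct linear transform (DLT) sketch, assuming at least four non-degenerate correspondences and omitting the thesis's rules about when to trigger estimation (function names are illustrative):

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of H mapping src points to dst points (both on one plane).

    Builds the standard 2N x 9 system from point correspondences and takes
    the right singular vector of the smallest singular value as H.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def map_point(H, pt):
    """Apply homography H to a 2D point, with perspective division."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

In the tracker, src and dst would be the feet locations of the same target seen in two overlapping cameras, accumulated as the target walks through the scene.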
HOT CELL BUILDING, TRA632, INTERIOR. WRIGHT 3TON HOIST ON EAST ...
HOT CELL BUILDING, TRA-632, INTERIOR. WRIGHT 3-TON HOIST ON EAST SIDE OF CELL 2. SIGN AT LEFT OF VIEW SAYS, "...DO NOT BRING FISSILE MATERIAL INTO AREA WITHOUT APPROVAL." CAMERA FACES NORTHWEST. INL NEGATIVE NO. HD46-29-2. Mike Crane, Photographer, 2/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
PROCESS WATER BUILDING, TRA605. FLASH EVAPORATORS ARE PLACED ON UPPER ...
PROCESS WATER BUILDING, TRA-605. FLASH EVAPORATORS ARE PLACED ON UPPER LEVEL OF EAST SIDE OF BUILDING. WALLS WILL BE FORMED AROUND THEM. WORKING RESERVOIR BEYOND. CAMERA FACING EASTERLY. EXHAUST AIR STACK IS UNDER CONSTRUCTION AT RIGHT OF VIEW. INL NEGATIVE NO. 2579. Unknown Photographer, 6/18/1951 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Efficient large-scale graph data optimization for intelligent video surveillance
NASA Astrophysics Data System (ADS)
Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming
2017-08-01
Society is rapidly adopting cameras in a wide variety of locations and applications: traffic monitoring, parking-lot surveillance, in-car systems, and smart spaces. These cameras provide data every day that must be analyzed in an effective way. Recent advances in sensor manufacturing, communications, and computing are stimulating the development of new applications that transform traditional vision systems into pervasive smart-camera networks. The analysis of visual cues in multi-camera networks enables a wide range of applications, from smart home and office automation to large-area and traffic surveillance. While dense camera networks, in which most cameras have large overlapping fields of view, are well studied, we focus on sparse camera networks. A sparse camera network covers a large area with as few cameras as possible, so that most cameras do not overlap each other's field of view. This setting is challenging because of the lack of knowledge of the network topology, the changes in target appearance and motion across different views, and the difficulty of understanding complex events across the network. In this paper, we present a comprehensive survey of recent research on topology learning, object appearance modeling, and global activity understanding in sparse camera networks, and discuss several open research issues.
Synthesized view comparison method for no-reference 3D image quality assessment
NASA Astrophysics Data System (ADS)
Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun
2018-04-01
We develop a no-reference image quality assessment metric to evaluate the quality of synthesized views rendered from the Multi-view Video plus Depth (MVD) format. Our metric, named Synthesized View Comparison (SVC), is designed for real-time quality monitoring at the receiver side of a 3D-TV system. The metric utilizes intermediate virtual views warped from the left and right views by a depth-image-based rendering (DIBR) algorithm, and compares the difference between the virtual views rendered from different cameras using the Structural SIMilarity (SSIM) index, a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
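The SSIM comparison between the two warped virtual views can be illustrated with a simplified, single-window form of the index (practical SSIM averages the same formula over local windows; the constants follow the standard choice of K1=0.01, K2=0.03):

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM between two images of equal shape.

    A simplified global form of the SSIM index: luminance, contrast, and
    structure terms computed once over the whole image instead of per window.
    """
    c1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizer for contrast/structure
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

In the SVC setting, x and y would be the virtual middle views warped from the left and right cameras; a low score flags rendering or depth-map problems without needing a pristine reference.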
Reconstructing White Walls: Multi-View Multi-Shot 3d Reconstruction of Textureless Surfaces
NASA Astrophysics Data System (ADS)
Ley, Andreas; Hänsch, Ronny; Hellwich, Olaf
2016-06-01
The reconstruction of the 3D geometry of a scene from image sequences has been a very active field of research for decades. Nevertheless, challenges remain, in particular for homogeneous parts of objects. This paper proposes a solution to enhance the 3D reconstruction of weakly-textured surfaces using standard cameras and a standard multi-view stereo pipeline. The underlying idea is to improve the signal-to-noise ratio in weakly-textured regions while adaptively amplifying the local contrast to make better use of the limited numerical range of 8-bit images. Based on this premise, multiple shots per viewpoint are used to suppress statistically uncorrelated noise and enhance low-contrast texture. By only changing the image acquisition and adding a preprocessing step, a tremendous increase of up to 300% in the completeness of the 3D reconstruction is achieved.
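The noise-suppression idea rests on a standard fact: averaging N shots of a static viewpoint reduces uncorrelated noise by roughly a factor of sqrt(N), leaving headroom to amplify weak texture. A minimal sketch of the two preprocessing steps, with illustrative names and a fixed gain standing in for the paper's adaptive amplification:

```python
import numpy as np

def fuse_shots(shots):
    """Average multiple exposures of the same viewpoint to suppress noise.

    shots: list of equally shaped float arrays (same camera pose, same scene).
    Uncorrelated noise shrinks ~1/sqrt(len(shots)) in the mean image.
    """
    return np.mean(np.stack(shots), axis=0)

def amplify_contrast(img, gain=4.0):
    """Stretch contrast around the mean to use more of the numerical range.

    A crude stand-in for the paper's adaptive local amplification.
    """
    return np.clip((img - img.mean()) * gain + img.mean(), 0.0, 1.0)
```

Because the amplification is applied after fusion, the boosted signal is real low-contrast texture rather than amplified sensor noise, which is what lets the downstream multi-view stereo pipeline match points on near-homogeneous surfaces.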
Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.
Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai
2017-06-24
Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods with good performance have been proposed to solve this problem over the last few decades. However, few methods target the joint calibration of multiple sensors (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing a final cost function and assigning corresponding weights to the external cameras at different locations, an effective joint calibration of multiple devices is achieved. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method achieves satisfactory performance in a real-time system, with accuracy higher than the manufacturer's calibration.
Visual tracking for multi-modality computer-assisted image guidance
NASA Astrophysics Data System (ADS)
Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp
2017-03-01
With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support placement of imaging probe and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.
Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji
2016-01-01
For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have therefore developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R2 = 0.98) and 0.57 mm (R2 = 0.99), respectively. The 3D reconstructions of leaf and stem obtained with a 28-mm lens at the first and third camera positions were the most accurate and recovered the largest number of fine-scale surface details. The results confirmed the practicability of our new method for reconstructing fine-scale plant models and accurately estimating plant parameters. They also showed that our system can capture high-resolution 3D images of nursery plants with high efficiency. PMID:27314348
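The reported RMSE and R2 metrics can be reproduced from paired measured/estimated values; a small sketch (the sample numbers are illustrative, not the study's data):

```python
import numpy as np

def rmse_r2(measured, estimated):
    """Root-mean-square error and coefficient of determination (R^2)
    between measured plant parameters and model-estimated values."""
    measured = np.asarray(measured, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    resid = estimated - measured
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((measured - measured.mean()) ** 2)
    return rmse, r2

# Hypothetical leaf-width measurements (mm) vs. 3D-model estimates
rmse, r2 = rmse_r2([10.0, 20.0, 30.0, 40.0], [10.1, 19.8, 30.2, 39.9])
```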
2015-08-20
This view from NASA's Cassini spacecraft looks toward Saturn's icy moon Dione, with giant Saturn and its rings in the background, just prior to the mission's final close approach to the moon on August 17, 2015. At lower right is the large, multi-ringed impact basin named Evander, which is about 220 miles (350 kilometers) wide. The canyons of Padua Chasma, features that form part of Dione's bright, wispy terrain, reach into the darkness at left. Imaging scientists combined nine visible light (clear spectral filter) images to create this mosaic view: eight from the narrow-angle camera and one from the wide-angle camera, which fills in an area at lower left. The scene is an orthographic projection centered on terrain at 0.2 degrees north latitude, 179 degrees west longitude on Dione. An orthographic view is most like the view seen by a distant observer looking through a telescope. North on Dione is up. The view was acquired at distances ranging from approximately 106,000 miles (170,000 kilometers) to 39,000 miles (63,000 kilometers) from Dione and at a sun-Dione-spacecraft, or phase, angle of 35 degrees. Image scale is about 1,500 feet (450 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA19650
Adjustable-Viewing-Angle Endoscopic Tool for Skull Base and Brain Surgery
NASA Technical Reports Server (NTRS)
Bae, Youngsam; Liao, Anna; Manohara, Harish; Shahinian, Hrayr
2008-01-01
The term Multi-Angle and Rear Viewing Endoscopic tooL (MARVEL) denotes an auxiliary endoscope, now undergoing development, that a surgeon would use in conjunction with a conventional endoscope to obtain additional perspective. The role of the MARVEL in endoscopic brain surgery would be similar to the role of a mouth mirror in dentistry. Such a tool is also potentially useful for in-situ planetary geology applications, for close-up imaging of unexposed rock surfaces in cracks or those not in the direct line of sight. A conventional endoscope provides mostly a frontal view, that is, a view along its longitudinal axis and, hence, along a straight line extending from the opening through which it is inserted. The MARVEL could be inserted through the same opening as the conventional endoscope, but could be adjusted to provide a view from almost any desired angle. The MARVEL camera image would be displayed on the same monitor as the conventional endoscopic image, as an inset within that image. For example, while viewing a tumor from the front in the conventional endoscopic image, the surgeon could simultaneously view the tumor from the side or the rear in the MARVEL image, and could thereby gain additional visual cues that would aid in the precise three-dimensional positioning of surgical tools to excise the tumor. Indeed, a side or rear view through the MARVEL could be essential in a case in which the object of surgical interest was not visible from the front. The conceptual design of the MARVEL exploits the surgeon's familiarity with endoscopic surgical tools. The MARVEL would include a miniature electronic camera and a miniature radio transmitter mounted on the tip of a surgical tool derived from an endo-scissor (see figure). The inclusion of the radio transmitter would eliminate the need for wires, which could interfere with the manipulation of this and other surgical tools.
The handgrip of the tool would be connected to a linkage similar to that of an endo-scissor, but the linkage would be configured to enable adjustment of the camera angle instead of actuation of a scissor blade. It is envisioned that thicknesses of the tool shaft and the camera would be less than 4 mm, so that the camera-tipped tool could be swiftly inserted and withdrawn through a dime-size opening. Electronic cameras having dimensions of the order of millimeters are already commercially available, but their designs are not optimized for use in endoscopic brain surgery. The variety of potential endoscopic, thoracoscopic, and laparoscopic applications can be expected to increase as further development of electronic cameras yields further miniaturization and improvements in imaging performance.
Calibration Plans for the Multi-angle Imaging SpectroRadiometer (MISR)
NASA Astrophysics Data System (ADS)
Bruegge, C. J.; Duval, V. G.; Chrien, N. L.; Diner, D. J.
1993-01-01
The EOS Multi-angle Imaging SpectroRadiometer (MISR) will study the ecology and climate of the Earth through acquisition of global multi-angle imagery. The MISR employs nine discrete cameras, each a push-broom imager. Of these, four point forward, four point aft and one views the nadir. Absolute radiometric calibration will be obtained pre-flight using high quantum efficiency (HQE) detectors and an integrating sphere source. After launch, instrument calibration will be provided using HQE detectors in conjunction with deployable diffuse calibration panels. The panels will be deployed at time intervals of one month and used to direct sunlight into the cameras, filling their fields-of-view and providing through-the-optics calibration. Additional techniques will be utilized to reduce systematic errors, and provide continuity as the methodology changes with time. For example, radiation-resistant photodiodes will also be used to monitor panel radiant exitance. These data will be acquired throughout the five-year mission, to maintain calibration in the latter years when it is expected that the HQE diodes will have degraded. During the mission, it is planned that the MISR will conduct semi-annual ground calibration campaigns, utilizing field measurements and higher resolution sensors (aboard aircraft or in-orbit platforms) to provide a check of the on-board hardware. These ground calibration campaigns are limited in number, but are believed to be the key to the long-term maintenance of MISR radiometric calibration.
Fabrication of multi-focal microlens array on curved surface for wide-angle camera module
NASA Astrophysics Data System (ADS)
Pan, Jun-Gu; Su, Guo-Dung J.
2017-08-01
In this paper, we present a wide-angle and compact camera module that consists of a microlens array with different focal lengths on a curved surface. The design integrates the principles of an insect's compound eye and the human eye. It contains a curved hexagonal microlens array and a spherical lens. Normal mobile phone cameras usually need no fewer than four lenses, whereas our proposed system uses only one. Furthermore, the thickness of our proposed system is only 2.08 mm and the diagonal full field of view is about 100 degrees. To fabricate the critical microlens array, we used inkjet printing to control the surface shape of each microlens, achieving different focal lengths, and a replication method to form the curved hexagonal microlens array.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cuddy-Walsh, SG; University of Ottawa Heart Institute; Wells, RG
2014-08-15
Myocardial perfusion imaging (MPI) with Single Photon Emission Computed Tomography (SPECT) is invaluable in the diagnosis and management of heart disease. It provides essential information on myocardial blood flow and ischemia. Multi-pinhole dedicated cardiac-SPECT cameras offer improved count sensitivity and spatial and energy resolutions over parallel-hole camera designs; however, variable sensitivity across the field-of-view (FOV) can lead to position-dependent noise variations. Since MPI evaluates differences in the signal-to-noise ratio, noise variations in the camera could significantly impact the sensitivity of the test for ischemia. We evaluated the noise characteristics of GE Healthcare's Discovery NM530c camera with a goal of optimizing the accuracy of our patient assessment and thereby improving outcomes. Theoretical sensitivity maps of the camera FOV, including attenuation effects, were estimated analytically based on the distance and angle between the spatial position of a given voxel and each pinhole. The standard deviation in counts, σ, was inferred for each voxel position from the square root of the sensitivity mapped at that position. Noise was measured experimentally from repeated (N=16) acquisitions of a uniform spherical Tc-99m-water phantom. The mean (μ) and standard deviation (σ) were calculated for each voxel position in the reconstructed FOV. Noise increased ∼2.1× across a 12 cm sphere. A correlation of 0.53 is seen when experimental noise is compared with theory, suggesting that ∼53% of the noise is attributed to the combined effects of attenuation and the multi-pinhole geometry. Further investigations are warranted to determine the clinical impact of the position-dependent noise variation.
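A sketch of such an analytical sensitivity map, assuming the common small-pinhole approximation where point sensitivity falls off as cos³θ/r² per pinhole (attenuation, aperture diameter, and the authors' exact geometric factors are omitted here):

```python
import numpy as np

def sensitivity_map(voxels, pinholes, normals):
    """Relative sensitivity at each voxel position, summed over pinholes.
    voxels: (V, 3) positions; pinholes: (P, 3); normals: (P, 3) unit vectors."""
    sens = np.zeros(len(voxels))
    for p, n in zip(pinholes, normals):
        d = voxels - p                      # pinhole-to-voxel vectors
        r2 = np.sum(d * d, axis=1)          # squared distances
        cos = np.abs(d @ n) / np.sqrt(r2)   # incidence-angle factor
        sens += cos ** 3 / r2
    return sens

# Per the abstract, count noise sigma is then taken as sqrt(sensitivity).
voxels = np.array([[0.0, 0.0, 5.0], [0.0, 0.0, 10.0]])  # two on-axis voxels
pinholes = np.array([[0.0, 0.0, 0.0]])
normals = np.array([[0.0, 0.0, 1.0]])
s = sensitivity_map(voxels, pinholes, normals)
```

With a single pinhole, doubling the on-axis distance quarters the sensitivity, illustrating how position-dependent noise arises.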
HOT CELL BUILDING, TRA632, INTERIOR. HOT CELL NO. 1 (THE ...
HOT CELL BUILDING, TRA-632, INTERIOR. HOT CELL NO. 1 (THE FIRST BUILT) IN LABORATORY 101. CAMERA FACES SOUTHEAST. SHIELDED OPERATING WINDOWS ARE ON LEFT (NORTH) SIDE. OBSERVATION WINDOW IS AT LEFT OF VIEW (ON WEST SIDE). PLASTIC COVERS SHROUD MASTER/SLAVE MANIPULATORS AT WINDOWS IN LEFT OF VIEW. NOTE MINERAL OIL RESERVOIR ABOVE "CELL 1" SIGN, INDICATING LEVEL OF THE FLUID INSIDE THE THICK WINDOWS. HOT CELL HAS BEVELED CORNER BECAUSE A SQUARED CORNER WOULD HAVE SUPPLIED UNNECESSARY SHIELDING. NOTE PUMICE BLOCK WALL AT LEFT OF VIEW. INL NEGATIVE NO. HD46-28-1. Mike Crane, Photographer, 2/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
LPT. Shield test facility assembly and test building (TAN646). East ...
LPT. Shield test facility assembly and test building (TAN-646). East facade of ebor helium wing addition. Camera facing west. Note asbestos-cement siding on stair enclosure and upper-level. Concrete siding at lower level. Metal stack. Monorail protrudes from upper level of south wall at left of view. INEEL negative no. HD-40-7-4 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Adapting Local Features for Face Detection in Thermal Image.
Ma, Chao; Trung, Ngo Thanh; Uchiyama, Hideaki; Nagahara, Hajime; Shimada, Atsushi; Taniguchi, Rin-Ichiro
2017-11-27
A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, the facial appearances of different people under different lighting conditions are similar, because facial temperature distribution is generally constant and not affected by lighting conditions. This similarity in face appearance is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used. However, there are few studies exploring local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP. We consider a margin around the reference value and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to obtain cascade classifiers with multiple types of local features. These feature types have different advantages; combining them enhances the description power of the local features. We performed a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (with/without glasses). We compared the performance of cascade classifiers trained on different sets of features. The results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared face detection performance in realistic scenes using thermal and RGB images, and discuss the results.
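One plausible reading of the margin-augmented Multi-Block feature is a local-ternary-pattern-style encoding over 3×3 block means; a sketch under that assumption (the block layout and encoding are illustrative, not the paper's exact definition):

```python
import numpy as np

def mb_pattern_with_margin(img, x, y, bw, bh, margin):
    """3x3 grid of bw-by-bh blocks anchored at (x, y): each neighbour
    block's mean is compared to the centre block's mean with a +/-margin
    dead zone, yielding a ternary code that tolerates small intensity
    fluctuations (e.g. thermal-image noise)."""
    means = np.array([[img[y + j * bh: y + (j + 1) * bh,
                           x + i * bw: x + (i + 1) * bw].mean()
                       for i in range(3)] for j in range(3)])
    centre = means[1, 1]
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = []
    for j, i in ring:
        d = means[j, i] - centre
        code.append(0 if abs(d) <= margin else (1 if d > 0 else -1))
    return code

# Toy 3x3 "image" with 1x1 blocks: warm top row, cool centre and bottom
img = np.array([[5.0, 5.0, 5.0],
                [0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0]])
code = mb_pattern_with_margin(img, 0, 0, 1, 1, margin=1.0)
```

Plain MB-LBP corresponds to margin = 0 with a binary code; the dead zone is what absorbs sensor noise.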
24/7 security system: 60-FPS color EMCCD camera with integral human recognition
NASA Astrophysics Data System (ADS)
Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.
2007-04-01
An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron-multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and in monochrome under quarter-moonlight to overcast starlight illumination. Sixty-frame-per-second operation and progressive scanning minimize motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms to detect, localize, and track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars, and trucks. Detection and tracking of targets too small for template-based detection is also achieved. For face and vehicle targets, the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.
NASA Astrophysics Data System (ADS)
Nair, Binu M.; Diskin, Yakov; Asari, Vijayan K.
2012-10-01
We present an autonomous system capable of performing security check routines. The surveillance machine, the Clearpath Husky robotic platform, is equipped with three IP cameras with different orientations for the surveillance tasks of face recognition, human activity recognition, autonomous navigation, and 3D reconstruction of its environment. Combining the computer vision algorithms with a robotic machine has given birth to the Robust Artificial Intelligence-based Defense Electro-Robot (RAIDER). The end purpose of the RAIDER is to conduct a patrolling routine on a single floor of a building several times a day. As the RAIDER travels down the corridors, off-line algorithms use two of the RAIDER's side-mounted cameras to perform 3D reconstruction from monocular vision, updating a 3D model to the most current state of the indoor environment. Using frames from the front-mounted camera, positioned at human eye level, the system performs face recognition with real-time training of unknown subjects. A human activity recognition algorithm will also be implemented, in which each detected person is assigned to a set of action classes chosen to classify ordinary and harmful student activities in a hallway setting. The system is designed to detect changes and irregularities within an environment as well as to familiarize itself with regular faces and actions in order to distinguish potentially dangerous behavior. In this paper, we present the various algorithms and their modifications which, when implemented on the RAIDER, serve the purpose of indoor surveillance.
Colorful Saturn, Getting Closer
2004-06-03
As Cassini coasts into the final month of its nearly seven-year trek, the serene majesty of its destination looms ahead. The spacecraft's cameras are functioning beautifully and continue to return stunning views from Cassini's position, 1.2 billion kilometers (750 million miles) from Earth and now 15.7 million kilometers (9.8 million miles) from Saturn. In this narrow angle camera image from May 21, 2004, the ringed planet displays subtle, multi-hued atmospheric bands, colored by yet undetermined compounds. Cassini mission scientists hope to determine the exact composition of this material. This image also offers a preview of the detailed survey Cassini will conduct on the planet's dazzling rings. Slight differences in color denote both differences in ring particle composition and light scattering properties. Images taken through blue, green and red filters were combined to create this natural color view. The image scale is 132 kilometers (82 miles) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA06060
Dynamic Shape Capture of Free-Swimming Aquatic Life using Multi-view Stereo
NASA Astrophysics Data System (ADS)
Daily, David
2017-11-01
The reconstruction and tracking of swimming fish has in the past been restricted to flumes, small volumes, or sparse point tracking in large tanks. The purpose of this research is to use an array of cameras to automatically track 50-100 points on the surface of a fish using the multi-view stereo computer vision technique. The method is non-invasive, allowing the fish to swim freely in a large volume and to perform more advanced maneuvers such as rolling, darting, stopping, and reversing, which have not been studied. The techniques for obtaining and processing the 3D kinematics and maneuvers of tuna, sharks, stingrays, and other species will be presented and compared. This work is a collaboration with the National Aquarium and the Naval Undersea Warfare Center.
Vehicle Re-Identification by Deep Hidden Multi-View Inference.
Zhou, Yi; Liu, Li; Shao, Ling
2018-07-01
Vehicle re-identification (re-ID) is an area that has received far less attention in the computer vision community than the prevalent person re-ID. Possible reasons for this slow progress are the lack of appropriate research data and the special 3D structure of a vehicle. Previous works have generally focused on some specific views (e.g., front), but these methods are less effective in realistic scenarios, where vehicles usually appear in arbitrary views to cameras. In this paper, we focus on the uncertainty of vehicle viewpoint in re-ID, proposing two end-to-end deep architectures: the Spatially Concatenated ConvNet and a convolutional neural network (CNN)-LSTM bi-directional loop. Our models exploit the great advantages of the CNN and long short-term memory (LSTM) to learn transformations across different viewpoints of vehicles. Thus, a multi-view vehicle representation containing all viewpoints' information can be inferred from only one input view, and then used for learning to measure distance. To verify our models, we also introduce a Toy Car RE-ID data set with images from multiple viewpoints of 200 vehicles. We evaluate our proposed methods on the Toy Car RE-ID data set and the public Multi-View Car, VehicleID, and VeRi data sets. Experimental results illustrate that our models achieve consistent improvements over the state-of-the-art vehicle re-ID approaches.
Imaging Dot Patterns for Measuring Gossamer Space Structures
NASA Technical Reports Server (NTRS)
Dorrington, A. A.; Danehy, P. M.; Jones, T. W.; Pappa, R. S.; Connell, J. W.
2005-01-01
A paper describes a photogrammetric method for measuring the changing shape of a gossamer (membrane) structure deployed in outer space. Such a structure is typified by a solar sail comprising a transparent polymeric membrane aluminized on its Sun-facing side and coated black on the opposite side. Unlike some prior photogrammetric methods, this method does not require an artificial light source or the attachment of retroreflectors to the gossamer structure. In a basic version of the method, the membrane contains a fluorescent dye, and the front and back coats are removed in matching patterns of dots. The dye in the dots absorbs some sunlight and fluoresces at a longer wavelength in all directions, thereby enabling acquisition of high-contrast images from almost any viewing angle. The fluorescent dots are observed by one or more electronic camera(s) on the Sun side, the shade side, or both sides. Filters that pass the fluorescent light and suppress most of the solar spectrum are placed in front of the camera(s) to increase the contrast of the dots against the background. The dot image(s) in the camera(s) are digitized, then processed by use of commercially available photogrammetric software.
Virtual view image synthesis for eye-contact in TV conversation system
NASA Astrophysics Data System (ADS)
Murayama, Daisuke; Kimura, Keiichi; Hosaka, Tadaaki; Hamamoto, Takayuki; Shibuhisa, Nao; Tanaka, Seiichi; Sato, Shunichi; Saito, Sakae
2010-02-01
Eye contact plays an important role in human communication in the sense that it can convey unspoken information. However, it is highly difficult to realize eye contact in teleconferencing systems because of camera configurations. Conventional methods to overcome this difficulty mainly resorted to space-consuming optical devices such as half mirrors. In this paper, we propose an alternative approach that achieves eye contact through arbitrary view image synthesis. In our method, multiple images captured by real cameras are converted to the virtual viewpoint (the center of the display) by homography, and evaluation of matching errors among these projected images provides the depth map and the virtual image. Furthermore, we also propose a simpler version of this method using a single camera to save computational cost, in which the single real image is transformed to the virtual viewpoint based on the hypothesis that the subject is located at a predetermined distance. In this simple implementation, eye regions are generated separately by comparison with pre-captured frontal face images. Experimental results of both methods show that the synthesized virtual images enable eye contact favorably.
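The core operation, transferring coordinates from a real camera to the virtual viewpoint at the display center via a homography, can be sketched as follows (a point-transfer version only; the full method additionally scores matching errors across depth hypotheses to recover the depth map):

```python
import numpy as np

def warp_points(H, pts):
    """Map Nx2 pixel coordinates through a 3x3 homography H."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # divide out the scale

# A pure-translation homography as a toy example
H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
out = warp_points(H, np.array([[0.0, 0.0], [10.0, 5.0]]))
```

In the full method one such homography per real camera (and per hypothesized depth plane) projects each captured image to the display-center viewpoint.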
Multi-microphone adaptive array augmented with visual cueing.
Gibson, Paul L; Hedin, Dan S; Davies-Venn, Evelyn E; Nelson, Peggy; Kramer, Kevin
2012-01-01
We present the development of an audiovisual array that enables hearing aid users to converse with multiple speakers in reverberant environments with significant speech babble noise, where their hearing aids do not function well. The system concept consists of a smartphone, a smartphone accessory, and a smartphone software application. The smartphone accessory is a multi-microphone audiovisual array in a form factor that allows attachment to the back of the smartphone. The accessory will also contain a low-power radio by which it can transmit audio signals to compatible hearing aids. The smartphone software application will use the phone's built-in camera to acquire images and perform real-time face detection using the smartphone's native face-detection support. The audiovisual beamforming algorithm uses the locations of talking targets to improve the signal-to-noise ratio and consequently improve the user's speech intelligibility. Since the proposed array system leverages a handheld consumer electronic device, it will be portable and low cost. A PC-based experimental system was developed to demonstrate the feasibility of an audiovisual multi-microphone array, and these results are presented.
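The visually steered beamforming step could, in its simplest form, be a delay-and-sum beamformer aimed at the face-detection location; a toy sketch with whole-sample delays and unit-free geometry (a real array would use fractional delays, actual sample rates, and the detected talker position):

```python
import numpy as np

def delay_and_sum(signals, mic_positions, target, fs, c=343.0):
    """Align each channel to the wavefront from `target` and average.
    signals: (M, N) array, one row per microphone."""
    dists = np.linalg.norm(mic_positions - target, axis=1)
    shifts = np.round((dists - dists.min()) / c * fs).astype(int)
    n = signals.shape[1] - shifts.max()
    out = np.zeros(n)
    for sig, s in zip(signals, shifts):
        out += sig[s:s + n]        # advance the later-arriving channels
    return out / len(signals)

# Toy example: mic B is 3 "samples" farther from the talker than mic A
base = np.arange(8.0)
chan_a = base
chan_b = np.concatenate([np.zeros(3), base[:5]])   # same signal, delayed
mics = np.array([[2.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
target = np.array([0.0, 0.0, 0.0])                 # from face detection
out = delay_and_sum(np.vstack([chan_a, chan_b]), mics, target, fs=1.0, c=1.0)
```

Signals arriving from the steered direction add coherently while babble from other directions partially cancels, which is the SNR gain the abstract describes.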
NASA Astrophysics Data System (ADS)
Chen, Chen; Hao, Huiyan; Jafari, Roozbeh; Kehtarnavaz, Nasser
2017-05-01
This paper presents an extension to our previously developed fusion framework [10] involving a depth camera and an inertial sensor in order to improve its view invariance aspect for real-time human action recognition applications. A computationally efficient view estimation based on skeleton joints is considered in order to select the most relevant depth training data when recognizing test samples. Two collaborative representation classifiers, one for depth features and one for inertial features, are appropriately weighted to generate a decision making probability. The experimental results applied to a multi-view human action dataset show that this weighted extension improves the recognition performance by about 5% over equally weighted fusion deployed in our previous fusion framework.
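The weighted decision fusion can be sketched as a convex combination of the two classifiers' per-class probabilities (the 0.7 weight and sample probabilities below are illustrative; equal weighting, as in the earlier framework, corresponds to 0.5):

```python
import numpy as np

def fuse_decisions(p_depth, p_inertial, w_depth=0.7):
    """Weighted fusion of per-class probabilities from a depth-feature
    classifier and an inertial-feature classifier."""
    p = w_depth * np.asarray(p_depth) + (1.0 - w_depth) * np.asarray(p_inertial)
    p = p / p.sum()                      # renormalize to a probability vector
    return p, int(np.argmax(p))          # fused distribution and class label

p, label = fuse_decisions([0.6, 0.4], [0.2, 0.8])
```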
A GRAND VIEW OF THE BIRTH OF 'HEFTY' STARS - 30 DORADUS NEBULA MONTAGE
NASA Technical Reports Server (NTRS)
2002-01-01
This picture, taken in visible light with the Hubble Space Telescope's Wide Field and Planetary Camera 2 (WFPC2), represents a sweeping view of the 30 Doradus Nebula. But Hubble's infrared camera - the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) - has probed deeper into smaller regions of this nebula to unveil the stormy birth of massive stars. The montages of images in the upper left and upper right represent this deeper view. Each square in the montages is 15.5 light-years (19 arcseconds) across. The brilliant cluster R136, containing dozens of very massive stars, is at the center of this image. The infrared and visible-light views reveal several dust pillars that point toward R136, some with bright stars at their tips. One of them, at left in the visible-light image, resembles a fist with an extended index finger pointing directly at R136. The energetic radiation and high-speed material emitted by the massive stars in R136 are responsible for shaping the pillars and causing the heads of some of them to collapse, forming new stars. The infrared montage at upper left is enlarged in an accompanying image. Credits for NICMOS montages: NASA/Nolan Walborn (Space Telescope Science Institute, Baltimore, Md.) and Rodolfo Barba' (La Plata Observatory, La Plata, Argentina) Credits for WFPC2 image: NASA/John Trauger (Jet Propulsion Laboratory, Pasadena, Calif.) and James Westphal (California Institute of Technology, Pasadena, Calif.)
Feasibility evaluation of a motion detection system with face images for stereotactic radiosurgery.
Yamakawa, Takuya; Ogawa, Koichi; Iyatomi, Hitoshi; Kunieda, Etsuo
2011-01-01
In stereotactic radiosurgery we can irradiate a targeted volume precisely with a narrow high-energy x-ray beam, and thus the motion of a targeted area may cause side effects to normal organs. This paper describes our motion detection system with three USB cameras. To reduce the effect of change in illuminance in a tracking area we used an infrared light and USB cameras that were sensitive to the infrared light. The motion detection of a patient was performed by tracking his/her ears and nose with three USB cameras, where pattern matching between a predefined template image for each view and acquired images was done by an exhaustive search method with a general-purpose computing on a graphics processing unit (GPGPU). The results of the experiments showed that the measurement accuracy of our system was less than 0.7 mm, amounting to less than half of that of our previous system.
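The exhaustive template search (shown here on the CPU rather than via GPGPU) reduces to scoring every candidate position; a minimal sum-of-squared-differences sketch:

```python
import numpy as np

def match_template(image, template):
    """Return the (row, col) of the best template match by exhaustive
    sum-of-squared-differences search over all positions."""
    H, W = image.shape
    h, w = template.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            ssd = np.sum((image[y:y + h, x:x + w] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

image = np.zeros((6, 6))
template = np.array([[1.0, 2.0], [3.0, 4.0]])
image[2:4, 3:5] = template        # embed the template at (row 2, col 3)
pos = match_template(image, template)
```

The GPU version parallelizes the two outer loops, one thread per candidate position, which is what makes sub-millimeter tracking feasible at video rates.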
High-dynamic-range imaging for cloud segmentation
NASA Astrophysics Data System (ADS)
Dev, Soumyabrata; Savoy, Florian M.; Lee, Yee Hui; Winkler, Stefan
2018-04-01
Sky-cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed, and the regions near the horizon are underexposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg - an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR radiance maps for cloud segmentation and achieves very good results.
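Multi-exposure fusion in its simplest form weights each pixel by how well-exposed it is; a naive single-weight sketch (HDRCloudSeg's actual radiance-map generation is more sophisticated):

```python
import numpy as np

def exposure_fuse(stack, sigma=0.2):
    """Blend N aligned exposures by a Gaussian well-exposedness weight
    centred on mid-grey. stack: (N, H, W) luminance images in [0, 1]."""
    w = np.exp(-0.5 * ((stack - 0.5) / sigma) ** 2) + 1e-12
    return np.sum(w * stack, axis=0) / np.sum(w, axis=0)

# Under-, mid-, and over-exposed values of the same pixel fuse to mid-grey,
# recovering detail in both the circumsolar and horizon regions.
stack = np.array([[[0.1]], [[0.5]], [[0.9]]])
fused = exposure_fuse(stack)
```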
A Multi-Camera System for Bioluminescence Tomography in Preclinical Oncology Research
Lewis, Matthew A.; Richer, Edmond; Slavine, Nikolai V.; Kodibagkar, Vikram D.; Soesbe, Todd C.; Antich, Peter P.; Mason, Ralph P.
2013-01-01
Bioluminescent imaging (BLI) of cells expressing luciferase is a valuable noninvasive technique for investigating molecular events and tumor dynamics in the living animal. Current usage is often limited to planar imaging, but tomographic imaging can enhance the usefulness of this technique in quantitative biomedical studies by allowing accurate determination of tumor size and attribution of the emitted light to a specific organ or tissue. Bioluminescence tomography based on a single camera with source rotation or mirrors to provide additional views has previously been reported. We report here in vivo studies using a novel approach with multiple rotating cameras that, when combined with image reconstruction software, provides the desired representation of point source metastases and other small lesions. Comparison with MRI validated the ability to detect lung tumor colonization in mouse lung. PMID:26824926
Accuracy of Wearable Cameras to Track Social Interactions in Stroke Survivors.
Dhand, Amar; Dalton, Alexandra E; Luke, Douglas A; Gage, Brian F; Lee, Jin-Moo
2016-12-01
Social isolation after a stroke is related to poor outcomes. However, a full study of social networks on stroke outcomes is limited by the current metrics available. Typical measures of social networks rely on self-report, which is vulnerable to response bias and measurement error. We aimed to test the accuracy of an objective measure-wearable cameras-to capture face-to-face social interactions in stroke survivors. If accurate and usable in real-world settings, this technology would allow improved examination of social factors on stroke outcomes. In this prospective study, 10 stroke survivors each wore 2 wearable cameras: Autographer (OMG Life Limited, Oxford, United Kingdom) and Narrative Clip (Narrative, Linköping, Sweden). Each camera automatically took a picture every 20-30 seconds. Patients mingled with healthy controls for 5 minutes of 1-on-1 interactions followed by 5 minutes of no interaction for 2 hours. After the event, 2 blinded judges assessed whether photograph sequences identified interactions or noninteractions. Diagnostic accuracy statistics were calculated. A total of 8776 photographs were taken and adjudicated. In distinguishing interactions, the Autographer's sensitivity was 1.00 and specificity was .98. The Narrative Clip's sensitivity was .58 and specificity was 1.00. The receiver operating characteristic curves of the 2 devices were statistically different (Z = 8.26, P < .001). Wearable cameras can accurately detect social interactions of stroke survivors. Likely because of its large field of view, the Autographer was more sensitive than the Narrative Clip for this purpose. Copyright © 2016 National Stroke Association. Published by Elsevier Inc. All rights reserved.
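The reported accuracy statistics follow directly from the adjudicated counts; a sketch with hypothetical counts chosen to reproduce the Narrative Clip's figures (the study's raw counts are not given in the abstract):

```python
def diagnostic_accuracy(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical adjudication counts yielding sensitivity .58, specificity 1.00
sensitivity, specificity = diagnostic_accuracy(tp=58, fn=42, tn=100, fp=0)
```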
ETR COOLING TOWER. PUMP HOUSE (TRA645) IN SHADOW OF TOWER ...
ETR COOLING TOWER. PUMP HOUSE (TRA-645) IN SHADOW OF TOWER ON LEFT. AT LEFT OF VIEW, HIGH-BAY BUILDING IS ETR. ONE STORY ATTACHMENT IS ETR ELECTRICAL BUILDING. STACK AT RIGHT IS ETR STACK; MTR STACK IS TOWARD LEFT. CAMERA FACING NORTHEAST. INL NEGATIVE NO. 56-3799. Jack L. Anderson, 11/26/1956 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
40. ARAIII Prototype assembly and evaluation building ARA630. East end ...
40. ARA-III Prototype assembly and evaluation building ARA-630. East end and south side of building. Camera facing west. Roof railing is part of demolition preparations. Building beyond ARA-622 is ARA-621. In left of view is reactor building. ARA-607 is low-roofed portion, while high-bay portion is ARA-608. Ineel photo no. 3-27. - Idaho National Engineering Laboratory, Army Reactors Experimental Area, Scoville, Butte County, ID
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holder, L.M. III; Holder, L.M. IV
1999-07-01
The project was designed by the Overland Partners Architectural Firm for Riverbend Church of Austin as an Auditorium for Sunday Services and a venue for special theatrical presentations for the church and the community as well. It is an amphitheater on a hillside overlooking the Colorado River Valley. The amphitheater was selected as the building form to keep the audience closer to the speaker. A 175 ft wide by 60 ft tall arched window was installed on the north face to allow the audience to see the panoramic views of the tree-covered hills on the other side of the valley in the Texas Hill Country. Although the design is quite effective in achieving the program goals, these characteristics make it difficult to achieve effective daylighting without glare for the audience and television cameras, since both face the north glazing. The design team was faced with providing quality daylighting for the audience and television cameras from the wall behind the stage. Most television studios have carefully controlled lighting systems with the major lighting component from behind the cameras. Virtually all television facilities with daylight contributing to the production lighting are in buildings either with high-shading-coefficient glass producing illumination on all areas equally, or with almost all glass and daylighting from skylights and clerestories above. All television networks have requirements for control of the quality of the video images to parallel those conditions for the program to be aired.
Omnidirectional Underwater Camera Design and Calibration
Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David
2015-01-01
This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707
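The explicit ray modeling described above reduces, at each flat interface of the housing, to Snell's law applied in vector form; a minimal sketch of one refraction step (the port geometry, incidence angle, and refractive indices below are illustrative assumptions, not the paper's actual simulator):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit ray direction d at a surface with unit normal n
    (pointing toward the incident medium), from index n1 into n2.
    Returns None on total internal reflection."""
    r = n1 / n2
    cos_i = -np.dot(d, n)
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return r * d + (r * cos_i - cos_t) * n

# Ray leaving air (n=1.0) into water (n=1.33) through a flat port,
# incident 30 degrees off the port normal (0, 0, 1):
d = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
t = refract(d, np.array([0.0, 0.0, 1.0]), 1.0, 1.33)
```

Tracing each pixel's ray through the air-glass-water interfaces this way is what lets a FOV simulator predict coverage that the plain pinhole model gets wrong.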
Face detection assisted auto exposure: supporting evidence from a psychophysical study
NASA Astrophysics Data System (ADS)
Jin, Elaine W.; Lin, Sheng; Dharumalingam, Dhandapani
2010-01-01
Face detection has been implemented in many digital still cameras and camera phones with the promise of enhancing existing camera functions (e.g. auto exposure) and adding new features to cameras (e.g. blink detection). In this study we examined the use of face detection algorithms in assisting auto exposure (AE). The set of 706 images used in this study was captured using Canon Digital Single Lens Reflex cameras and subsequently processed with an image processing pipeline. A psychophysical study was performed to obtain the optimal exposure, along with the upper and lower bounds of exposure, for all 706 images. Three methods of marking faces were utilized: manual marking, face detection algorithm A (FD-A), and face detection algorithm B (FD-B). The manual marking method found 751 faces in 426 images, which served as the ground truth for face regions of interest. The remaining images either contain no faces or contain faces too small to be considered detectable. The two face detection algorithms differ in resource requirements and in performance. FD-A uses less memory and a lower gate count than FD-B, but FD-B detects more faces and produces fewer false positives. A face detection assisted auto exposure algorithm was developed and tested against the evaluation results from the psychophysical study. The AE test results showed noticeable improvement when faces were detected and used in auto exposure. However, the presence of false positives would negatively impact the added benefit.
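A face-detection-assisted AE algorithm of the kind described can be sketched as weighted metering: detected face boxes receive extra weight when estimating the exposure correction. The face weight and mid-gray target below are illustrative assumptions, not the authors' tuned algorithm:

```python
import numpy as np

def face_weighted_exposure(luma, faces, face_weight=4.0, target=0.18):
    """Exposure correction (in stops) from a linear-luminance frame,
    giving detected face boxes extra metering weight.
    `faces` is a list of (x, y, w, h) boxes; weight/target are assumed."""
    weights = np.ones_like(luma, dtype=float)
    for x, y, w, h in faces:
        weights[y:y + h, x:x + w] = face_weight   # emphasize faces
    mean_luma = np.average(luma, weights=weights)
    return np.log2(target / mean_luma)            # stops of correction

# A dark synthetic frame with a brighter face region:
frame = np.full((120, 160), 0.05)
frame[40:80, 60:100] = 0.20
ev = face_weighted_exposure(frame, [(60, 40, 40, 40)])
```

A false positive in this sketch behaves exactly as the abstract warns: a spurious box drags the weighted mean toward a non-face region and biases the correction.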
Strategic options towards an affordable high-performance infrared camera
NASA Astrophysics Data System (ADS)
Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.
2016-05-01
The promise of infrared (IR) imaging attaining low cost akin to the success of CMOS sensors has been hampered by the inability, despite well-documented advantages, to achieve the cost reductions necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640x512 pixel uncooled InGaAs system with high sensitivity and low noise (<50 e-), high dynamic range (100 dB), high frame rates (>500 frames per second (FPS)) at full resolution, and low power consumption (<1 W) in a compact package. This camera paves the way towards mass-market adoption not only by demonstrating the high-performance IR imaging capability demanded by military and industrial applications, but also by illuminating a path towards the justifiable price points essential for adoption in consumer-facing industries such as automotive, medical, and security imaging. The strategic options presented include new sensor manufacturing technologies that scale favorably towards automation, multi-focal-plane-array-compatible readout electronics, and dense or ultra-small pixel pitch devices.
Uniscale multi-view registration using double dog-leg method
NASA Astrophysics Data System (ADS)
Chen, Chao-I.; Sargent, Dusty; Tsai, Chang-Ming; Wang, Yuan-Fang; Koppel, Dan
2009-02-01
3D computer models of body anatomy can have many uses in medical research and clinical practice. This paper describes a robust method that uses videos of body anatomy to construct multiple partial 3D structures and then fuses them to form a larger, more complete computer model using the structure-from-motion framework. We employ the Double Dog-Leg (DDL) method, a trust-region-based nonlinear optimization method, to jointly optimize the camera motion parameters (rotation and translation) and determine a global scale that all partial 3D structures should agree upon. These optimized motion parameters are used for constructing the local structures, and the global scale is essential for multi-view registration after all the partial structures are built. In order to provide a good initial guess of the camera motion parameters and outlier-free 2D point correspondences for DDL, we also propose a two-stage scheme in which multi-RANSAC with a normalized eight-point algorithm is performed first, and then a few iterations of an over-determined five-point algorithm are used to polish the results. Our experimental results using colonoscopy video show that the proposed scheme always produces more accurate outputs than the standard RANSAC scheme. Furthermore, since we have obtained many reliable point correspondences, time-consuming and error-prone registration methods like iterative closest point (ICP)-based algorithms can be replaced by a simple rigid-body transformation solver when merging partial structures into a larger model.
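The "simple rigid-body transformation solver" that replaces ICP once correspondences are known is typically the closed-form Kabsch/Procrustes solution; a sketch under that assumption (the abstract does not name the exact solver):

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with R @ P[i] + t ~ Q[i],
    for corresponding Nx3 point sets (Kabsch algorithm via SVD)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution:
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

# Recover a known rotation about z plus a translation:
ang = np.radians(40)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
P = np.random.default_rng(0).normal(size=(30, 3))
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_transform(P, Q)
```

Because the correspondences are given, this one-shot solve avoids ICP's iterative nearest-neighbor search and its local minima.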
Swanson, Alexandra; Kosmala, Margaret; Lintott, Chris; Simpson, Robert; Smith, Arfon; Packer, Craig
2015-01-01
Camera traps can be used to address large-scale questions in community ecology by providing systematic data on an array of wide-ranging species. We deployed 225 camera traps across 1,125 km2 in Serengeti National Park, Tanzania, to evaluate spatial and temporal inter-species dynamics. The cameras have operated continuously since 2010 and had accumulated 99,241 camera-trap days and produced 1.2 million sets of pictures by 2013. Members of the general public classified the images via the citizen-science website www.snapshotserengeti.org. Multiple users viewed each image and recorded the species, number of individuals, associated behaviours, and presence of young. Over 28,000 registered users contributed 10.8 million classifications. We applied a simple algorithm to aggregate these individual classifications into a final ‘consensus’ dataset, yielding a final classification for each image and a measure of agreement among individual answers. The consensus classifications and raw imagery provide an unparalleled opportunity to investigate multi-species dynamics in an intact ecosystem and a valuable resource for machine-learning and computer-vision research. PMID:26097743
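A plurality-vote aggregation of the kind described, yielding a final label plus an agreement score per image, might look like the following; this is a simplified stand-in, not Snapshot Serengeti's actual algorithm:

```python
from collections import Counter

def consensus(classifications):
    """Aggregate one image's user classifications into a consensus label
    and the fraction of users who agreed with it (plurality vote)."""
    votes = Counter(classifications)
    species, count = votes.most_common(1)[0]
    return species, count / len(classifications)

# Four hypothetical volunteer answers for one image:
label, agreement = consensus(["zebra", "zebra", "wildebeest", "zebra"])
```

The agreement fraction serves the same role as the paper's "measure of agreement among individual answers": low values flag images worth expert review.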
2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...
2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING WEST TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA
Single-Camera Stereoscopy Setup to Visualize 3D Dusty Plasma Flows
NASA Astrophysics Data System (ADS)
Romero-Talamas, C. A.; Lemma, T.; Bates, E. M.; Birmingham, W. J.; Rivera, W. F.
2016-10-01
A setup to visualize and track individual particles in multi-layered dusty plasma flows is presented. The setup consists of a single camera with variable frame rate, and a pair of adjustable mirrors that project the same field of view from two different angles to the camera, allowing for three-dimensional tracking of particles. Flows are generated by inclining the plane in which the dust is levitated using a specially designed setup that allows for external motion control without compromising vacuum. Dust illumination is achieved with an optics arrangement that includes a Powell lens that creates a laser fan with adjustable thickness and with approximately constant intensity everywhere. Both the illumination and the stereoscopy setup allow for the camera to be placed at right angles with respect to the levitation plane, in preparation for magnetized dusty plasma experiments in which there will be no direct optical access to the levitation plane. Image data and analysis of unmagnetized dusty plasma flows acquired with this setup are presented.
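Once the mirror pair is calibrated as a second virtual camera, each particle's 3-D position follows from standard two-view triangulation; a minimal linear (DLT) sketch with synthetic projection matrices (not the experiment's actual calibration):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from its normalized
    image coordinates x1, x2 in two views with 3x4 projection matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null vector of A = homogeneous X
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Pinhole projection of a 3-D point to normalized image coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic views (identity intrinsics; second view shifted 0.5 along x)
# of the point (0.1, -0.2, 3.0):
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.1, -0.2, 3.0])
X = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

Running this per particle per frame gives the three-dimensional tracks the setup is designed to capture.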
Instruments for Imaging from Far to Near
NASA Technical Reports Server (NTRS)
Mungas, Greg; Boynton, John; Sepulveda, Cesar
2009-01-01
The acronym CHAMP (signifying camera, hand lens, and microscope) denotes any of several proposed optoelectronic instruments that would be capable of color imaging at working distances that could be varied continuously through a range from infinity down to several millimeters. As in any optical instrument, the magnification, depth of field, and spatial resolution would vary with the working distance. For example, in one CHAMP version, at a working distance of 2.5 m, the instrument would function as an electronic camera with a magnification of 1/100, whereas at a working distance of 7 mm, the instrument would function as a microscope/electronic camera with a magnification of 4.4. Moreover, as described below, when operating at or near the shortest-working-distance/highest-magnification combination, a CHAMP could be made to perform one or more spectral imaging functions. CHAMPs were originally intended to be used in robotic geological exploration of the Moon and Mars. The CHAMP concept also has potential for diverse terrestrial applications that could include remotely controlled or robotic geological exploration, prospecting, field microbiology, environmental surveying, and assembly-line inspection. A CHAMP (see figure) would include two lens cells: (1) a distal cell corresponding to the objective lens assembly of a conventional telescope or microscope and (2) a proximal cell that would contain the focusing camera lens assembly and the camera electronic image-detector chip, which would be of the active-pixel-sensor (APS) type. The distal lens cell would face outward from a housing, while the proximal lens cell would lie in a clean environment inside the housing. The proximal lens cell would contain a beam splitter that would enable simultaneous use of the imaging optics (that is, proximal and distal lens assemblies) for imaging and illumination of the field of view.
The APS chip would be mounted on a focal plane on a side face of the beam splitter, while light for illuminating the field of view would enter the imaging optics via the end face of the beam splitter. The proximal lens cell would be mounted on a sled that could be translated along the optical axis for focus adjustment. The position of the CHAMP would initially be chosen to set the desired working distance between the distal lens and the object to be examined (corresponding to an approximate desired magnification). During subsequent operation, the working distance would ordinarily remain fixed at the chosen value, and the position of the proximal lens cell within the instrument would be adjusted for focus as needed.
NASA Astrophysics Data System (ADS)
Drass, Holger; Vanzi, Leonardo; Torres-Torriti, Miguel; Dünner, Rolando; Shen, Tzu-Chiang; Belmar, Francisco; Dauvin, Lousie; Staig, Tomás.; Antognini, Jonathan; Flores, Mauricio; Luco, Yerko; Béchet, Clémentine; Boettger, David; Beard, Steven; Montgomery, David; Watson, Stephen; Cabral, Alexandre; Hayati, Mahmoud; Abreu, Manuel; Rees, Phil; Cirasuolo, Michele; Taylor, William; Fairley, Alasdair
2016-08-01
The Multi-Object Optical and Near-infrared Spectrograph (MOONS) will cover the Very Large Telescope's (VLT) field of view with 1000 fibres. The fibres will be mounted on fibre positioning units (FPU) implemented as two-DOF robot arms to ensure a homogeneous coverage of the 500 square arcmin field of view. To determine the positions of the 1000 fibres accurately and quickly, a metrology system has been designed. This paper presents the hardware and software design and performance of the metrology system. The metrology system is based on the analysis of images taken by a circular array of 12 cameras located close to the VLT's derotator ring around the Nasmyth focus. The system includes 24 individually adjustable lamps. The fibre positions are measured through dedicated metrology targets mounted on top of the FPUs and fiducial markers connected to the FPU support plate, which are imaged at the same time. A flexible pipeline based on VLT standards is used to process the images. The position accuracy was determined to be 5 μm in the central region of the images. Including the outer regions, the overall positioning accuracy is 25 μm. The MOONS metrology system is fully set up with a working prototype. The results in parts of the images are already excellent. By using upcoming hardware and improving the calibration, the system is expected to fulfil the accuracy requirement over the complete field of view for all metrology cameras.
BOMBOLO: a Multi-Band, Wide-field, Near UV/Optical Imager for the SOAR 4m Telescope
NASA Astrophysics Data System (ADS)
Angeloni, R.; Guzmán, D.; Puzia, T. H.; Infante, L.
2014-10-01
BOMBOLO is a new multi-passband visitor instrument for the SOAR observatory. The first fully Chilean instrument of its kind, it is a three-arm imager covering near-UV and optical wavelengths. The three arms work simultaneously and independently, providing synchronized imaging capability for rapid astronomical events. BOMBOLO will be able to address largely unexplored events on minute-to-second timescales, with the following leading science cases: 1) simultaneous multiband flickering studies of accretion phenomena; 2) near UV/optical diagnostics of stellar evolutionary phases; 3) exoplanetary transits; and 4) microlensing follow-up. BOMBOLO's optical design consists of a wide-field collimator feeding two dichroics at 390 and 550 nm. Each arm encompasses a camera, a filter wheel, and a science CCD230-42, imaging a 7 x 7 arcmin field of view onto a 2k x 2k image. The three CCDs will have different coatings to optimise the efficiency of each camera. The detector controller running the three cameras will be Torrent (the NOAO open-source system), and a PanView application will run the instrument and produce the data cubes. The instrument is at the Conceptual Design stage, having been approved by the SOAR Board of Directors as a visitor instrument in 2012 and having been granted full funding from CONICYT, the Chilean State Agency of Research, in 2013. The Design Phase is starting now and will be completed in late 2014, followed by a construction phase in 2015 and 2016A, with expected commissioning in 2016B and 2017A.
MuSICa: the Multi-Slit Image Slicer for the est Spectrograph
NASA Astrophysics Data System (ADS)
Calcines, A.; López, R. L.; Collados, M.
2013-09-01
Integral field spectroscopy (IFS) is a technique that allows one to obtain the spectra of all the points of a bidimensional field of view simultaneously. It is being applied to the new generation of the largest night-time telescopes but it is also an innovative technique for solar physics. This paper presents the design of a new image slicer, MuSICa (Multi-Slit Image slicer based on collimator-Camera), for the integral field spectrograph of the 4-m aperture European Solar Telescope (EST). MuSICa is a multi-slit image slicer that decomposes an 80 arcsec2 field of view into slices of 50 μm and reorganizes it into eight slits of 0.05 arcsec width × 200 arcsec length. It is a telecentric system with an optical quality at diffraction limit compatible with the two modes of operation of the spectrograph: spectroscopic and spectro-polarimetric. This paper shows the requirements, technical characteristics and layout of MuSICa, as well as other studied design options.
1. VARIABLEANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND ...
1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING NORTH TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA
MTR WING, TRA604, INTERIOR. BASEMENT. INTERIOR VIEW FROM SAME LOCATION ...
MTR WING, TRA-604, INTERIOR. BASEMENT. INTERIOR VIEW FROM SAME LOCATION IN WEST CORRIDOR AS PHOTO ID-33-G-42 BUT CAMERA FACES SOUTH. SIGN ON DOOR FOR "PIPE TUNNEL" WARNS OF RADIOLOGICAL AND ASBESTOS HAZARDS. DOOR HAS METAL HASPS. SIGN ON OVERHEAD WASTE HEAT RECOVERY PIPES SAYS THEY CONTAIN "ASBESTOS FREE INSULATION." FIRE DOOR AT LEFT LEADS TO STAIRWAY TO FIRST FLOOR. DOOR AT RIGHT LEADS TO ROOM WHICH ONCE CONTAINED MTR LIBRARY. INL NEGATIVE NO. HD46-13-4. Mike Crane, Photographer, 2/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION ...
7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION EQUIPMENT AND STORAGE CABINET. - Variable Angle Launcher Complex, Camera Stations, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA
NASA Astrophysics Data System (ADS)
Ichikawa, Takashi; Obata, Tomokazu
2016-08-01
A design of the wide-field infrared camera (AIRC) for the Antarctic 2.5 m infrared telescope (AIRT) is presented. The off-axis design provides a 7'.5 × 7'.5 field of view with 0".22 pixel⁻¹ sampling in the wavelength range of 1 to 5 μm for three simultaneous color bands, using cooled optics and three 2048×2048 InSb focal plane arrays. Good image quality is obtained over the entire field of view with practically no chromatic aberration. The image size corresponds to the diffraction limit of the 2.5 m telescope at 2 μm and longer. To exploit the stable atmosphere with extremely low precipitable water vapor (PWV), superb seeing quality, and the cadence of the polar winter at Dome Fuji on the Antarctic plateau, the camera will be dedicated to transit observations of exoplanets. A multi-object spectroscopic mode with low spectral resolution (R ≈ 50-100) will be added for spectroscopic transit observations at 1-5 μm. This spectroscopic capability in the extremely low PWV environment of Antarctica will be very effective for studying the existence of water vapor in the atmospheres of super-Earths.
Design of an open-ended plenoptic camera for three-dimensional imaging of dusty plasmas
NASA Astrophysics Data System (ADS)
Sanpei, Akio; Tokunaga, Kazuya; Hayashi, Yasuaki
2017-08-01
Herein, we report the design of a plenoptic imaging system for three-dimensional reconstruction of dusty plasmas using an integral photography technique. This open-ended system is constructed with a multi-convex lens array and a typical reflex CMOS camera. We validated the design of the reconstruction system using known target particles. Additionally, the system has been applied to observations of fine particles floating in a horizontal, parallel-plate radio-frequency plasma. The system works well in the range of our dusty plasma experiment: we can identify the three-dimensional positions of dust particles from a single-exposure image obtained from one viewing port.
Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung
2018-01-01
Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most previously proposed PAD methods for face recognition systems have focused on handcrafted image features designed by the expert knowledge of their designers, such as the Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. Deep learning methods developed in the computer vision research community have proven suitable for automatically training feature extractors that can enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from images captured by a visible-light camera sensor. Our proposed method uses a convolutional neural network (CNN) to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate between real and presentation attack face images. By combining the two types of image features, we form a new type of image feature, called hybrid features, which has stronger discrimination ability than either single type. Finally, we use the support vector machine (SVM) method to classify the image features into the real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
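The hybrid-feature idea can be sketched as concatenating a CNN embedding with a handcrafted LBP histogram before classification. The single-scale LBP below is a simplified stand-in for the paper's multi-level MLBP, the deep vector is a random placeholder for a real CNN embedding, and the SVM stage is only noted in a comment:

```python
import numpy as np

def lbp_histogram(gray):
    """Normalized 8-neighbour local binary pattern histogram
    (single-scale; a simplified stand-in for MLBP)."""
    c = gray[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = gray[1 + dy:gray.shape[0] - 1 + dy,
                  1 + dx:gray.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def hybrid_features(deep_vec, gray):
    """Concatenate a CNN embedding with the handcrafted LBP histogram;
    the combined vector would then feed an SVM classifier."""
    return np.concatenate([deep_vec, lbp_histogram(gray)])

# Placeholder inputs: a random 64x64 'face crop' and a fake 128-D embedding:
rng = np.random.default_rng(1)
face = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
feat = hybrid_features(rng.normal(size=128), face)
```

The design point is simply that the two feature families describe complementary evidence (learned global structure vs. local skin texture), so the concatenated vector is more discriminative than either alone.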
Fluctuations of Lake Eyre, South Australia
NASA Technical Reports Server (NTRS)
2002-01-01
Lake Eyre is a large salt lake situated between two deserts in one of Australia's driest regions. However, this low-lying lake attracts run-off from one of the largest inland drainage systems in the world. The drainage basin is very responsive to rainfall variations and changes dramatically with Australia's inter-annual weather fluctuations. When Lake Eyre fills, as it did in 1989, it is temporarily Australia's largest lake, and becomes dense with birds, frogs and colorful plant life. The lake responds to extended dry periods (often associated with El Nino events) by drying completely. These four images from the Multi-angle Imaging SpectroRadiometer contrast the lake area at the start of the austral summers of 2000 and 2002. The top two panels portray the region as it appeared on December 9, 2000. Heavy rains in the first part of 2000 caused both the north and south sections of the lake to fill partially, and the northern part of the lake still contained significant standing water by the time these data were acquired. The bottom panels were captured on November 29, 2002. Rainfall during 2002 was significantly below average ( http://www.bom.gov.au/ ), although showers occurring in the week before the image was acquired helped alleviate this condition slightly. The left-hand panels portray the area as it appeared to MISR's vertical-viewing (nadir) camera, and are false-color views comprised of data from the near-infrared, green and blue channels. Here, wet and/or moist surfaces appear blue-green, since water selectively absorbs longer wavelengths such as near-infrared. The right-hand panels are multi-angle composites created with red band data from MISR's 60-degree forward, nadir and 60-degree backward-viewing cameras, displayed as red, green and blue, respectively.
In these multi-angle composites, color variations serve as a proxy for changes in angular reflectance, and indicate textural properties of the surface related to roughness and/or moisture content. Data from the two dates were processed identically to preserve relative variations in brightness between them. Wet surfaces or areas with standing water appear green due to the effect of sunglint at the nadir camera view angle. Dry, salt-encrusted parts of the lake appear bright white or gray. Purple areas have enhanced forward scattering, possibly as a result of surface moistness. Some variations exhibited by the multi-angle composites are not discernible in the nadir multi-spectral images and vice versa, suggesting that the combination of angular and spectral information is a more powerful diagnostic of surface conditions than either technique by itself. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbits 5194 and 15679. The panels cover an area of 146 kilometers x 122 kilometers, and utilize data from blocks 113 to 114 within World Reference System-2 path 100. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
91. ARAIII. GCRE reactor building (ARA608) at 48 percent completion. ...
91. ARA-III. GCRE reactor building (ARA-608) at 48 percent completion. Camera faces west end of building; shows roll-up door. High-bay section at right of view. Petro-chem heater stack extends above roof of low-bay section at left. Excavation for 13.8 kV electrical conduit in foreground. January 20, 1959. Ineel photo no. 59-322. Photographer: Jack L. Anderson. - Idaho National Engineering Laboratory, Army Reactors Experimental Area, Scoville, Butte County, ID
LPT. Aerial of low power test (TAN640 and 641) and ...
LPT. Aerial of low power test (TAN-640 and -641) and shield test (TAN-645 and -646) facilities. Camera facing northwest. Low power test facility at right. Shield test facility at left. Flight engine test area in background at center left of view. Administrative and A&M areas at right. Photographer: Lowin. Date: February 24, 1965. INEEL negative no. 65-991 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...
3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH THE VAL TO THE RIGHT, LOOKING NORTHEAST. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA
BigView Image Viewing on Tiled Displays
NASA Technical Reports Server (NTRS)
Sandstrom, Timothy
2007-01-01
BigView allows for interactive panning and zooming of images of arbitrary size on desktop PCs running Linux. Additionally, it can work in a multi-screen environment where multiple PCs cooperate to view a single, large image. Using this software, one can explore, on relatively modest machines, images such as the Mars Orbiter Camera mosaic [92,160 x 33,280 pixels]. The images must first be converted into paged format, where the image is stored in 256 x 256 pages to allow rapid movement of pixels into texture memory. The format contains an image pyramid: a set of scaled versions of the original image. Each scaled image is 1/2 the size of the previous, starting with the original down to the smallest, which fits into a single 256 x 256 page.
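The pyramid construction described above can be sketched as repeated 2x2 downsampling until the coarsest level fits one 256 x 256 page; a minimal version, assuming a simple box filter (BigView's actual resampling kernel is not specified here):

```python
import numpy as np

def build_pyramid(img, page=256):
    """Image pyramid in the spirit of BigView's paged format:
    halve the image repeatedly until one level fits a single page."""
    levels = [img]
    while max(levels[-1].shape) > page:
        a = levels[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        # 2x2 box-filter downsample (average of each 2x2 block)
        half = (a[:h:2, :w:2].astype(float) + a[1:h:2, :w:2] +
                a[:h:2, 1:w:2] + a[1:h:2, 1:w:2]) / 4.0
        levels.append(half)
    return levels

# A 2048 x 1024 image yields levels 2048x1024, 1024x512, 512x256, 256x128:
levels = build_pyramid(np.zeros((2048, 1024)))
```

At display time the viewer picks the level whose scale best matches the current zoom and streams only the visible 256 x 256 pages into texture memory.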
From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth
2015-08-05
This animation shows images of the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope and the Earth, one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA's Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA). Read more: www.nasa.gov/feature/goddard/from-a-million-miles-away-na... NASA image use policy. NASA Goddard Space Flight Center enables NASA's mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA's accomplishments by contributing compelling scientific knowledge to advance the Agency's mission.
From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth
2017-12-08
This animation still image shows the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope and the Earth, one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA's Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA). Read more: www.nasa.gov/feature/goddard/from-a-million-miles-away-na... NASA image use policy. NASA Goddard Space Flight Center enables NASA's mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA's accomplishments by contributing compelling scientific knowledge to advance the Agency's mission.
Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong
2015-04-14
Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration of the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is subject to vibrations in multiple degrees of freedom induced by the moving vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.
Researches on hazard avoidance cameras calibration of Lunar Rover
NASA Astrophysics Data System (ADS)
Li, Chunyan; Wang, Li; Lu, Xin; Chen, Jihua; Fan, Shenghong
2017-11-01
China's Lunar Lander and Rover will be launched in 2013 to carry out the mission objectives of lunar soft landing and patrol exploration. The Lunar Rover has a forward-facing stereo camera pair (Hazcams) for hazard avoidance, and Hazcam calibration is essential for stereo vision. The Hazcam optics are f-theta fish-eye lenses with a 120°×120° horizontal/vertical field of view (FOV) and a 170° diagonal FOV. They introduce significant distortion, and the acquired images are quite warped, so conventional camera calibration algorithms no longer work well. A photogrammetric calibration method for the geometric model of this type of fish-eye optics is investigated in this paper. In the method, the Hazcam model is represented by collinearity equations with interior orientation and exterior orientation parameters [1] [2]. For high-precision applications, the accurate calibration model is formulated with radial symmetric distortion and decentering distortion, as well as parameters to model affinity and shear, based on the fisheye deformation model [3] [4]. The proposed method has been applied to the stereo camera calibration system for the Lunar Rover.
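As a rough illustration of the f-theta (equidistant) fisheye model named above, the sketch below projects a camera-frame 3D point with image radius r = f·θ plus a simplified symmetric radial-distortion polynomial. It is not the paper's full calibration model, which also carries decentering, affinity, and shear terms:

```python
import math

def ftheta_project(X, Y, Z, f, k=(0.0, 0.0)):
    """Project a camera-frame 3D point with an ideal f-theta (equidistant)
    fisheye model: r = f * theta, where theta is the angle off the optical
    axis. k holds optional symmetric radial-distortion terms (a simplified
    stand-in for the full calibration model)."""
    theta = math.atan2(math.hypot(X, Y), Z)   # angle from the optical axis
    theta_d = theta * (1 + k[0] * theta**2 + k[1] * theta**4)
    r = f * theta_d                           # image radius in pixels
    phi = math.atan2(Y, X)                    # azimuth in the image plane
    return r * math.cos(phi), r * math.sin(phi)
```

Unlike a pinhole model, a point 60° off-axis (the edge of the 120° FOV) still lands at the finite radius f·π/3.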
NASA Astrophysics Data System (ADS)
Frouin, Robert; Deschamps, Pierre-Yves; Rothschild, Richard; Stephan, Edward; Leblanc, Philippe; Duttweiler, Fred; Ghaemi, Tony; Riedi, Jérôme
2006-12-01
The Monitoring Aerosols in the Ultraviolet Experiment (MAUVE) and the Short-Wave Infrared Polarimeter Experiment (SWIPE) instruments have been designed to collect, from a typical sun-synchronous polar orbit at 800 km altitude, global observations of the spectral, polarized, and directional radiance reflected by the earth-atmosphere system for a wide range of applications. Based on the heritage of the POLDER radiometer, the MAUVE/SWIPE instrument concept combines the merits of TOMS for observing in the ultra-violet, MISR for wide field-of-view range, MODIS for multi-spectral aspects in the visible and near infrared, and the POLDER instrument for polarization. The instruments are camera systems with 2-dimensional detector arrays, allowing a 120-degree field-of-view with adequate ground resolution (i.e., 0.4 or 0.8 km at nadir) from satellite altitude. Multi-angle viewing is achieved by the along-track migration at spacecraft velocity of the 2-dimensional field-of-view. Between the cameras' optical assembly and detector array are two filter wheels, one carrying spectral filters, the other polarizing filters, allowing measurements of the first three Stokes parameters, I, Q, and U, of the incident radiation in 16 spectral bands optimally placed in the interval 350-2200 nm. The spectral range is 350-1050 nm for the MAUVE instrument and 1050-2200 nm for the SWIPE instrument. The radiometric requirements are defined to fully exploit the multi-angular, multi-spectral, and multi-polarized capability of the instruments. These include a wide dynamic range, a signal-to-noise ratio above 500 in all channels at maximum radiance level, i.e., when viewing a surface target of albedo equal to 1, and a noise-equivalent-differential reflectance better than 0.0005 at low signal level for a sun at zenith. To achieve daily global coverage, a pair of MAUVE and SWIPE instruments would be carried by each of two mini-satellites placed on interlaced orbits.
The equator crossing time of the two satellites would be adjusted to allow simultaneous observations of the overlapping zone viewed from the two parallel orbits of the twin satellites. Using twin satellites instead of a single satellite would allow measurements in a more complete range of scattering angles. A MAUVE/SWIPE satellite mission would improve significantly the accuracy of ocean color observations from space and would extend the retrieval of ocean optical properties to the ultra-violet, where they become very sensitive to detritus material and dissolved organic matter. It would also provide a complete description of the scattering and absorption properties of aerosol particles, as well as their size distribution and vertical distribution. Over land, the retrieved bidirectional reflectance function would allow a better classification of terrestrial vegetation and discrimination of surface types. The twin satellite concept, by providing stereoscopic capability, would offer the possibility to analyze the three-dimensional structure and radiative properties of cloud fields.
Dense 3D Face Alignment from 2D Video for Real-Time Use
Jeni, László A.; Cohn, Jeffrey F.; Kanade, Takeo
2018-01-01
To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person’s face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of landmarks and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction, extension to multi-view reconstruction, temporal integration for videos and 3D head-pose estimation. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org. PMID:29731533
Principal axis-based correspondence between multiple cameras for people tracking.
Hu, Weiming; Hu, Min; Zhou, Xue; Tan, Tieniu; Lou, Jianguang; Maybank, Steve
2006-04-01
Visual surveillance using multiple cameras has attracted increasing interest in recent years. Correspondence between multiple cameras is one of the most important and basic problems that multi-camera visual surveillance raises. In this paper, we propose a simple and robust method, based on the principal axes of people, to match people across multiple cameras. The correspondence likelihood, reflecting the similarity of pairs of principal axes of people, is constructed from the relationship between the "ground points" of people detected in each camera view and the intersections of principal axes detected in different camera views and transformed to the same view. Our method has the following desirable properties: 1) Camera calibration is not needed. 2) Accurate motion detection and segmentation are less critical, due to the robustness of the principal-axis-based feature to noise. 3) Based on the fused data derived from the correspondence results, positions of people in each camera view can be accurately located even when the people are partially occluded in all views. The experimental results on several real video sequences from outdoor environments have demonstrated the effectiveness, efficiency, and robustness of our method.
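The geometric test underlying such a correspondence likelihood can be sketched as follows, assuming a known ground-plane homography H between two views: a 2D line (a principal axis in homogeneous form) transfers as H⁻ᵀl, and a match score could then decrease with the distance from the other view's ground point to the transferred axis. Function names here are illustrative, not the authors' code:

```python
import numpy as np

def axis_to_view(line, H):
    """Transfer a homogeneous 2D line l (ax + by + c = 0) from one view
    to another under homography H: lines map as H^-T l."""
    return np.linalg.inv(H).T @ line

def point_line_distance(pt, line):
    """Perpendicular distance from a 2D point to a homogeneous line."""
    a, b, c = line
    x, y = pt
    return abs(a * x + b * y + c) / np.hypot(a, b)
```

With the identity homography the vertical axis x = 5, i.e. line (1, 0, -5), is unchanged, and a ground point (8, 2) lies 3 pixels from it; a likelihood could be any decreasing function of that distance.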
Quality improving techniques for free-viewpoint DIBR
NASA Astrophysics Data System (ADS)
Do, Luat; Zinger, Sveta; de With, Peter H. N.
2010-02-01
Interactive free-viewpoint selection applied to a 3D multi-view signal is an attractive prospective feature of the rapidly developing 3D TV media. This paper explores a new rendering algorithm that computes a free viewpoint based on depth-image warping between two reference views from existing cameras. We have developed three quality-enhancing techniques that specifically aim at solving the major artifacts. First, resampling artifacts are filled in by a combination of median filtering and inverse warping. Second, contour artifacts are processed by omitting the warping of edges at high discontinuities. Third, we employ a depth signal for more accurate disocclusion inpainting. We obtain an average PSNR gain of 3 dB and 4.5 dB for the 'Breakdancers' and 'Ballet' sequences, respectively, compared to recently published results. While experimenting with synthetic data, we observe that the rendering quality is highly dependent on the complexity of the scene. Moreover, experiments are performed using compressed video from the surrounding cameras. The overall system quality is dominated by the rendering quality and not by coding.
A novel camera localization system for extending three-dimensional digital image correlation
NASA Astrophysics Data System (ADS)
Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher
2018-03-01
The monitoring of civil, mechanical, and aerospace structures is important, especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements achieved in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision system's extrinsic and intrinsic parameters. That is, the position of the cameras relative to each other (i.e. separation distance, camera angles, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between the cameras. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitor large-sized structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability to determine the cameras' position in space for performing accurate 3D-DIC calibration and measurements.
Surface Stereo Imager on Mars, Face-On
NASA Technical Reports Server (NTRS)
2008-01-01
This image is a view of NASA's Phoenix Mars Lander's Surface Stereo Imager (SSI) as seen by the lander's Robotic Arm Camera. This image was taken on the afternoon of the 116th Martian day, or sol, of the mission (September 22, 2008). The mast-mounted SSI, which provided the images used in the 360 degree panoramic view of Phoenix's landing site, is about 4 inches tall and 8 inches long. The two 'eyes' of the SSI seen in this image can take photos to create three-dimensional views of the landing site. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Estimation of Antenna Pose in the Earth Frame Using Camera and IMU Data from Mobile Phones
Wang, Zhen; Jin, Bingwen; Geng, Weidong
2017-01-01
The poses of base station antennas play an important role in cellular network optimization. Existing methods of pose estimation are based on physical measurements performed either by tower climbers or using additional sensors attached to antennas. In this paper, we present a novel non-contact method of antenna pose measurement based on multi-view images of the antenna and inertial measurement unit (IMU) data captured by a mobile phone. Given a known 3D model of the antenna, we first estimate the antenna pose relative to the phone camera from the multi-view images and then employ the corresponding IMU data to transform the pose from the camera coordinate frame into the Earth coordinate frame. To enhance the resulting accuracy, we improve existing camera-IMU calibration models by introducing additional degrees of freedom between the IMU sensors and defining a new error metric based on both the downtilt and azimuth angles, instead of a unified rotational error metric, to refine the calibration. In comparison with existing camera-IMU calibration methods, our method achieves an improvement in azimuth accuracy of approximately 1.0 degree on average while maintaining the same level of downtilt accuracy. For the pose estimation in the camera coordinate frame, we propose an automatic method of initializing the optimization solver and generating bounding constraints on the resulting pose to achieve better accuracy. With this initialization, state-of-the-art visual pose estimation methods yield satisfactory results in more than 75% of cases when plugged into our pipeline, and our solution, which takes advantage of the constraints, achieves even lower estimation errors on the downtilt and azimuth angles, both on average (0.13 and 0.3 degrees lower, respectively) and in the worst case (0.15 and 7.3 degrees lower, respectively), according to an evaluation conducted on a dataset consisting of 65 groups of data. 
We show that both of our enhancements contribute to the performance improvement offered by the proposed estimation pipeline, which achieves downtilt and azimuth accuracies of respectively 0.47 and 5.6 degrees on average and 1.38 and 12.0 degrees in the worst case, thereby satisfying the accuracy requirements for network optimization in the telecommunication industry. PMID:28397765
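The frame transformation at the core of this pipeline, reading downtilt and azimuth off an antenna-to-Earth rotation, might be sketched as below. The East-North-Up convention and the choice of boresight axis are assumptions for illustration, not the paper's exact definitions:

```python
import math
import numpy as np

def downtilt_azimuth(R_earth_from_antenna, boresight=(0.0, 1.0, 0.0)):
    """Given an antenna-to-Earth (East-North-Up) rotation matrix, return
    the boresight's downtilt (degrees below horizontal) and azimuth
    (degrees clockwise from north). The boresight axis is an assumed
    convention for this sketch."""
    d = np.asarray(R_earth_from_antenna) @ np.asarray(boresight)
    east, north, up = d
    azimuth = math.degrees(math.atan2(east, north)) % 360.0
    downtilt = math.degrees(math.atan2(-up, math.hypot(east, north)))
    return downtilt, azimuth
```

For example, a boresight pointing east and 45° below the horizon yields a downtilt of 45° and an azimuth of 90°.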
High-Definition Television (HDTV) Images for Earth Observations and Earth Science Applications
NASA Technical Reports Server (NTRS)
Robinson, Julie A.; Holland, S. Douglas; Runco, Susan K.; Pitts, David E.; Whitehead, Victor S.; Andrefouet, Serge M.
2000-01-01
As part of Detailed Test Objective 700-17A, astronauts acquired Earth observation images from orbit using a high-definition television (HDTV) camcorder. Here we provide a summary of qualitative findings following completion of tests during missions STS (Space Transportation System)-93 and STS-99. We compared HDTV imagery stills to images taken using payload bay video cameras, a Hasselblad film camera, and an electronic still camera. We also evaluated the potential for motion video observations of changes in sunlight and the use of multi-aspect viewing to image aerosols. Spatial resolution and color quality are far superior in HDTV images compared to National Television Systems Committee (NTSC) video images. Thus, HDTV provides the first viable option for video-based remote sensing observations of Earth from orbit. Although under ideal conditions HDTV images have less spatial resolution than medium-format film cameras, such as the Hasselblad, under some conditions on orbit the HDTV images acquired compared favorably with the Hasselblad. Of particular note was the quality of color reproduction in the HDTV images. HDTV and electronic still camera (ESC) images were not compared with matched fields of view, and so spatial resolution could not be compared for the two image types. However, the color reproduction of the HDTV stills was truer than the colors in the ESC images. As HDTV becomes the operational video standard for the Space Shuttle and Space Station, HDTV has great potential as a source of Earth-observation data. Planning for the conversion from NTSC to HDTV video standards should include planning for Earth data archiving and distribution.
LIFTING THE VEIL OF DUST TO REVEAL THE SECRETS OF SPIRAL GALAXIES
NASA Technical Reports Server (NTRS)
2002-01-01
Astronomers have combined information from the NASA Hubble Space Telescope's visible- and infrared-light cameras to show the hearts of four spiral galaxies peppered with ancient populations of stars. The top row of pictures, taken by a ground-based telescope, represents complete views of each galaxy. The blue boxes outline the regions observed by the Hubble telescope. The bottom row represents composite pictures from Hubble's visible- and infrared-light cameras, the Wide Field and Planetary Camera 2 (WFPC2) and the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). Astronomers combined views from both cameras to obtain the true ages of the stars surrounding each galaxy's bulge. The Hubble telescope's sharper resolution allows astronomers to study the intricate structure of a galaxy's core. The galaxies are ordered by the size of their bulges. NGC 5838, an 'S0' galaxy, is dominated by a large bulge and has no visible spiral arms; NGC 7537, an 'Sbc' galaxy, has a small bulge and loosely wound spiral arms. Astronomers think that the structure of NGC 7537 is very similar to our Milky Way. The galaxy images are composites made from WFPC2 images taken with blue (4445 Angstroms) and red (8269 Angstroms) filters, and NICMOS images taken in the infrared (16,000 Angstroms). They were taken in June, July, and August of 1997. Credits for the ground-based images: Allan Sandage (The Observatories of the Carnegie Institution of Washington) and John Bedke (Computer Sciences Corporation and the Space Telescope Science Institute) Credits for WFPC2 and NICMOS composites: NASA, ESA, and Reynier Peletier (University of Nottingham, United Kingdom)
Depth-tunable three-dimensional display with interactive light field control
NASA Astrophysics Data System (ADS)
Xie, Songlin; Wang, Peng; Sang, Xinzhu; Li, Chenyu; Dou, Wenhua; Xiao, Liquan
2016-07-01
A software-defined depth-tunable three-dimensional (3D) display with interactive 3D depth control is presented. With the proposed post-processing system, the disparity of multi-view media can be freely adjusted. Benefiting from the wealth of information inherently contained in dense multi-view images captured with a parallel-arrangement camera array, the 3D light field is built, and the light field structure is controlled to adjust the disparity without additionally acquired depth information, since the light field structure itself contains depth information. A statistical analysis based on least squares is carried out to extract the depth information inherent in the light field structure, and the accurate depth information can be used to re-parameterize light fields for the autostereoscopic display, so that a smooth motion parallax can be guaranteed. Experimental results show that the system is convenient and effective for adjusting 3D scene performance on the 3D display.
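Disparity adjustment through the light-field structure can be illustrated with classic shift-and-add refocusing over a parallel camera array: each view is shifted in proportion to its camera offset and a chosen disparity, then the stack is averaged. This integer-shift sketch is a simplified stand-in for the paper's light-field re-parameterization:

```python
import numpy as np

def refocus(views, offsets, disparity):
    """Shift-and-add refocusing over a 1D parallel camera array.
    Each view (2D array) is shifted horizontally by
    (camera offset * chosen disparity) and the stack is averaged.
    Integer-pixel shifts only, for clarity."""
    h, w = views[0].shape
    out = np.zeros((h, w))
    for img, off in zip(views, offsets):
        shift = int(round(off * disparity))
        out += np.roll(img, shift, axis=1)  # circular shift along columns
    return out / len(views)
```

A scene point whose per-view position moves by one pixel per unit camera offset is brought into alignment (sharp focus) by choosing disparity = 1, while points at other depths stay spread across columns.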
Earth Observations taken by Expedition 26 crewmember
2011-02-20
ISS026-E-028384 (22 Feb. 2011) --- This high-oblique nighttime view of the bottom two thirds of the Florida peninsula, photographed by an Expedition 26 crew member aboard the International Space Station at 220 miles above Earth, displays many of the state's well-lighted metropolitan areas. The crew member used a digital still camera equipped with an 80-mm lens to expose the frame. The station was above the Gulf of Mexico, facing eastward toward the Atlantic, at the time the photo was taken.
HOT CELL BUILDING, TRA632. WHILE STEEL BEAMS DEFINE FUTURE WALLS ...
HOT CELL BUILDING, TRA-632. WHILE STEEL BEAMS DEFINE FUTURE WALLS OF THE BUILDING, SHEET STEEL DEFINES THE HOT CELL "BOX" ITSELF. THREE OPERATING WINDOWS ON LEFT; ONE VIEWING WINDOW ON RIGHT. TUBES WILL CONTAIN SERVICE AND CONTROL LEADS. SPACE BETWEEN INNER AND OUTER BOX WALLS WILL BE FILLED WITH SHIELDED WINDOWS AND BARYTES CONCRETE. CAMERA FACES SOUTHEAST. INL NEGATIVE NO. 7933. Unknown Photographer, ca. 5/1953 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
MTR WING A, TRA604, INTERIOR. BASEMENT. DETAIL OF A19 LAB ...
MTR WING A, TRA-604, INTERIOR. BASEMENT. DETAIL OF A-19 LAB AREA ALONG SOUTH WALL. SIGN ON FLOOR DIRECTS WORKERS TO OBTAIN WHOLE BODY FRISK UPON LEAVING AREA. SIGN ON EQUIPMENT IN CENTER OF VIEW REQUESTS WORKERS TO "NOTIFY HEALTH PHYSICS BEFORE WORKING ON THIS SYSTEM." CAMERA FACING SOUTHWEST. INL NEGATIVE NO. HD46-13-2. Mike Crane, Photographer, 2/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
INFLIGHT - APOLLO X (CREW ACTIVITIES)
1969-05-18
S69-33999 (18 May 1969) --- A close-up view of the face of astronaut Thomas P. Stafford, Apollo 10 commander, is seen in this color reproduction taken from the third television transmission made by the color television camera aboard the Apollo 10 spacecraft. When this picture was made, the Apollo 10 spacecraft was on a trans-lunar course and was already about 36,000 nautical miles from Earth. Also aboard Apollo 10 were astronauts John W. Young, command module pilot, and Eugene A. Cernan, lunar module pilot.
Multispectral Snapshot Imagers Onboard Small Satellite Formations for Multi-Angular Remote Sensing
NASA Technical Reports Server (NTRS)
Nag, Sreeja; Hewagama, Tilak; Georgiev, Georgi; Pasquale, Bert; Aslam, Shahid; Gatebe, Charles K.
2017-01-01
Multispectral snapshot imagers are capable of producing 2D spatial images with a single exposure at numerous selected wavelengths using the same camera, and therefore operate differently from push-broom or whisk-broom imagers. They are payloads of choice in multi-angular, multi-spectral imaging missions that use small satellites flying in controlled formation to retrieve Earth science measurements dependent on the target's Bidirectional Reflectance Distribution Function (BRDF). Narrow fields of view are needed to capture images with moderate spatial resolution. This paper quantifies the dependencies of the imager's optical system, spectral elements, and camera on the requirements of the formation mission, and their impact on performance metrics such as spectral range, swath, and signal-to-noise ratio (SNR). All variables and metrics have been generated from a comprehensive payload design tool. The baseline optical parameters selected (diameter 7 cm, focal length 10.5 cm, pixel size 20 micron, field of view 1.15 deg) rely on available snapshot imaging technologies. The spectral components shortlisted were waveguide spectrometers, acousto-optic tunable filters (AOTF), electronically actuated Fabry-Perot interferometers, and integral field spectrographs. Qualitative evaluation favored AOTFs because of their low weight, small size, and flight heritage. Quantitative analysis showed that waveguide spectrometers perform better in terms of achievable swath (10-90 km) and SNR (greater than 20) for 86 wavebands, but the data volume generated will need very high-bandwidth communication to downlink. AOTFs meet the external data volume caps as well as the minimum spectral (wavebands) and radiometric (SNR) requirements, and are therefore found to be currently feasible in spite of lower swath and SNR.
32. DETAIL VIEW OF CAMERA PIT SOUTH OF LAUNCH PAD ...
32. DETAIL VIEW OF CAMERA PIT SOUTH OF LAUNCH PAD WITH CAMERA AIMED AT LAUNCH DECK; VIEW TO NORTHEAST. - Cape Canaveral Air Station, Launch Complex 17, Facility 28402, East end of Lighthouse Road, Cape Canaveral, Brevard County, FL
8. VAL CAMERA CAR, CLOSEUP VIEW OF 'FLARE' OR TRAJECTORY ...
8. VAL CAMERA CAR, CLOSE-UP VIEW OF 'FLARE' OR TRAJECTORY CAMERA ON SLIDING MOUNT. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA
Person re-identification over camera networks using multi-task distance metric learning.
Ma, Lianyang; Yang, Xiaokang; Tao, Dacheng
2014-08-01
Person reidentification in a camera network is a valuable yet challenging problem to solve. Existing methods learn a common Mahalanobis distance metric using the data collected from different cameras and then exploit the learned metric for identifying people in the images. However, the cameras in a camera network have different settings, and the recorded images are seriously affected by variability in illumination conditions, camera viewing angles, and background clutter. Using a common metric for person reidentification tasks on different camera pairs overlooks the differences in camera settings. At the same time, it is very time-consuming to label people manually in images from surveillance videos; for example, in most existing person reidentification data sets, only one image of a person is collected from each of only two cameras. Therefore, directly learning a unique Mahalanobis distance metric for each camera pair is susceptible to over-fitting on insufficiently labeled data. In this paper, we reformulate person reidentification in a camera network as a multitask distance metric learning problem. The proposed method designs multiple Mahalanobis distance metrics to cope with the complicated conditions that exist in typical camera networks. These metrics are different but related, and are learned jointly with an added regularization term that alleviates over-fitting. Furthermore, by extending this formulation, we present a novel multitask maximally collapsing metric learning (MtMCML) model for person reidentification in a camera network. Experimental results demonstrate that formulating person reidentification over camera networks as a multitask distance metric learning problem can improve performance, and our proposed MtMCML works substantially better than other current state-of-the-art person reidentification methods.
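A hypothetical sketch of the two ingredients named above: a per-camera-pair Mahalanobis distance, and a joint regularizer that keeps the per-pair metrics close to a shared one, so they stay "different but related". This illustrates the idea, not the MtMCML training objective itself:

```python
import numpy as np

def mahalanobis(x, y, M):
    """Squared Mahalanobis distance under metric matrix M (PSD)."""
    d = x - y
    return float(d @ M @ d)

def joint_regularizer(metrics, M_shared, lam=0.1):
    """Penalty tying each camera-pair metric to a shared metric, a
    simplified stand-in for the paper's joint regularization: it is
    zero when all per-pair metrics coincide with the shared one."""
    return lam * sum(np.sum((M - M_shared) ** 2) for M in metrics)
```

Training would minimize the per-pair reidentification losses (each using its own M) plus this coupling term, trading pair-specific adaptation against over-fitting on scarce labels.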
Nekton Interaction Monitoring System
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-03-15
The software provides a real-time processing system for sonar to detect and track animals, and to extract water column biomass statistics in order to facilitate continuous monitoring of an underwater environment. The Nekton Interaction Monitoring System (NIMS) extracts and archives tracking and backscatter statistics data from a real-time stream of data from a sonar device. NIMS also sends real-time tracking messages over the network that can be used by other systems to generate other metrics or to trigger instruments such as an optical video camera. A web-based user interface provides remote monitoring and control. NIMS currently supports three popular sonar devices: M3 multi-beam sonar (Kongsberg), EK60 split-beam echo-sounder (Simrad), and BlueView acoustic camera (Teledyne).
STS-98 U.S. Lab Destiny rests in Atlantis' payload bay
NASA Technical Reports Server (NTRS)
2001-01-01
KENNEDY SPACE CENTER, Fla. -- In this view from Level 5, wing platform, of Atlantis' payload bay, the U.S. Lab Destiny can be seen near the bottom. A key element in the construction of the International Space Station, Destiny is 28 feet long and weighs 16 tons. Destiny will be attached to the Unity node of the ISS using the Shuttle's robot arm, seen here on the left with the help of an elbow camera, facing left. Measurements of the elbow camera revealed only a one-inch clearance from the U.S. Lab payload, which is under review. Destiny will fly on STS-98, the seventh construction flight to the ISS. Launch of STS-98 is scheduled for Jan. 19 at 2:11 a.m. EST.
NASA Technical Reports Server (NTRS)
2006-01-01
As NASA's Mars Exploration Rover Spirit began collecting images for a 360-degree panorama of new terrain, the rover captured this view of a dark boulder with an interesting surface texture. The boulder sits about 40 centimeters (16 inches) tall on Martian sand about 5 meters (16 feet) away from Spirit. It is one of many dark, volcanic rock fragments -- many pocked with rounded holes called vesicles -- littering the slope of 'Low Ridge.' The rock surface facing the rover is similar in appearance to the surface texture on the outside of lava flows on Earth. Spirit took this approximately true-color image with the panoramic camera on the rover's 810th sol, or Martian day, of exploring Mars (April 13, 2006), using the camera's 753-nanometer, 535-nanometer, and 432-nanometer filters.
Impact of multi-focused images on recognition of soft biometric traits
NASA Astrophysics Data System (ADS)
Chiesa, V.; Dugelay, J. L.
2016-09-01
In video surveillance, the estimation of semantic traits such as gender and age has always been a debated topic because of the uncontrolled environment: while light and pose variations have been largely studied, defocused images are still rarely investigated. Recently, the emergence of new technologies such as plenoptic cameras has made it possible to address these problems by analyzing multi-focus images. Thanks to a microlens array arranged between the sensor and the main lens, light field cameras are able to record not only the RGB values but also the information related to the direction of light rays: the additional data make it possible to render the image with different focal planes after the acquisition. For our experiments, we use the GUC Light Field Face Database, which includes pictures from the first-generation Lytro camera. Taking advantage of light field images, we explore the influence of defocusing on gender recognition and age estimation problems. Evaluations are computed with up-to-date and competitive technologies based on deep learning algorithms. After studying the relationship between focus and gender recognition and between focus and age estimation, we compare the results obtained from images defocused by the Lytro software with images blurred by more standard filters in order to explore the difference between defocusing and blurring effects. In addition, we investigate the impact of deblurring on defocused images with the goal of better understanding the different impacts of defocusing and standard blurring on gender and age estimation.
NASA Astrophysics Data System (ADS)
de Villiers, Jason; Jermy, Robert; Nicolls, Fred
2014-06-01
This paper presents a system to determine the photogrammetric parameters of a camera. The lens distortion, focal length and camera six-degree-of-freedom (DOF) position are calculated. The system caters for cameras of different sensitivity spectra and fields of view without any mechanical modifications. The distortion characterization, a variant of Brown's classic plumb-line method, allows for many radial and tangential distortion coefficients and finds the optimal principal point. Typical values are 5 radial and 3 tangential coefficients. These parameters are determined stably and demonstrably produce superior results to low-order models, despite popular and prevalent misconceptions to the contrary. The system produces coefficients to model both the distorted-to-undistorted pixel coordinate transformation (e.g. for target designation) and the inverse transformation (e.g. for image stitching and fusion), allowing deterministic rates far exceeding real time. The focal length is determined to minimise the error in absolute photogrammetric positional measurement for both multi-camera and monocular (e.g. helmet tracker) systems. The system determines the 6 DOF position of the camera in a chosen coordinate system. It can also determine the 6 DOF offset of the camera relative to its mechanical mount. This allows faulty cameras to be replaced without requiring a recalibration of the entire system (such as an aircraft cockpit). Results from two simple applications of the calibration results are presented: stitching and fusion of the images from a dual-band visual/LWIR camera array, and a simple laboratory optical helmet tracker.
Multi-target detection and positioning in crowds using multiple camera surveillance
NASA Astrophysics Data System (ADS)
Huang, Jiahu; Zhu, Qiuyu; Xing, Yufeng
2018-04-01
In this study, we propose a pixel correspondence algorithm for positioning in crowds based on constraints on the distance between lines of sight, grayscale differences, and height in a world coordinates system. First, a Gaussian mixture model is used to obtain the background and foreground from multi-camera videos. Second, the hair and skin regions are extracted as regions of interest. Finally, the correspondences between each pixel in the region of interest are found under multiple constraints and the targets are positioned by pixel clustering. The algorithm can provide appropriate redundancy information for each target, which decreases the risk of losing targets due to a large viewing angle and wide baseline. To address the correspondence problem for multiple pixels, we construct a pixel-based correspondence model based on a similar permutation matrix, which converts the correspondence problem into a linear programming problem where a similar permutation matrix is found by minimizing an objective function. The correct pixel correspondences can be obtained by determining the optimal solution of this linear programming problem and the three-dimensional position of the targets can also be obtained by pixel clustering. Finally, we verified the algorithm with multiple cameras in experiments, which showed that the algorithm has high accuracy and robustness.
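The paper's conversion of pixel correspondence into a search for a permutation matrix minimizing an objective can be sketched as a linear assignment problem. The cost entries below are illustrative stand-ins for the weighted sight-line-distance, grayscale-difference and height terms, and a brute-force search stands in for the linear-programming solver:

```python
from itertools import permutations

def best_assignment(cost):
    """Brute-force search for the permutation (one-to-one pixel
    correspondence) minimizing the total matching cost."""
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best:
            best, best_perm = total, perm
    return best, best_perm

# Illustrative 3x3 cost matrix combining (hypothetical, pre-weighted)
# sight-line distance, grayscale difference and height terms.
cost = [
    [0.2, 0.9, 0.8],
    [0.7, 0.1, 0.9],
    [0.8, 0.8, 0.3],
]
total, perm = best_assignment(cost)
# the diagonal (identity) assignment is cheapest here: total ≈ 0.6
```

For realistic numbers of candidate pixels, the factorial brute force would be replaced by the Hungarian algorithm or the linear program described in the abstract.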
NASA Astrophysics Data System (ADS)
To, T.; Nguyen, D.; Tran, G.
2015-04-01
Vietnam's heritage system has declined because of poor conservation conditions. Sustainable development requires firm control, spatial planning and organization, and reasonable investment. Moreover, in the field of Cultural Heritage, the use of automated photogrammetric systems based on Structure from Motion (SfM) techniques is widespread. With the potential of high resolution, low cost, large field of view, ease of use, rapidity and completeness, the derivation of 3D metric information from Structure-and-Motion images is receiving great attention. In addition, heritage objects in the form of 3D physical models are recorded not only for documentation purposes, but also for historical interpretation, restoration, and cultural and educational purposes. This study presents the archaeological documentation of the One Pillar Pagoda in Hanoi, the capital of Vietnam. The data were acquired with a Canon EOS 550D digital camera (CMOS APS-C sensor, 22.3 x 14.9 mm). Camera calibration and orientation were carried out with the VisualSFM, CMPMVS (Multi-View Reconstruction) and SURE (Photogrammetric Surface Reconstruction from Imagery) software packages. The final result is a scaled 3D model of the One Pillar Pagoda, displayed in different views in the MeshLab software.
Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe
2010-04-01
Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 years result in death. Technology for assisting drivers when backing has had limited success in preventing backing crashes. Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor-camera system; controls were not. Three crash scenarios were introduced. Setting: a parking facility at UMass Amherst, USA. Subjects: 46 drivers (33 men, 13 women), average age 29 years, who were Massachusetts residents licensed within the USA for an average of 9.3 years. Interventions: vehicles equipped with a rear-view camera and sensor-system-based parking aid. Measures: subjects' eye fixations while driving and researchers' observations of collisions with objects during backing. Only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system.
Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe
2012-01-01
Context: Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 years result in death. Technology for assisting drivers when backing has had limited success in preventing backing crashes. Objectives: Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? Design: 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor-camera system; controls were not. Three crash scenarios were introduced. Setting: Parking facility at UMass Amherst, USA. Subjects: 46 drivers (33 men, 13 women), average age 29 years, who were Massachusetts residents licensed within the USA for an average of 9.3 years. Interventions: Vehicles equipped with a rear-view camera and sensor-system-based parking aid. Main Outcome Measures: Subjects' eye fixations while driving and researchers' observations of collisions with objects during backing. Results: Only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. Conclusions: This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system. PMID:20363812
PBF Reactor Building (PER620). Camera faces southeast. Concrete placement will ...
PBF Reactor Building (PER-620). Camera faces southeast. Concrete placement will leave opening for neutron camera to be installed later. Note vertical piping within rebar. Photographer: John Capek. Date: July 6, 1967. INEEL negative no. 67-3514 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
Stereo depth distortions in teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Vonsydow, Marika
1988-01-01
In teleoperation, a typical application of stereo vision is to view a work space located a short distance (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors are measured on the order of 2 cm. A geometric analysis was made of the distortion of the fronto-parallel plane of divergence for stereo TV viewing. The results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration which gives high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolutions, but cause greater depth distortions. Thus with larger intercamera distances, operators will make greater depth errors (because of the greater distortions), but will be more certain that they are not errors (because of the higher resolution).
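The resolution side of this trade-off follows directly from stereo triangulation geometry. A minimal sketch under a parallel-camera, thin-lens approximation (the focal length and pixel pitch below are illustrative, not taken from the paper):

```python
# Depth z maps to disparity d = f*b/z for baseline b and focal length f,
# so a one-pixel disparity step p corresponds to a depth uncertainty of
# roughly dz ≈ z^2 * p / (f * b): widening the baseline improves resolution.
def depth_resolution(z, baseline, focal_len, pixel_pitch):
    return z**2 * pixel_pitch / (focal_len * baseline)

z = 1.4        # m, viewing distance from the paper
f = 0.016      # m, illustrative 16 mm lens
p = 10e-6      # m, illustrative 10 micron pixel pitch

# Doubling the intercamera distance halves the depth uncertainty,
# consistent with the resolution/distortion trade-off described above.
dz_narrow = depth_resolution(z, 0.1, f, p)   # 10 cm baseline
dz_wide = depth_resolution(z, 0.2, f, p)     # 20 cm baseline
```

The distortion side of the trade-off (the warping of the fronto-parallel plane under convergence) requires the full converged-camera geometry the paper analyses and is not captured by this approximation.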
Acquisition of the spatial temperature distribution of rock faces by using infrared thermography
NASA Astrophysics Data System (ADS)
Beham, Michael; Rode, Matthias; Schnepfleitner, Harald; Sass, Oliver
2013-04-01
Rock temperature plays a central role in weathering and therefore influences the risk potential originating from rockfall processes. So far, mainly point-based measuring methods have been used for temperature acquisition, and accordingly, two-dimensional temperature data are rare. To overcome this limitation, an infrared camera was used to collect and analyse data on the spatial temperature distribution on 10 x 10 m sections of rock faces in the Gesäuse (900 m a.s.l.) and in the Dachsteingebirge (2700 m a.s.l.) within the framework of the research project ROCKING ALPS (FWF-P24244). The advantage of infrared thermography for capturing area-wide temperatures has hardly ever been exploited in this context. In order to investigate the differences between north-facing and south-facing rock faces at about the same period of time, it was necessary to move the camera between the sites. The resulting offset of the time-lapse infrared images made it necessary to develop a sophisticated methodology to rectify the captured images in order to create matching datasets for later analysis. With the relatively simple camera used, one of the main challenges was to find a way to convert the colour-scale or grey-scale values of the rectified image back to temperature values after the rectification process. The processing steps were mainly carried out with MATLAB. South-facing rock faces generally experienced higher temperatures and amplitudes compared to the north-facing ones. Regarding the spatial temperature distribution, the temperatures of shady areas were clearly below those of sunny ones, with the latter also showing the highest amplitudes. Joints and sun-shaded areas were characterised by attenuated diurnal temperature fluctuations closely paralleling the air temperature. The temperature of protruding rock parts and of loose debris responded very quickly to changes in radiation and air temperature, while massive rock reacted more slowly.
The potential effects of temperature on weathering could so far only be assessed in a qualitative way. However, the variability of temperatures and amplitudes on a rather small and homogeneous section of a rockwall is surprisingly high, which challenges any statements on weathering effectiveness based on point measurements. In simple terms, the use of infrared thermography has proven its value in the presented pilot study and promises to be a useful tool for research into rock weathering.
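The grey-value-to-temperature back-conversion mentioned above can be sketched as a linear rescaling, assuming the camera's colour bar spans its temperature range linearly. The -5 °C to 35 °C span below is illustrative, not taken from the study:

```python
def gray_to_temperature(gray, t_min, t_max, gray_max=255):
    """Linearly map an 8-bit grey value back to temperature, assuming
    the thermal image's legend spans [t_min, t_max] linearly."""
    return t_min + (gray / gray_max) * (t_max - t_min)

# Illustrative scale: image legend spans -5 degC to 35 degC.
t_mid = gray_to_temperature(128, -5.0, 35.0)   # mid-grey, roughly mid-range
```

In practice a per-image legend (and, for false-colour output, an inversion of the colour map rather than a single grey channel) would be needed, which is presumably what the MATLAB processing chain handled.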
Two-Camera Acquisition and Tracking of a Flying Target
NASA Technical Reports Server (NTRS)
Biswas, Abhijit; Assad, Christopher; Kovalik, Joseph M.; Pain, Bedabrata; Wrigley, Chris J.; Twiss, Peter
2008-01-01
A method and apparatus have been developed to solve the problem of automated acquisition and tracking, from a location on the ground, of a luminous moving target in the sky. The method involves the use of two electronic cameras: (1) a stationary camera having a wide field of view, positioned and oriented to image the entire sky; and (2) a camera that has a much narrower field of view (a few degrees wide) and is mounted on a two-axis gimbal. The wide-field-of-view stationary camera is used to initially identify the target against the background sky. So that the approximate position of the target can be determined, pixel locations on the image-detector plane in the stationary camera are calibrated with respect to azimuth and elevation. The approximate target position is used to initially aim the gimballed narrow-field-of-view camera in the approximate direction of the target. Next, the narrow-field-of-view camera locks onto the target image, and thereafter the gimbals are actuated as needed to maintain lock and thereby track the target with precision greater than that attainable by use of the stationary camera.
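The pixel-to-azimuth/elevation calibration of the wide-field camera can be sketched with an ideal pinhole model; the zenith-pointing assumption and the axis convention below are illustrative choices, not taken from the NTRS report:

```python
import math

def pixel_to_az_el(u, v, cx, cy, focal_px):
    """Approximate azimuth/elevation (radians) of a sky target from its
    pixel location, assuming an ideal pinhole camera aimed at the zenith.
    cx, cy: principal point in pixels; focal_px: focal length in pixels."""
    x = (u - cx) / focal_px
    y = (v - cy) / focal_px
    az = math.atan2(x, y)                        # hypothetical convention
    el = math.pi / 2 - math.atan(math.hypot(x, y))
    return az, el

# A target imaged at the principal point lies at the zenith (el = 90 deg);
# targets farther from the image center have lower elevation.
az0, el0 = pixel_to_az_el(640, 512, 640, 512, 800)
az1, el1 = pixel_to_az_el(800, 512, 640, 512, 800)
```

A real all-sky camera would use a fisheye projection and a per-pixel calibration table rather than this pinhole approximation, but the lookup structure is the same.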
Detail of main entrance; camera facing southwest. Mare Island ...
Detail of main entrance; camera facing southwest. - Mare Island Naval Shipyard, Hospital Headquarters, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
Interior detail of tower space; camera facing southwest. Mare ...
Interior detail of tower space; camera facing southwest. - Mare Island Naval Shipyard, Defense Electronics Equipment Operating Center, I Street, terminus west of Cedar Avenue, Vallejo, Solano County, CA
Depth Perception In Remote Stereoscopic Viewing Systems
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Von Sydow, Marika
1989-01-01
Report describes theoretical and experimental studies of perception of depth by human operators through stereoscopic video systems. Purpose of such studies is to optimize dual-camera configurations used to view workspaces of remote manipulators at distances of 1 to 3 m from cameras. According to analysis, static stereoscopic depth distortion is decreased, without decreasing stereoscopic depth resolution, by increasing camera-to-object and intercamera distances and camera focal length. Analysis further predicts that dynamic stereoscopic depth distortion is reduced by rotating cameras around center of circle passing through point of convergence of viewing axes and first nodal points of two camera lenses.
NASA Astrophysics Data System (ADS)
Kutulakos, Kyros N.; O'Toole, Matthew
2015-03-01
Conventional cameras record all light falling on their sensor regardless of the path that light followed to get there. In this paper we give an overview of a new family of computational cameras that offers many more degrees of freedom. These cameras record just a fraction of the light coming from a controllable source, based on the actual 3D light path followed. Photos and live video captured this way offer an unconventional view of everyday scenes in which the effects of scattering, refraction and other phenomena can be selectively blocked or enhanced, visual structures that are too subtle to notice with the naked eye can become apparent, and object appearance can depend on depth. We give an overview of the basic theory behind these cameras and their DMD-based implementation, and discuss three applications: (1) live indirect-only imaging of complex everyday scenes, (2) reconstructing the 3D shape of scenes whose geometry or material properties make them hard or impossible to scan with conventional methods, and (3) acquiring time-of-flight images that are free of multi-path interference.
Using Google Streetview Panoramic Imagery for Geoscience Education
NASA Astrophysics Data System (ADS)
De Paor, D. G.; Dordevic, M. M.
2014-12-01
Google Streetview is a feature of Google Maps and Google Earth that allows viewers to switch from map or satellite view to 360° panoramic imagery recorded close to the ground. Most panoramas are recorded by Google engineers using special cameras mounted on the roofs of cars. Bicycles, snowmobiles, and boats have also been used, and sometimes the camera has been mounted on a backpack for off-road use by hikers and skiers or attached to scuba-diving gear for "Underwater Streetview (sic)." Streetview panoramas are linked together so that the viewer can change viewpoint by clicking forward and reverse buttons. They therefore create a 4-D touring effect. As part of the GEODE project ("Google Earth for Onsite and Distance Education"), we are experimenting with the use of Streetview imagery for geoscience education. Our web-based test application allows instructors to select locations for students to study. Students are presented with a set of questions or tasks that they must address by studying the panoramic imagery. Questions include identification of rock types, structures such as faults, and general geological setting. The student view is locked into Streetview mode until they submit their answers, whereupon the map and satellite views become available, allowing students to zoom out and verify their location on Earth. Student learning is scaffolded by automatic computerized feedback. There are many existing Streetview panoramas with rich geological content. Additionally, instructors and members of the general public can create panoramas, including 360° Photo Spheres, by stitching images taken with their mobile devices and submitting them to Google for evaluation and hosting. A multi-thousand-dollar, multi-directional camera and mount can be purchased from DIY-streetview.com. This allows power users to generate their own high-resolution panoramas. A cheaper, 360° video camera is soon to be released according to geonaute.com. 
Thus there are opportunities for geoscience educators both to use existing Streetview imagery and to generate new imagery for specific locations of geological interest. The GEODE team includes the authors and: H. Almquist, C. Bentley, S. Burgin, C. Cervato, G. Cooper, P. Karabinos, T. Pavlis, J. Piatek, B. Richards, J. Ryan, R. Schott, K. St. John, B. Tewksbury, and S. Whitmeyer.
Detail of stairway at north elevation; camera facing southwest. ...
Detail of stairway at north elevation; camera facing southwest. - Mare Island Naval Shipyard, WAVES Officers Quarters, Cedar Avenue, west side between Tisdale Avenue & Eighth Street, Vallejo, Solano County, CA
Interior detail of lobby ceiling design; camera facing east. ...
Interior detail of lobby ceiling design; camera facing east. - Mare Island Naval Shipyard, Defense Electronics Equipment Operating Center, I Street, terminus west of Cedar Avenue, Vallejo, Solano County, CA
Interior detail of stairway in tower; camera facing south. ...
Interior detail of stairway in tower; camera facing south. - Mare Island Naval Shipyard, Defense Electronics Equipment Operating Center, I Street, terminus west of Cedar Avenue, Vallejo, Solano County, CA
Thermal-to-visible face recognition using partial least squares.
Hu, Shuowen; Choi, Jonghyun; Chan, Alex L; Schwartz, William Robson
2015-03-01
Although visible face recognition has been an active area of research for several decades, cross-modal face recognition has only been explored by the biometrics community relatively recently. Thermal-to-visible face recognition is one of the most difficult cross-modal face recognition challenges, because of the difference in phenomenology between the thermal and visible imaging modalities. We address the cross-modal recognition problem using a partial least squares (PLS) regression-based approach consisting of preprocessing, feature extraction, and PLS model building. The preprocessing and feature extraction stages are designed to reduce the modality gap between the thermal and visible facial signatures, and facilitate the subsequent one-vs-all PLS-based model building. We incorporate multi-modal information into the PLS model building stage to enhance cross-modal recognition. The performance of the proposed recognition algorithm is evaluated on three challenging datasets containing visible and thermal imagery acquired under different experimental scenarios: time-lapse, physical tasks, mental tasks, and subject-to-camera range. These scenarios represent difficult challenges relevant to real-world applications. We demonstrate that the proposed method performs robustly for the examined scenarios.
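The one-vs-all model-building stage described above can be sketched as follows. Ordinary least squares stands in here for the PLS regression step, and the features, subject labels and class separation are entirely synthetic:

```python
import numpy as np

def build_one_vs_all(features, labels):
    """Fit one linear model per gallery subject: target +1 for that
    subject's samples, -1 for everyone else (OLS stands in for PLS)."""
    models = {}
    for subject in sorted(set(labels)):
        y = np.where(np.asarray(labels) == subject, 1.0, -1.0)
        w, *_ = np.linalg.lstsq(features, y, rcond=None)
        models[subject] = w
    return models

def identify(models, probe):
    # Assign the probe to the subject whose model responds strongest.
    return max(models, key=lambda s: probe @ models[s])

rng = np.random.default_rng(1)
# Synthetic gallery: 3 subjects, 4 samples each, well separated in a
# 4-dimensional feature space (stand-in for the cross-modal features).
centers = 5.0 * np.eye(3, 4)
X = np.vstack([centers[i] + 0.1 * rng.normal(size=(4, 4)) for i in range(3)])
labels = [0] * 4 + [1] * 4 + [2] * 4
models = build_one_vs_all(X, labels)
```

A faithful reproduction would substitute a PLS solver (e.g. NIPALS) for the `lstsq` call and apply the paper's preprocessing to reduce the thermal/visible modality gap before model building.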
Opportunity View During Exploration in 'Duck Bay,' Sols 1506-1510 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
Left-eye and right-eye views of a color stereo pair for PIA11787 [figures removed for brevity, see original site]. NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings on the 1,506th through 1,510th Martian days, or sols, of Opportunity's mission on Mars (April 19-23, 2008). North is at the top. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The site is within an alcove called 'Duck Bay' in the western portion of Victoria Crater. Victoria Crater is about 800 meters (half a mile) wide. Opportunity had descended into the crater at the top of Duck Bay 7 months earlier. By the time the rover acquired this view, it had examined rock layers inside the rim. Opportunity was headed for a closer look at the base of a promontory called 'Cape Verde,' the cliff at about the 2 o'clock position of this image, before leaving Victoria. The face of Cape Verde is about 6 meters (20 feet) tall. Just clockwise from Cape Verde is the main bowl of Victoria Crater, with sand dunes at the bottom. A promontory called 'Cabo Frio,' at the southern side of Duck Bay, stands near the 6 o'clock position of the image. This view is presented as a cylindrical-perspective projection with geometric seam correction.
Protective laser beam viewing device
Neil, George R.; Jordan, Kevin Carl
2012-12-18
A protective laser beam viewing system or device including a camera selectively sensitive to laser light wavelengths and a viewing screen receiving images from the laser sensitive camera. According to a preferred embodiment of the invention, the camera is worn on the head of the user or incorporated into a goggle-type viewing display so that it is always aimed at the area of viewing interest to the user and the viewing screen is incorporated into a video display worn as goggles over the eyes of the user.
Automatic multi-camera calibration for deployable positioning systems
NASA Astrophysics Data System (ADS)
Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan
2012-06-01
Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multicamera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
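Once the essential matrix between a camera pair has been estimated (e.g. with the 5-point method), the relative pose follows from its singular value decomposition. A minimal numpy sketch with a synthetic ground-truth pose; the cheirality test that selects among the four candidate solutions is omitted:

```python
import numpy as np

def decompose_essential(E):
    """Decompose an essential matrix into the two candidate relative
    rotations and the translation direction (up to sign), via SVD."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    R1 = U @ W @ Vt      # first rotation candidate
    R2 = U @ W.T @ Vt    # second rotation candidate
    t = U[:, 2]          # translation direction (sign ambiguous)
    return R1, R2, t

# Synthetic ground truth: identity rotation, translation along x,
# so E = [t]_x R is just the skew-symmetric matrix of t.
t_true = np.array([1.0, 0.0, 0.0])
tx = np.array([[0., -t_true[2], t_true[1]],
               [t_true[2], 0., -t_true[0]],
               [-t_true[1], t_true[0], 0.]])
E = tx @ np.eye(3)
R1, R2, t = decompose_essential(E)
```

In a deployed pipeline the recovered pose would then be refined over all camera pairs and anchored to a world coordinate system, as the paper's comparison against manual calibration does.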
Interior detail of first floor lobby; camera facing northeast. ...
Interior detail of first floor lobby; camera facing northeast. - Mare Island Naval Shipyard, Hospital Ward, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
Detail of columns, cornice and eaves; camera facing southwest. ...
Detail of columns, cornice and eaves; camera facing southwest. - Mare Island Naval Shipyard, Hospital Headquarters, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
Detail of cupola on south wing; camera facing southeast. ...
Detail of cupola on south wing; camera facing southeast. - Mare Island Naval Shipyard, Hospital Headquarters, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
Patient and health care professional views and experiences of computer agent-supported health care.
Neville, Ron G; Greene, Alexandra C; Lewis, Sue
2006-01-01
To explore patient and health care professional (HCP) views towards the use of multi-agent computer systems in their GP practice. Qualitative analysis of in-depth interviews and analysis of transcriptions. Urban health centre in Dundee, Scotland. Five representative healthcare professionals and 11 patients. Emergent themes from interviews revealed participants' attitudes and beliefs, which were coded and indexed. Patients and HCPs had similar beliefs, attitudes and views towards the implementation of multi-agent systems (MAS). Both felt modern communication methods were useful to supplement, not supplant, face-to-face consultations between doctors and patients. This was based on the immense trust these patients placed in their doctors in this practice, which extended to trust in their choice of communication technology and security. Rapid access to medical information increased patients' sense of shared partnership and self-efficacy. Patients and HCPs expressed respect for each other's time and were keen to embrace technology that made interactions more efficient, including for the altruistic benefit of others less technically competent. Patients and HCPs welcomed the introduction of agent technology to the delivery of health care. Widespread use will depend more on the trust patients place in their own GP than on technological issues.
3D Reconstruction of Space Objects from Multi-Views by a Visible Sensor
Zhang, Haopeng; Wei, Quanmao; Jiang, Zhiguo
2017-01-01
In this paper, a novel 3D reconstruction framework is proposed to recover the 3D structural model of a space object from its multi-view images captured by a visible sensor. Given an image sequence, this framework first estimates the relative camera poses and recovers the depths of the surface points by the structure from motion (SFM) method, then the patch-based multi-view stereo (PMVS) algorithm is utilized to generate a dense 3D point cloud. To resolve the wrong matches arising from the symmetric structure and repeated textures of space objects, a new strategy is introduced, in which images are added to SFM in imaging order. Meanwhile, a refining process exploiting the structural prior knowledge that most sub-components of artificial space objects are composed of basic geometric shapes is proposed and applied to the recovered point cloud. The proposed reconstruction framework is tested on both simulated image datasets and real image datasets. Experimental results illustrate that the recovered point cloud models of space objects are accurate and have a complete coverage of the surface. Moreover, outliers and points with severe noise are effectively filtered out by the refinement, resulting in a distinct improvement of the structure and visualization of the recovered points. PMID:28737675
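The refinement stage's filtering of outliers and severely noisy points can be illustrated with a simple statistical outlier removal pass over a point cloud. This is a generic stand-in for the shape-prior refinement the paper proposes, and all data below are synthetic:

```python
import numpy as np

def remove_outliers(points, k=5, std_ratio=1.0):
    """Simple statistical outlier removal: drop points whose mean
    distance to their k nearest neighbours is unusually large."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # skip the zero self-distance
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

rng = np.random.default_rng(2)
cloud = rng.normal(scale=0.05, size=(50, 3))      # dense surface points
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])     # one gross outlier
clean = remove_outliers(cloud)
```

The paper's refinement goes further by fitting basic geometric shapes to sub-components; the neighbourhood statistic above only removes isolated points without enforcing such priors.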
sUAS for Rapid Pre-Storm Coastal Characterization and Vulnerability Assessment
NASA Astrophysics Data System (ADS)
Brodie, K. L.; Slocum, R. K.; Spore, N.
2015-12-01
Open coast beaches and surf-zones are dynamic three-dimensional environments that can evolve rapidly on the time-scale of hours in response to changing environmental conditions. Up-to-date knowledge about the pre-storm morphology of the coast can be instrumental in making accurate predictions about coastal change and damage during large storms like Hurricanes and Nor'Easters. For example, alongshore variations in the shape of ephemeral sandbars along the coastline can focus wave energy, subjecting different stretches of coastline to significantly higher waves. Variations in beach slope and width can also alter wave runup, causing higher wave-induced water levels which can cause overwash or inlet breaching. Small Unmanned Aerial Systems (sUAS) offer a new capability to rapidly and inexpensively map vulnerable coastlines in advance of approaching storms. Here we present results from a prototype system that maps coastal topography and surf-zone morphology utilizing a multi-camera sensor. Structure-from-motion algorithms are used to generate topography and also constrain the trajectory of the sUAS. These data, in combination with mount boresight information, are used to rectify images from ocean-facing cameras. Images from all cameras are merged to generate a wide field of view allowing up to 5 minutes of continuous imagery time-series to be collected as the sUAS transits the coastline. Water imagery is then analyzed using wave-kinematics algorithms to provide information on surf-zone bathymetry. To assess this methodology, the absolute and relative accuracy of topographic data are evaluated in relation to simultaneously collected terrestrial lidar data. Ortho-rectification of water imagery is investigated using visible fixed targets installed in the surf-zone, and through comparison to stationary tower-based imagery. 
Future work will focus on evaluating how topographic and bathymetric data from this sUAS approach can be used to update forcing parameters in both empirical and numerical models predicting coast inundation and erosion in advance of storms.
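The depth-inversion idea behind the surf-zone bathymetry step can be sketched with the linear dispersion relation for surface gravity waves. The wave period and wavelength below are hypothetical inputs; an actual wave-kinematics algorithm would estimate them from the rectified imagery time-series rather than take them as given.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_waves(period_s, wavelength_m):
    """Invert the linear dispersion relation omega**2 = g*k*tanh(k*h) for depth h."""
    omega = 2.0 * math.pi / period_s      # angular frequency
    k = 2.0 * math.pi / wavelength_m      # wavenumber
    x = omega ** 2 / (G * k)              # equals tanh(k*h); must lie in (0, 1)
    if not 0.0 < x < 1.0:
        raise ValueError("no finite depth satisfies linear theory for these waves")
    return math.atanh(x) / k

# 8 s waves observed with a 60 m wavelength imply roughly 6.6 m of water
print(round(depth_from_waves(8.0, 60.0), 1))
```

Shorter observed wavelengths at a fixed period imply shallower water, which is what lets wave imagery constrain bathymetry across the surf zone.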
NASA Astrophysics Data System (ADS)
Torkildsen, H. E.; Hovland, H.; Opsahl, T.; Haavardsholm, T. V.; Nicolas, S.; Skauli, T.
2014-06-01
In some applications of multi- or hyperspectral imaging, it is important to have a compact sensor. The most compact spectral imaging sensors are based on spectral filtering in the focal plane. For hyperspectral imaging, it has been proposed to use a "linearly variable" bandpass filter in the focal plane, combined with scanning of the field of view. As the image of a given object in the scene moves across the field of view, it is observed through parts of the filter with varying center wavelength, and a complete spectrum can be assembled. However, if the radiance received from the object varies with viewing angle, or with time, then the reconstructed spectrum will be distorted. We describe a camera design where this hyperspectral functionality is traded for multispectral imaging with better spectral integrity. Spectral distortion is minimized by using a patterned filter with 6 bands arranged close together, so that a scene object is seen by each spectral band in rapid succession and with minimal change in viewing angle. The set of 6 bands is repeated 4 times so that the spectral data can be checked for internal consistency. Still, the total extent of the filter in the scan direction is small. Therefore the remainder of the image sensor can be used for conventional imaging, with potential for using motion tracking and 3D reconstruction to support the spectral imaging function. We show detailed characterization of the point spread function of the camera, demonstrating the importance of such characterization as a basis for image reconstruction. A simplified image reconstruction based on feature-based image coregistration is shown to yield reasonable results. Elimination of spectral artifacts due to scene motion is demonstrated.
NASA Technical Reports Server (NTRS)
2000-01-01
MISR images of tropical northern Australia acquired on June 1, 2000 (Terra orbit 2413) during the long dry season. Left: color composite of vertical (nadir) camera blue, green, and red band data. Right: multi-angle composite of red band data only from the cameras viewing 60 degrees aft, 60 degrees forward, and nadir. Color and contrast have been enhanced to accentuate subtle details. In the left image, color variations indicate how different parts of the scene reflect light differently at blue, green, and red wavelengths; in the right image color variations show how these same scene elements reflect light differently at different angles of view. Water appears in blue shades in the right image, for example, because glitter makes the water look brighter at the aft camera's view angle. The prominent inland water body is Lake Argyle, the largest human-made lake in Australia, which supplies water for the Ord River Irrigation Area and the town of Kununurra (pop. 6500) just to the north. At the top is the southern edge of Joseph Bonaparte Gulf; the major inlet at the left is Cambridge Gulf, the location of the town of Wyndham (pop. 850), the port for this region. This area is sparsely populated, and is known for its remote, spectacular mountains and gorges. Visible along much of the coastline are intertidal mudflats of mangroves and low shrubs; to the south the terrain is covered by open woodland merging into open grassland in the lower half of the pictures.
MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Interior detail of main entry with railroad tracks; camera facing ...
Interior detail of main entry with railroad tracks; camera facing east. - Mare Island Naval Shipyard, Mechanics Shop, Waterfront Avenue, west side between A Street & Third Street, Vallejo, Solano County, CA
DETAIL OF LAMP ABOVE SOUTH SIDE ENTRANCE; CAMERA FACING EAST ...
DETAIL OF LAMP ABOVE SOUTH SIDE ENTRANCE; CAMERA FACING EAST - Mare Island Naval Shipyard, Bachelor Enlisted Quarters & Offices, Walnut Avenue, east side between D Street & C Street, Vallejo, Solano County, CA
Detail of main doors on east elevation; camera facing west. ...
Detail of main doors on east elevation; camera facing west. - Mare Island Naval Shipyard, Hospital Headquarters, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
Detail of main hall porch on east elevation; camera facing ...
Detail of main hall porch on east elevation; camera facing west. - Mare Island Naval Shipyard, Wilderman Hall, Johnson Lane, north side adjacent to (south of) Hospital Complex, Vallejo, Solano County, CA
Detail of central portion of southeast elevation; camera facing west. ...
Detail of central portion of southeast elevation; camera facing west. - Mare Island Naval Shipyard, Hospital Ward, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
Detail of windows at center of west elevation; camera facing ...
Detail of windows at center of west elevation; camera facing east. - Mare Island Naval Shipyard, WAVES Officers Quarters, Cedar Avenue, west side between Tisdale Avenue & Eighth Street, Vallejo, Solano County, CA
Detail of balcony and windows on west elevation; camera facing ...
Detail of balcony and windows on west elevation; camera facing northeast. - Mare Island Naval Shipyard, WAVES Officers Quarters, Cedar Avenue, west side between Tisdale Avenue & Eighth Street, Vallejo, Solano County, CA
Detail of main entry on east elevation; camera facing west. ...
Detail of main entry on east elevation; camera facing west. - Mare Island Naval Shipyard, Defense Electronics Equipment Operating Center, I Street, terminus west of Cedar Avenue, Vallejo, Solano County, CA
Detail of south wing south elevation wall section; camera facing ...
Detail of south wing south elevation wall section; camera facing northwest - Mare Island Naval Shipyard, Defense Electronics Equipment Operating Center, I Street, terminus west of Cedar Avenue, Vallejo, Solano County, CA
Detail of large industrial doors on north elevation; camera facing ...
Detail of large industrial doors on north elevation; camera facing south. - Mare Island Naval Shipyard, Defense Electronics Equipment Operating Center, I Street, terminus west of Cedar Avenue, Vallejo, Solano County, CA
NASA Technical Reports Server (NTRS)
2001-01-01
Surface brightness contrasts accentuated by a thin layer of snow enable a network of rivers, roads, and farmland boundaries to stand out clearly in these MISR images of southeastern Saskatchewan and southwestern Manitoba. The lefthand image is a multi-spectral false-color view made from the near-infrared, red, and green bands of MISR's vertical-viewing (nadir) camera. The righthand image is a multi-angle false-color view made from the red band data of the 60-degree aftward camera, the nadir camera, and the 60-degree forward camera. In each image, the selected channels are displayed as red, green, and blue, respectively. The data were acquired April 17, 2001 during Terra orbit 7083, and cover an area measuring about 285 kilometers x 400 kilometers. North is at the top.
The junction of the Assiniboine and Qu'Apelle Rivers in the bottom part of the images is just east of the Saskatchewan-Manitoba border. During the growing season, the rich, fertile soils in this area support numerous fields of wheat, canola, barley, flaxseed, and rye. Beef cattle are raised in fenced pastures. To the north, the terrain becomes more rocky and forested. Many frozen lakes are visible as white patches in the top right. The narrow linear, north-south trending patterns about a third of the way down from the upper right corner are snow-filled depressions alternating with vegetated ridges, most probably carved by glacial flow. In the lefthand image, vegetation appears in shades of red, owing to its high near-infrared reflectivity. In the righthand image, several forested regions are clearly visible in green hues. Since this is a multi-angle composite, the green arises not from the color of the leaves but from the architecture of the surface cover. Progressing southeastward along the Manitoba Escarpment, the forested areas include the Pasquia Hills, the Porcupine Hills, Duck Mountain Provincial Park, and Riding Mountain National Park. The forests are brighter in the nadir than at the oblique angles, probably because more of the snow-covered surface is visible in the gaps between the trees. In contrast, the valley between the Pasquia and Porcupine Hills near the top of the images appears bright red in the lefthand image (indicating high vegetation abundance) but shows a mauve color in the multi-angle view. This means that it is darker in the nadir than at the oblique angles.
Examination of imagery acquired after the snow has melted should establish whether this difference is related to the amount of snow on the surface or is indicative of a different type of vegetation structure. Saskatchewan and Manitoba are believed to derive their names from the Cree words for the winding and swift-flowing waters of the Saskatchewan River and for a narrows on Lake Manitoba where the roaring sound of wind and water evoked the voice of the Great Spirit. They are two of Canada's Prairie Provinces; Alberta is the third. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
360 deg Camera Head for Unmanned Sea Surface Vehicles
NASA Technical Reports Server (NTRS)
Townsend, Julie A.; Kulczycki, Eric A.; Willson, Reginald G.; Huntsberger, Terrance L.; Garrett, Michael S.; Trebi-Ollennu, Ashitey; Bergh, Charles F.
2012-01-01
The 360 camera head consists of a set of six color cameras arranged in a circular pattern such that their overlapping fields of view give a full 360-degree view of the immediate surroundings. The cameras are enclosed in a watertight container along with support electronics and a power distribution system. Each camera views the world through a watertight porthole. To prevent overheating or condensation in extreme weather conditions, the watertight container is also equipped with an electrical cooling unit and a pair of internal fans for circulation.
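The ring geometry is easy to check: with six cameras spaced evenly, each optical axis sits 60 degrees from its neighbours, so any lens wider than 60 degrees gives overlapping coverage. The 75-degree lens FOV below is a hypothetical value; the abstract does not state the actual lens specification.

```python
def ring_coverage(n_cameras: int, fov_deg: float):
    """Angular spacing between adjacent optical axes and the pairwise
    overlap between neighbouring cameras (positive overlap = full coverage)."""
    spacing = 360.0 / n_cameras
    overlap = fov_deg - spacing
    return spacing, overlap

spacing, overlap = ring_coverage(6, 75.0)  # hypothetical 75-degree lenses
print(spacing, overlap)  # 60.0 15.0
```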
2017-09-12
NASA's Cassini spacecraft gazed toward the northern hemisphere of Saturn to spy subtle, multi-hued bands in the clouds there. This view looks toward the terminator -- the dividing line between night and day -- at lower left. The sun shines at low angles along this boundary, in places highlighting vertical structure in the clouds. Some vertical relief is apparent in this view, with higher clouds casting shadows over those at lower altitude. Images taken with the Cassini spacecraft narrow-angle camera using red, green and blue spectral filters were combined to create this natural-color view. The images were acquired on Aug. 31, 2017, at a distance of approximately 700,000 miles (1.1 million kilometers) from Saturn. Image scale is about 4 miles (6 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21888
NASA Astrophysics Data System (ADS)
Stavroulakis, Petros I.; Chen, Shuxiao; Sims-Waterhouse, Danny; Piano, Samanta; Southon, Nicholas; Bointon, Patrick; Leach, Richard
2017-06-01
In non-rigid fringe projection 3D measurement systems, where either the camera or projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate [1]. In fringe projection systems, it is common to use methods developed initially for photogrammetry for the calibration of the camera(s) in the system in terms of extrinsic and intrinsic parameters. To calibrate the projector(s), an extra correspondence between a pre-calibrated camera and an image created by the projector is performed. These recalibration steps are usually time-consuming, involving the measurement of calibrated patterns on planes before the actual object can be measured again after a camera or projector has been moved, and hence do not facilitate fast 3D measurement of objects when frequent experimental setup changes are necessary. By employing and combining a priori information via inverse rendering, on-board sensors, deep learning and leveraging a graphics processing unit (GPU), we assess a fine camera pose estimation method which is based on optimising the rendering of a model of the scene and the object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.
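The render-and-compare idea can be sketched in miniature. The Gaussian-blob "renderer" and the exhaustive integer search below are toy stand-ins for the paper's GPU scene renderer and optimiser, and the 2-D pose is a stand-in for a full 6-DoF camera pose.

```python
import numpy as np

def render(pose, size=32):
    """Toy stand-in for a renderer: a Gaussian blob whose image
    position is controlled by the 2-D pose (ty, tx)."""
    ty, tx = pose
    y, x = np.mgrid[0:size, 0:size]
    return np.exp(-((x - tx) ** 2 + (y - ty) ** 2) / 20.0)

observed = render((18, 11))            # the camera view we want to explain

def photometric_cost(pose):
    """Sum of squared pixel differences between rendering and observation."""
    return float(np.sum((render(pose) - observed) ** 2))

# Exhaustive search over integer poses; a real pipeline would refine a
# sensor-seeded initial guess with gradient-based optimisation instead.
best = min(((ty, tx) for ty in range(32) for tx in range(32)), key=photometric_cost)
print(best)  # (18, 11)
```

The a priori information the paper emphasises (on-board sensors, deep-learned predictions) corresponds here to shrinking the search to a neighbourhood of a good initial guess.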
NASA Technical Reports Server (NTRS)
2003-01-01
MGS MOC Release No. MOC2-356, 10 May 2003
This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a thick mantle of dust covering lava flows north of Pavonis Mons so well that the flows are no longer visible. Flows are known to occur here because of the proximity to the volcano, and such flows normally have a very rugged surface. Fine dust, however, has settled out of the atmosphere over time and obscured the flows from view. The cliff at the top of the image faces north (up), the cliff in the middle of the image faces south (down), and the rugged slope at the bottom of the image faces north (up). The dark streak at the center-left was probably caused by an avalanche of dust sometime in the past few decades. The image is located near 4.1°N, 111.3°W. Sunlight illuminates the scene from the right/lower right.
College Students' Appreciative Attitudes toward Atheists
ERIC Educational Resources Information Center
Bowman, Nicholas A.; Rockenbach, Alyssa N.; Mayhew, Matthew J.; Riggers-Piehl, Tiffani A.; Hudson, Tara D.
2017-01-01
Atheists are often marginalized in discussions of religious and spiritual pluralism on college campuses and beyond. As with other minority worldview groups, atheists face challenges with hostile campus climates and misunderstanding of their views. The present study used a large, multi-institutional sample to explore predictors of non-atheist…
NASA Astrophysics Data System (ADS)
Mundhenk, Terrell N.; Dhavale, Nitin; Marmol, Salvador; Calleja, Elizabeth; Navalpakkam, Vidhya; Bellman, Kirstie; Landauer, Chris; Arbib, Michael A.; Itti, Laurent
2003-10-01
In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues in a procedural manner when designing and implementing large-scale complex systems. To advance this hypothesis, we discuss a multi-agent surveillance system in which 12 agent CPUs, each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate's visual system, which allows the animal to perform real-time selection of the few most conspicuous locations in visual input. This is accomplished by having each camera agent utilize the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real-time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera viewpoints or create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weight of different feature vectors in other cameras, using mechanisms similar to excitation and suppression that have been documented in electrophysiology, psychophysics and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, allocation of computational time is weighted based upon the history of each camera.
A camera agent that has a history of seeing more salient targets is more likely to obtain computational resources. The system demonstrates the viability of biologically inspired systems for real-time tracking. In future work we plan to implement additional biological mechanisms for cooperative management of both the sensor and processing resources in this system, including top-down biasing for target specificity as well as novelty and the activity of the tracked object in relation to sensitive features of the environment.
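The bottom-up center-surround mechanism each camera agent runs can be illustrated on a single intensity channel. Box filters below are a crude stand-in for the Gaussian pyramids of the actual Itti-Koch algorithm, which also combines color, orientation and motion channels.

```python
import numpy as np

def box_blur(img, k):
    """Separable box filter, standing in for one level of a Gaussian pyramid."""
    out = img.astype(float)
    kernel = np.ones(k) / k
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, out)
    return out

def intensity_saliency(img):
    """Center-surround difference: fine scale minus coarse scale, rectified."""
    return np.clip(box_blur(img, 3) - box_blur(img, 15), 0, None)

scene = np.zeros((64, 64))
scene[30:34, 30:34] = 1.0                  # one small bright target
sal = intensity_saliency(scene)
y, x = np.unravel_index(np.argmax(sal), sal.shape)
print(y, x)  # the saliency peak lies on the bright patch
```

A camera agent would steer toward the peak of this map; inter-agent biasing amounts to reweighting the feature channels before they are summed.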
Web Camera Use of Mothers and Fathers When Viewing Their Hospitalized Neonate.
Rhoads, Sarah J; Green, Angela; Gauss, C Heath; Mitchell, Anita; Pate, Barbara
2015-12-01
Mothers and fathers of neonates hospitalized in a neonatal intensive care unit (NICU) differ in their experiences related to NICU visitation. The aim was to describe the frequency and length of maternal and paternal viewing of their hospitalized neonates via a Web camera. From September 1, 2010, to December 31, 2012, a total of 219 mothers and 101 fathers, including 40 mother-father dyads, used the Web camera that allows 24/7 NICU viewing. We conducted a review of the Web camera's Web site log-on records in this nonexperimental, descriptive study. Mothers and fathers had a significant difference in the mean number of log-ons to the Web camera system (P = .0293). Fathers virtually visited the NICU less often than mothers, but there was no statistical difference between mothers and fathers in the mean total number of minutes viewing the neonate (P = .0834) or in the maximum number of minutes of viewing in one session (P = .6924). Patterns of visitation over time were not measured. Web camera technology could be a potential intervention to aid fathers in visiting their neonates. Both parents should be offered virtual visits using the Web camera and oriented regarding how to use it. These findings are important to consider when installing Web cameras in a NICU. Future research should continue to explore Web camera use in NICUs.
Interior detail of main stairway from first floor; camera facing ...
Interior detail of main stairway from first floor; camera facing west. - Mare Island Naval Shipyard, Hospital Ward, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
Interior detail of arched doorway at second floor; camera facing ...
Interior detail of arched doorway at second floor; camera facing north. - Mare Island Naval Shipyard, Hospital Ward, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA
INTERIOR DETAIL OF STAIRWAY AT SOUTH WING ENTRANCE; CAMERA FACING ...
INTERIOR DETAIL OF STAIRWAY AT SOUTH WING ENTRANCE; CAMERA FACING SOUTH - Mare Island Naval Shipyard, Bachelor Enlisted Quarters & Offices, Walnut Avenue, east side between D Street & C Street, Vallejo, Solano County, CA
Effects of camera location on the reconstruction of 3D flare trajectory with two cameras
NASA Astrophysics Data System (ADS)
Özsaraç, Seçkin; Yeşilkaya, Muhammed
2015-05-01
Flares are used as valuable electronic warfare assets for the battle against infrared guided missiles. The trajectory of the flare is one of the most important factors that determine the effectiveness of the countermeasure. Reconstruction of the three-dimensional (3D) position of a point that is seen by multiple cameras is a common problem. Camera placement, camera calibration, corresponding-pixel determination between the images of different cameras and the triangulation algorithm all affect the performance of 3D position estimation. In this paper, we specifically investigate the effects of camera placement on flare trajectory estimation performance by simulations. Firstly, the 3D trajectories of a flare and of the aircraft that dispenses it are generated with simple motion models. Then, we place two virtual ideal pinhole camera models at different locations. Assuming the cameras track the aircraft perfectly, the view vectors of the cameras are computed. Afterwards, using the view vector of each camera and the 3D position of the flare, image plane coordinates of the flare on both cameras are computed using the field of view (FOV) values. To increase the fidelity of the simulation, we use two sources of error. One models the uncertainties in the determination of the camera view vectors, i.e., the camera orientations are measured with noise. The second noise source models the imperfections of the corresponding-pixel determination of the flare between the two cameras. Finally, the 3D position of the flare is estimated from the corresponding pixel indices, view vectors and FOVs of the cameras by triangulation. All the processes mentioned so far are repeated for different relative camera placements so that the optimum estimation error performance is found for the given aircraft and flare trajectories.
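The triangulation step can be sketched with the midpoint method: find the point halfway along the common perpendicular of the two viewing rays. The camera and flare positions below are hypothetical, and the view vectors are noise-free, so the true position is recovered exactly; the paper's simulations add orientation and pixel-correspondence noise on top of this.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Midpoint of the common perpendicular between two viewing rays.
    c1, c2: camera positions; d1, d2: view directions toward the target."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Least-squares ray parameters t1, t2 minimising |(c1 + t1*d1) - (c2 + t2*d2)|
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return ((c1 + t1 * d1) + (c2 + t2 * d2)) / 2.0

flare = np.array([100.0, 50.0, 30.0])   # hypothetical true flare position, metres
c1 = np.array([0.0, 0.0, 0.0])
c2 = np.array([200.0, 0.0, 0.0])
p = triangulate(c1, flare - c1, c2, flare - c2)  # noise-free view vectors
print(p)  # recovers the true position
```

With noisy directions the two rays are skew, and the estimation error depends strongly on the rays' intersection angle, which is exactly what varying the camera placement changes.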
NIKA2, a dual-band millimetre camera on the IRAM 30 m telescope to map the cold universe
NASA Astrophysics Data System (ADS)
Désert, F.-X.; Adam, R.; Ade, P.; André, P.; Aussel, H.; Beelen, A.; Benoît, A.; Bideaud, A.; Billot, N.; Bourrion, O.; Calvo, M.; Catalano, A.; Coiffard, G.; Comis, B.; Doyle, S.; Goupy, J.; Kramer, C.; Lagache, G.; Leclercq, S.; Lestrade, J.-F.; Macías-Pérez, J. F.; Maury, A.; Mauskopf, P.; Mayet, F.; Monfardini, A.; Pajot, F.; Pascale, E.; Perotto, L.; Pisano, G.; Ponthieu, N.; Revéret, V.; Ritacco, A.; Rodriguez, L.; Romero, C.; Roussel, H.; Ruppin, F.; Soler, J.; Schuster, K.; Sievers, A.; Triqueneaux, S.; Tucker, C.; Zylka, R.
2016-12-01
A consortium led by Institut Néel (Grenoble) has just finished installing NIKA2, a powerful new millimetre camera, on the IRAM 30 m telescope. It has an instantaneous field of view of 6.5 arcminutes at both 1.2 and 2.0 mm, with polarimetric capabilities at 1.2 mm. NIKA2 provides near diffraction-limited angular resolution (12 and 18 arcseconds at 1.2 and 2.0 mm, respectively). The 3 detector arrays are made of more than 1000 KIDs each. KIDs are new superconducting devices emerging as an alternative to bolometers. Commissioning is ongoing in 2016, with a likely opening to the IRAM community in early 2017. NIKA2 is a very promising multi-purpose instrument which will enable many scientific discoveries in the coming decade.
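The quoted beam sizes can be sanity-checked against the textbook Rayleigh criterion for a 30 m aperture, which gives roughly 10 and 17 arcseconds at the two bands, consistent with the slightly larger near-diffraction-limited 12 and 18 arcsecond beams.

```python
def rayleigh_arcsec(wavelength_m, aperture_m):
    """Rayleigh diffraction limit theta = 1.22 * lambda / D, converted to arcsec."""
    return 1.22 * wavelength_m / aperture_m * 206265.0

for wl in (1.2e-3, 2.0e-3):  # NIKA2 bands on the 30 m IRAM dish
    print(f"{wl * 1e3:.1f} mm: {rayleigh_arcsec(wl, 30.0):.1f} arcsec")
# 1.2 mm: 10.1 arcsec ; 2.0 mm: 16.8 arcsec
```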
Spirit Beholds Bumpy Boulder (False Color)
NASA Technical Reports Server (NTRS)
2006-01-01
As NASA's Mars Exploration Rover Spirit began collecting images for a 360-degree panorama of new terrain, the rover captured this view of a dark boulder with an interesting surface texture. The boulder sits about 40 centimeters (16 inches) tall on Martian sand about 5 meters (16 feet) away from Spirit. It is one of many dark, volcanic rock fragments -- many pocked with rounded holes called vesicles -- littering the slope of 'Low Ridge.' The rock surface facing the rover is similar in appearance to the surface texture on the outside of lava flows on Earth. Spirit took this false-color image with the panoramic camera on the rover's 810th sol, or Martian day, of exploring Mars (April 13, 2006). This image is a false-color rendering using the camera's 753-nanometer, 535-nanometer, and 432-nanometer filters.
The Texas Thermal Interface: A real-time computer interface for an Inframetrics infrared camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Storek, D.J.; Gentle, K.W.
1996-03-01
The Texas Thermal Interface (TTI) offers an advantageous alternative to the conventional video path for computer analysis of infrared images from Inframetrics cameras. The TTI provides real-time computer data acquisition of 48 consecutive fields (version described here) with 8-bit pixels. The alternative requires time-consuming individual frame grabs from video tape with frequent loss of resolution in the D/A/D conversion. Within seconds after the event, the TTI temperature files may be viewed and processed to infer heat fluxes or other quantities as needed. The system cost is far less than commercial units which offer less capability. The system was developed for and is being used to measure heat fluxes to the plasma-facing components in a tokamak. © 1996 American Institute of Physics.
Neptune Great Dark Spot in High Resolution
1999-08-30
This photograph shows the last face-on view of the Great Dark Spot that Voyager will make with the narrow-angle camera. The image was shuttered 45 hours before closest approach, at a distance of 2.8 million kilometers (1.7 million miles). The smallest structures that can be seen are on the order of 50 kilometers (31 miles). The image shows feathery white clouds that overlie the boundary of the dark and light blue regions. The pinwheel (spiral) structure of both the dark boundary and the white cirrus suggests a storm system rotating counterclockwise. Periodic small-scale patterns in the white cloud, possibly waves, are short lived and do not persist from one Neptunian rotation to the next. This color composite was made from the clear and green filters of the narrow-angle camera. http://photojournal.jpl.nasa.gov/catalog/PIA00052
Analysis of calibration accuracy of cameras with different target sizes for large field of view
NASA Astrophysics Data System (ADS)
Zhang, Jin; Chai, Zhiwen; Long, Changyu; Deng, Huaxia; Ma, Mengchao; Zhong, Xiang; Yu, Huan
2018-03-01
Visual measurement plays an increasingly important role in the fields of aerospace, shipbuilding and machinery manufacturing, and camera calibration for a large field of view is a critical part of visual measurement. A large-scale calibration target is difficult to produce and its precision cannot be guaranteed, while a small target can be produced with high precision but yields only locally optimal solutions. Therefore, the most suitable ratio of target size to camera field of view must be studied to ensure that the calibration precision requirement of the wide field of view is met. In this paper, cameras are calibrated with a series of checkerboard and circular calibration targets of different dimensions. The ratios of target size to camera field of view are 9%, 18%, 27%, 36%, 45%, 54%, 63%, 72%, 81% and 90%. The target is placed at different positions in the camera field to obtain the camera parameters at each position. Then, the distribution curves of the mean reprojection error of the reconstructed feature points are analyzed for the different ratios. The experimental data demonstrate that as the ratio of target size to camera field of view increases, the calibration precision improves accordingly, and the mean reprojection error changes only slightly once the ratio is above 45%.
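The precision metric used here, mean reprojection error, can be sketched as follows. The intrinsics, pose and flat 3x3 grid target below are hypothetical, and the "detections" are noise-free, so the error is exactly zero; real corner detections would scatter around the reprojections.

```python
import numpy as np

def mean_reprojection_error(K, R, t, pts3d, pts2d):
    """Mean pixel distance between detected corners and reprojected 3-D points."""
    cam = R @ pts3d.T + t                  # world -> camera frame
    proj = (K @ cam).T                     # pinhole projection
    proj = proj[:, :2] / proj[:, 2:3]      # perspective divide
    return float(np.mean(np.linalg.norm(proj - pts2d, axis=1)))

# Hypothetical toy setup: a flat 3x3 grid target seen by an ideal pinhole camera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([[0.0], [0.0], [5.0]])
pts3d = np.array([[x, y, 0.0] for x in range(3) for y in range(3)], dtype=float)
obs = (K @ (R @ pts3d.T + t)).T
pts2d = obs[:, :2] / obs[:, 2:3]           # noise-free "detections"
print(mean_reprojection_error(K, R, t, pts3d, pts2d))  # 0.0
```

In the paper's experiment, this error is evaluated for calibrations obtained at each target-size ratio, and its distribution over target placements is what flattens out above the 45% ratio.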
Image quality prediction - An aid to the Viking lander imaging investigation on Mars
NASA Technical Reports Server (NTRS)
Huck, F. O.; Wall, S. D.
1976-01-01
Image quality criteria and image quality predictions are formulated for the multispectral panoramic cameras carried by the Viking Mars landers. Image quality predictions are based on expected camera performance, Mars surface radiance, and lighting and viewing geometry (fields of view, Mars lander shadows, solar day-night alternation), and are needed for diagnosing camera performance, for arriving at a preflight imaging strategy, and for revising that strategy should the need arise. Landing considerations, camera control instructions, camera control logic, aspects of the imaging process (spectral response, spatial response, sensitivity), and likely problems are discussed. Major concerns include: degradation of camera response by isotope radiation, uncertainties in lighting and viewing geometry and in landing site local topography, contamination of the camera window by dust abrasion, and initial errors in assigning camera dynamic ranges (gains and offsets).
A&M. Hot liquid waste building (TAN616) under construction. Camera facing ...
A&M. Hot liquid waste building (TAN-616) under construction. Camera facing northeast. Date: November 25, 1953. INEEL negative no. 9232 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
MuSICa at GRIS: a prototype image slicer for EST at GREGOR
NASA Astrophysics Data System (ADS)
Calcines, A.; Collados, M.; López, R. L.
2013-05-01
This communication presents a prototype image slicer for the 4-m European Solar Telescope (EST), designed for the spectrograph of the 1.5-m GREGOR solar telescope (GRIS). The design of this integral field unit has been called MuSICa (Multi-Slit Image slicer based on Collimator-Camera). It is a telecentric system developed specifically for the integral-field, high-resolution spectrograph of EST and presents multi-slit capability, reorganizing a bidimensional field of view of 80 arcsec^{2} into 8 slits, each of 200 arcsec length × 0.05 arcsec width. It minimizes the number of optical components needed to fulfil this multi-slit capability to three arrays of mirrors: slicer, collimator and camera mirror arrays (the first flat and the other two spherical). The symmetry of the layout makes it possible to overlap the pupil images associated with each part of the sliced entrance field of view. A mask with only one circular aperture is placed at the pupil position. This symmetric characteristic offers some advantages: it facilitates the manufacturing process and the alignment, and reduces the costs. In addition, it is compatible with two modes of operation, spectroscopic and spectro-polarimetric, offering great versatility. The optical quality of the system is diffraction-limited. The prototype will improve the performance of GRIS at GREGOR and is part of the feasibility study of the integral field unit for the spectrographs of EST. Although MuSICa has been designed as a solar image slicer, its concept can also be applied to night-time astronomical instruments (Collados et al. 2010, Proc. SPIE, Vol. 7733, 77330H; Collados et al. 2012, AN, 333, 901; Calcines et al. 2010, Proc. SPIE, Vol. 7735, 77351X)
ENGINEERING TEST REACTOR, TRA642. CONTEXTUAL VIEW ORIENTATING ETR TO MTR. ...
ENGINEERING TEST REACTOR, TRA-642. CONTEXTUAL VIEW ORIENTATING ETR TO MTR. CAMERA IS ON ROOF OF MTR BUILDING AND FACES DUE SOUTH. MTR SERVICE BUILDING, TRA-635, IN LOWER RIGHT CORNER. STEEL FRAMES SHOW BUILDINGS TO BE ATTACHED TO ETR BUILDING. HIGH-BAY SECTION IN CENTER IS REACTOR BUILDING. TWO-STORY CONTROL ROOM AND OFFICE BUILDING, TRA-647, IS BETWEEN IT AND MTR SERVICE BUILDING. STRUCTURE TO THE LEFT (WITH NO FRAMING YET) IS COMPRESSOR BUILDING, TRA-643, AND BEYOND IT WILL BE HEAT EXCHANGER BUILDING, TRA-644, GREAT SOUTHERN BUTTE ON HORIZON. INL NEGATIVE NO. 56-2382. Jack L. Anderson, Photographer, 6/10/1956 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
NASA Technical Reports Server (NTRS)
2004-01-01
This animation shows the view from the front hazard avoidance cameras on the Mars Exploration Rover Spirit as the rover turns 45 degrees clockwise. This maneuver is the first step in a 3-point turn that will rotate the rover 115 degrees to face west. The rover must make this turn before rolling off the lander because airbags are blocking it from exiting off the front lander petal. Before this crucial turn could take place, engineers instructed the rover to cut the final cord linking it to the lander. The turn took around 30 minutes to complete.
NASA Technical Reports Server (NTRS)
2004-01-01
This animation shows the view from the rear hazard avoidance cameras on the Mars Exploration Rover Spirit as the rover turns 45 degrees clockwise. This maneuver is the first step in a 3-point turn that will rotate the rover 115 degrees to face west. The rover must make this turn before rolling off the lander because airbags are blocking it from driving off the front lander petal. Before this crucial turn took place, engineers instructed the rover to cut the final cord linking it to the lander. The turn took around 30 minutes to complete.
Scotti, Filippo; Roquemore, A L; Soukhanovskii, V A
2012-10-01
A pair of two-dimensional fast cameras with a wide-angle view (allowing full radial and toroidal coverage of the lower divertor) was installed in the National Spherical Torus Experiment in order to monitor non-axisymmetric effects. A custom polar remapping procedure and an absolute photometric calibration enabled easier visualization and quantitative analysis of non-axisymmetric plasma-material interaction (e.g., strike point splitting due to the application of 3D fields and effects of toroidally asymmetric plasma-facing components).
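The polar remapping step mentioned above can be sketched as a resampling of the camera image onto an (r, θ) grid about the machine axis, so that toroidally symmetric features become horizontal bands and non-axisymmetric structure (such as strike point splitting) stands out. This is an illustrative nearest-neighbor sketch only, not the actual NSTX procedure; the function name and grid sizes are assumptions:

```python
import numpy as np

def polar_remap(img, center, n_r=64, n_theta=128, r_max=None):
    """Resample a 2-D camera image onto an (r, theta) grid by
    nearest-neighbor lookup about the given (row, col) center.
    Illustrative sketch of a polar remapping; not the NSTX code."""
    h, w = img.shape
    cy, cx = center
    if r_max is None:
        # Largest radius fully contained in the frame
        r_max = min(cy, cx, h - 1 - cy, w - 1 - cx)
    r = np.linspace(0.0, r_max, n_r)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    # Nearest source pixel for each (r, theta) sample, clipped to the frame
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]  # shape (n_r, n_theta)
```

On a remapped image, a toroidally symmetric emission ring appears as a constant row, so any variation along the θ axis directly measures the non-axisymmetric component.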
2015-03-30
After a couple of years in high-inclination orbits that limited its ability to encounter Saturn's moons, NASA's Cassini spacecraft returned to Saturn's equatorial plane in March 2015. As a prelude to its return to the realm of the icy satellites, the spacecraft had its first relatively close flyby of an icy moon (apart from Titan) in almost two years on Feb. 9. During this encounter Cassini's cameras captured images of the icy moon Rhea, as shown in these two image mosaics. The views were taken about an hour and a half apart as Cassini drew closer to Rhea. Images taken using clear, green, infrared and ultraviolet spectral filters were combined to create these enhanced color views, which offer an expanded range of the colors visible to human eyes in order to highlight subtle color differences across Rhea's surface. The moon's surface is fairly uniform in natural color. The image at right represents one of the highest resolution color views of Rhea released to date. A larger, monochrome mosaic is available in PIA07763. Both views are orthographic projections facing toward terrain on the trailing hemisphere of Rhea. An orthographic view is most like the view seen by a distant observer looking through a telescope. The views have been rotated so that north on Rhea is up. The smaller view at left is centered at 21 degrees north latitude, 229 degrees west longitude. Resolution in this mosaic is 450 meters (1,476 feet) per pixel. The images were acquired at a distance that ranged from about 51,200 to 46,600 miles (82,100 to 74,600 kilometers) from Rhea. The larger view at right is centered at 9 degrees north latitude, 254 degrees west longitude. Resolution in this mosaic is 300 meters (984 feet) per pixel. The images were acquired at a distance that ranged from about 36,000 to 32,100 miles (57,900 to 51,700 kilometers) from Rhea.
The mosaics each consist of multiple narrow-angle camera (NAC) images with data from the wide-angle camera used to fill in areas where NAC data was not available. The image was produced by Heike Rosenberg and Tilmann Denk at Freie Universität in Berlin, Germany. http://photojournal.jpl.nasa.gov/catalog/PIA19057
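The orthographic projection described above (the view of a distant observer) has a standard closed form: a point at latitude φ and longitude λ on a sphere maps to plane coordinates measured from the projection center (φ₀, λ₀). The following is an illustrative sketch of those textbook formulas, not the actual mosaicking pipeline; note that the Rhea mosaics quote west longitudes, while the formulas below assume east-positive longitude:

```python
import numpy as np

def orthographic_xy(lat_deg, lon_deg, lat0_deg, lon0_deg, radius=1.0):
    """Project (lat, lon) on a sphere to orthographic (x, y) as seen by a
    distant observer centered on (lat0, lon0). Standard map-projection
    formulas; longitudes are east-positive."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    lat0, lon0 = np.radians(lat0_deg), np.radians(lon0_deg)
    x = radius * np.cos(lat) * np.sin(lon - lon0)
    y = radius * (np.cos(lat0) * np.sin(lat)
                  - np.sin(lat0) * np.cos(lat) * np.cos(lon - lon0))
    return x, y
```

The projection center maps to the origin, and scale is true only there; this is why resolution figures for such mosaics (e.g., 300 or 450 meters per pixel) are quoted at the mosaic center.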