Sample records for camera car view

  1. 2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING WEST TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  2. 1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING NORTH TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  3. 3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH THE VAL TO THE RIGHT, LOOKING NORTHEAST. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  4. 8. VAL CAMERA CAR, CLOSE-UP VIEW OF 'FLARE' OR TRAJECTORY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. VAL CAMERA CAR, CLOSE-UP VIEW OF 'FLARE' OR TRAJECTORY CAMERA ON SLIDING MOUNT. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  5. 10. 22"x34" original blueprint, Variable-Angle Launcher, 'SIDE VIEW CAMERA CAR-STEEL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. 22"x34" original blueprint, Variable-Angle Launcher, 'SIDE VIEW CAMERA CAR-STEEL FRAME AND AXLES' drawn at 1/2"=1'-0". (BUORD Sketch # 209124). - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  6. Structure-From-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments that are large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, and a point cloud of the interior of a Volkswagen test car is created as well. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.
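
    As a rough illustration of the per-frame pose step described above (not the authors' implementation), the following sketch estimates a vehicle camera's pose from ground control points of a point cloud using RANSAC PnP in OpenCV. All point values and intrinsics are invented, and the bundle adjustment over frames is omitted.

    ```python
    # Sketch: per-frame camera pose from ground control points (hypothetical data).
    import cv2
    import numpy as np

    # 3D ground control points from the environment point cloud (vehicle frame, m)
    object_pts = np.array([[2.1, 0.4, 1.3], [5.7, -1.2, 0.9], [4.3, 2.8, 2.0],
                           [8.0, 0.0, 1.5], [3.5, -2.5, 1.1], [6.6, 1.9, 0.7]])
    # Their detected 2D locations in one video frame (pixels)
    image_pts = np.array([[412.0, 300.5], [260.3, 333.1], [532.8, 279.9],
                          [388.4, 310.2], [198.7, 352.6], [601.2, 295.0]])
    K = np.array([[800.0, 0.0, 640.0],   # assumed intrinsic calibration
                  [0.0, 800.0, 360.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)                   # distortion already removed

    # RANSAC PnP yields the camera pose in the vehicle coordinate system
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_pts, image_pts, K, dist)
    if ok:
        R, _ = cv2.Rodrigues(rvec)
        print("camera center [m]:", (-R.T @ tvec).ravel())
    ```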

  7. Expansion of the visual angle of a car rear-view image via an image mosaic algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Zhuangwen; Zhu, Liangrong; Sun, Xincheng

    2015-05-01

    The rear-view image system is one of the active safety devices in cars and is widely applied in all types of vehicles and traffic safety areas. However, previous studies by both domestic and foreign researchers were based on a single image capture device while reversing, so a blind area still remained for drivers. Even when multiple cameras were used to expand the visual angle of the car's rear-view image, the blind area remained because the different source images were not mosaicked together. To acquire an expanded visual angle of a car rear-view image, two charge-coupled device cameras with optical axes angled at 30 deg were mounted below the left and right fenders of a car in three light conditions (sunny outdoors, cloudy outdoors, and an underground garage) to capture rear-view heterologous images of the car. These rear-view heterologous images were then rapidly registered with the scale invariant feature transform algorithm. Combined with the random sample consensus algorithm, the two heterologous images were finally mosaicked using the linear weighted gradated in-and-out fusion algorithm, and a seamless, visual-angle-expanded rear-view image was acquired. The four-index test results showed that the algorithms can mosaic rear-view images well in the underground garage condition, where the average rate of correct matching was the lowest among the three conditions. Compared to the mean value method (MVM) and the segmental fusion method (SFM), the presented rear-view image mosaic algorithm preserved the most information and image detail from the source images, had the shortest computation time, and performed best in real time. The method introduced in this paper provides the basis for researching the expansion of the visual angle of a car rear-view image in all-weather conditions.
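
    A minimal sketch of the registration-and-fusion pipeline the abstract describes, assuming OpenCV with SIFT support. The file names, the 0.7 ratio test, and the simple column-wise linear blend are illustrative stand-ins for the paper's exact gradated in-and-out fusion.

    ```python
    # Sketch: SIFT registration + RANSAC homography + linear weighted blend.
    import cv2
    import numpy as np

    left = cv2.imread("rear_left.jpg")     # placeholder file names
    right = cv2.imread("rear_right.jpg")

    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(left, None)
    k2, d2 = sift.detectAndCompute(right, None)

    # Ratio-test matching, then RANSAC to reject outlier correspondences
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    src = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the right image into the left image's frame
    h, w = left.shape[:2]
    pano = cv2.warpPerspective(right, H, (w * 2, h))

    # Linear in-and-out blend over the overlapping columns
    overlap = (pano[:, :w].sum(axis=2) > 0) & (left.sum(axis=2) > 0)
    alpha = np.tile(np.linspace(1.0, 0.0, w), (h, 1))  # weight of left image
    blended = alpha[..., None] * left + (1 - alpha[..., None]) * pano[:, :w]
    pano[:, :w] = np.where(overlap[..., None], blended.astype(np.uint8),
                           np.maximum(left, pano[:, :w]))
    cv2.imwrite("rear_panorama.jpg", pano)
    ```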

  8. Fisheye camera around view monitoring system

    NASA Astrophysics Data System (ADS)

    Feng, Cong; Ma, Xinjun; Li, Yuanyuan; Wu, Chenchen

    2018-04-01

    The 360-degree around view monitoring system is a key technology of advanced driver assistance systems; it helps the driver clear blind areas and has high application value. In this paper, we study the transformation relationships between multiple coordinate systems to generate a panoramic image in a unified car coordinate system. Firstly, the panoramic image is divided into four regions. Using the parameters obtained by calibration, pixels from the four fisheye images corresponding to the four sub-regions are mapped into the constructed panoramic image. On the basis of the 2D around view monitoring system, a 3D version is realized by reconstructing the projection surface. We then compare the 2D and 3D around view schemes in the unified coordinate system; the 3D scheme overcomes the shortcomings of the traditional 2D scheme, such as a small visual field and prominent deformation of ground objects. Finally, the images collected by the fisheye cameras installed around the car body can be stitched into a 360-degree panoramic image, which gives the system very high application value.
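
    The mapping for one of the four sub-regions might look like the following sketch (hypothetical calibration values, assuming OpenCV's fisheye module): undistort a fisheye frame, then warp it top-down into its quadrant of the car-centered panorama.

    ```python
    # Sketch: one AVM quadrant via fisheye undistortion + inverse perspective map.
    import cv2
    import numpy as np

    K = np.array([[320.0, 0.0, 640.0],    # placeholder fisheye intrinsics
                  [0.0, 320.0, 400.0],
                  [0.0, 0.0, 1.0]])
    D = np.array([-0.05, 0.01, 0.0, 0.0])  # fisheye distortion coefficients

    frame = cv2.imread("front_fisheye.jpg")
    undist = cv2.fisheye.undistortImage(frame, K, D, Knew=K)

    # Four ground points seen in the undistorted image and their destinations
    # in the car-centered panorama (pixels); values are illustrative
    img_pts = np.float32([[420, 520], [860, 520], [1200, 760], [80, 760]])
    pano_pts = np.float32([[300, 0], [500, 0], [500, 200], [300, 200]])
    H = cv2.getPerspectiveTransform(img_pts, pano_pts)

    panorama = np.zeros((800, 800, 3), np.uint8)
    front_region = cv2.warpPerspective(undist, H, (800, 800))
    mask = front_region.sum(axis=2) > 0
    panorama[mask] = front_region[mask]  # repeat for left/right/rear cameras
    ```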

  9. 7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA INSIDE CAMERA CAR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  10. 6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA CAR WITH CAMERA MOUNT IN FOREGROUND. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  11. View of structures at rear of parcel with 12' scale ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View of structures at rear of parcel with 12' scale (in tenths). From right: edge of Round House, Pencil house, Shell House, edge of School House. Heart Shrine made from mortared car headlights at frame left. Camera facing east. - Grandma Prisbrey's Bottle Village, 4595 Cochran Street, Simi Valley, Ventura County, CA

  12. 5. VAL CAMERA CAR, DETAIL OF HOIST AT SIDE OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. VAL CAMERA CAR, DETAIL OF HOIST AT SIDE OF BRIDGE AND ENGINE CAR ON TRACKS, LOOKING NORTHEAST. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  13. 9. COMPLETED ROLLING CAMERA CAR ON RAILROAD TRACK AND BRIDGE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. COMPLETED ROLLING CAMERA CAR ON RAILROAD TRACK AND BRIDGE LOOKING WEST, APRIL 26, 1948. (ORIGINAL PHOTOGRAPH IN POSSESSION OF DAVE WILLIS, SAN DIEGO, CALIFORNIA.) - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  14. 13. 22"x34" original vellum, Variable-Angle Launcher, 'SIDEVIEW CAMERA CAR TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. 22"x34" original vellum, Variable-Angle Launcher, 'SIDEVIEW CAMERA CAR TRACK DETAILS' drawn at 1/4"=1'-0" (BUORD Sketch # 208078, PAPW 908). - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  15. A Real-Time Augmented Reality System to See-Through Cars.

    PubMed

    Rameau, Francois; Ha, Hyowon; Joo, Kyungdon; Choi, Jinsoo; Park, Kibaek; Kweon, In So

    2016-11-01

    One of the most hazardous driving scenarios is the overtaking of a slower vehicle: the front vehicle (being overtaken) can occlude an important part of the field of view of the rear vehicle's driver. This lack of visibility is the most probable cause of accidents in this context. Recent research tends to prove that augmented reality applied to assisted driving can significantly reduce the risk of accidents. In this paper, we present a real-time marker-less system to see through cars. For this purpose, two cars are equipped with cameras and an appropriate wireless communication system. The stereo vision system mounted on the front car is used to create a sparse 3D map of the environment in which the rear car can be localized. Using this inter-car pose estimation, a synthetic image is generated to overcome the occlusion and to create a seamless see-through effect that preserves the structure of the scene.
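
    A toy sketch of the final overlay step only (the paper's stereo mapping, localization, and wireless link are out of scope here): warp the lead car's transmitted frame toward the follower's viewpoint and blend it into the occluded region. The homography and bounding box below are placeholders for the system's inter-car pose estimate and car detector.

    ```python
    # Sketch: see-through overlay of the lead car's view into the follower's frame.
    import cv2
    import numpy as np

    rear_view = cv2.imread("follower_frame.jpg")   # rear car's camera
    front_view = cv2.imread("leader_frame.jpg")    # received over the radio link

    H = np.array([[0.5, 0.0, 320.0],               # placeholder warp derived
                  [0.0, 0.5, 180.0],               # from inter-car localization
                  [0.0, 0.0, 1.0]])
    warped = cv2.warpPerspective(front_view, H,
                                 (rear_view.shape[1], rear_view.shape[0]))

    # Occluder bounding box (from a car detector) and semi-transparent blend
    x, y, w, h = 400, 250, 480, 300
    roi = rear_view[y:y + h, x:x + w].astype(np.float32)
    see = warped[y:y + h, x:x + w].astype(np.float32)
    rear_view[y:y + h, x:x + w] = (0.4 * roi + 0.6 * see).astype(np.uint8)
    cv2.imwrite("see_through.jpg", rear_view)
    ```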

  16. Video monitoring system for car seat

    NASA Technical Reports Server (NTRS)

    Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)

    2004-01-01

    A video monitoring system for use with a child car seat has video camera(s) mounted in the car seat. The video images are wirelessly transmitted to a remote receiver/display encased in a portable housing that can be removably mounted in the vehicle in which the car seat is installed.

  17. Vehicle Re-Identification by Deep Hidden Multi-View Inference.

    PubMed

    Zhou, Yi; Liu, Li; Shao, Ling

    2018-07-01

    Vehicle re-identification (re-ID) is an area that has received far less attention in the computer vision community than the prevalent person re-ID. Possible reasons for this slow progress are the lack of appropriate research data and the special 3D structure of a vehicle. Previous works have generally focused on some specific views (e.g., front), but these methods are less effective in realistic scenarios, where vehicles usually appear in arbitrary views to cameras. In this paper, we focus on the uncertainty of vehicle viewpoint in re-ID, proposing two end-to-end deep architectures: the Spatially Concatenated ConvNet and a convolutional neural network (CNN)-LSTM bi-directional loop. Our models exploit the great advantages of the CNN and long short-term memory (LSTM) to learn transformations across different viewpoints of vehicles. Thus, a multi-view vehicle representation containing information from all viewpoints can be inferred from only one input view and then used for learning to measure distance. To verify our models, we also introduce a Toy Car RE-ID data set with images from multiple viewpoints of 200 vehicles. We evaluate our proposed methods on the Toy Car RE-ID data set and the public Multi-View Car, VehicleID, and VeRi data sets. Experimental results illustrate that our models achieve consistent improvements over state-of-the-art vehicle re-ID approaches.
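
    The following is a much-simplified PyTorch sketch of the general idea (a CNN feature from one view seeding an LSTM that infers per-view features), not the authors' Spatially Concatenated ConvNet or bi-directional loop; all layer sizes are arbitrary.

    ```python
    # Sketch: single-view CNN feature -> LSTM-inferred multi-view embedding.
    import torch
    import torch.nn as nn

    class MultiViewInfer(nn.Module):
        def __init__(self, feat_dim=256, num_views=8):
            super().__init__()
            self.num_views = num_views
            self.cnn = nn.Sequential(            # toy CNN encoder
                nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim))
            self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)

        def forward(self, img):                  # img: (B, 3, H, W), one view
            f = self.cnn(img)                    # (B, feat_dim)
            # Feed the single-view feature num_views times; the LSTM outputs
            # play the role of the inferred per-view features.
            seq = f.unsqueeze(1).repeat(1, self.num_views, 1)
            views, _ = self.lstm(seq)            # (B, num_views, feat_dim)
            return views.reshape(views.size(0), -1)  # multi-view embedding

    emb = MultiViewInfer()(torch.randn(2, 3, 128, 128))
    print(emb.shape)  # torch.Size([2, 2048]); compare embeddings by distance
    ```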

  18. Situational awareness for unmanned ground vehicles in semi-structured environments

    NASA Astrophysics Data System (ADS)

    Goodsell, Thomas G.; Snorrason, Magnus; Stevens, Mark R.

    2002-07-01

    Situational Awareness (SA) is a critical component of effective autonomous vehicles, reducing operator workload and allowing an operator to command multiple vehicles or simultaneously perform other tasks. Our Scene Estimation & Situational Awareness Mapping Engine (SESAME) provides SA for mobile robots in semi-structured scenes, such as parking lots and city streets. SESAME autonomously builds volumetric models for scene analysis. For example, a SESAME-equipped robot can build a low-resolution 3-D model of a row of cars, then approach a specific car and build a high-resolution model from a few stereo snapshots. The model can be used onboard to determine the type of car and locate its license plate, or the model can be segmented out and sent back to an operator who can view it from different viewpoints. As new views of the scene are obtained, the model is updated and changes are tracked (such as cars arriving or departing). Since the robot's position must be accurately known, SESAME also has automated techniques for determining the position and orientation of the camera (and hence, robot) with respect to existing maps. This paper presents an overview of the SESAME architecture and algorithms, including our model generation algorithm.

  19. Tightly-Coupled GNSS/Vision Using a Sky-Pointing Camera for Vehicle Navigation in Urban Areas

    PubMed Central

    2018-01-01

    This paper presents a method of fusing the ego-motion of a robot or a land vehicle estimated from an upward-facing camera with Global Navigation Satellite System (GNSS) signals for navigation purposes in urban environments. A sky-pointing camera is mounted on the top of a car and synchronized with a GNSS receiver. The advantages of this configuration are two-fold: firstly, for the GNSS signals, the upward-facing camera will be used to classify the acquired images into sky and non-sky (also known as segmentation). A satellite falling into the non-sky areas (e.g., buildings, trees) will be rejected and not considered for the final position solution computation. Secondly, the sky-pointing camera (with a field of view of about 90 degrees) is helpful for urban area ego-motion estimation in the sense that it does not see most of the moving objects (e.g., pedestrians, cars) and thus is able to estimate the ego-motion with fewer outliers than is typical with a forward-facing camera. The GNSS and visual information systems are tightly-coupled in a Kalman filter for the final position solution. Experimental results demonstrate the ability of the system to provide satisfactory navigation solutions in a deep urban canyon, even in conditions with fewer than four GNSS satellites, with accuracy better than GNSS-only navigation by 20 percent and better than loosely-coupled GNSS/vision by 82 percent in the worst case. PMID:29673230

  20. Tightly-Coupled GNSS/Vision Using a Sky-Pointing Camera for Vehicle Navigation in Urban Areas.

    PubMed

    Gakne, Paul Verlaine; O'Keefe, Kyle

    2018-04-17

    This paper presents a method of fusing the ego-motion of a robot or a land vehicle estimated from an upward-facing camera with Global Navigation Satellite System (GNSS) signals for navigation purposes in urban environments. A sky-pointing camera is mounted on the top of a car and synchronized with a GNSS receiver. The advantages of this configuration are two-fold: firstly, for the GNSS signals, the upward-facing camera will be used to classify the acquired images into sky and non-sky (also known as segmentation). A satellite falling into the non-sky areas (e.g., buildings, trees) will be rejected and not considered for the final position solution computation. Secondly, the sky-pointing camera (with a field of view of about 90 degrees) is helpful for urban area ego-motion estimation in the sense that it does not see most of the moving objects (e.g., pedestrians, cars) and thus is able to estimate the ego-motion with fewer outliers than is typical with a forward-facing camera. The GNSS and visual information systems are tightly-coupled in a Kalman filter for the final position solution. Experimental results demonstrate the ability of the system to provide satisfactory navigation solutions in a deep urban canyon, even in conditions with fewer than four GNSS satellites, with accuracy better than GNSS-only navigation by 20 percent and better than loosely-coupled GNSS/vision by 82 percent in the worst case.
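
    For intuition only, here is a greatly simplified position-domain filter in the spirit of this fusion; the actual system is tightly coupled at the pseudorange level, with the sky-mask satellite rejection happening upstream. All noise values below are invented.

    ```python
    # Sketch: vision ego-motion drives the prediction, a GNSS fix updates it.
    import numpy as np

    x = np.zeros(2)            # state: 2D position (m)
    P = np.eye(2) * 10.0       # state covariance
    Q = np.eye(2) * 0.05       # ego-motion (process) noise
    R = np.eye(2) * 4.0        # GNSS measurement noise (urban canyon)

    def predict(x, P, ego_delta):
        """Propagate with the camera-derived ego-motion increment."""
        return x + ego_delta, P + Q

    def update(x, P, gnss_pos):
        """Fuse a GNSS fix (non-sky satellites assumed already rejected)."""
        K = P @ np.linalg.inv(P + R)      # Kalman gain
        return x + K @ (gnss_pos - x), (np.eye(2) - K) @ P

    for ego, gnss in [(np.array([1.0, 0.2]), np.array([1.1, 0.1])),
                      (np.array([0.9, 0.3]), np.array([2.2, 0.5]))]:
        x, P = predict(x, P, ego)
        x, P = update(x, P, gnss)
    print("fused position:", x)
    ```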

  2. Efficient large-scale graph data optimization for intelligent video surveillance

    NASA Astrophysics Data System (ADS)

    Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming

    2017-08-01

    Society is rapidly adopting cameras in a wide variety of locations and applications: site traffic monitoring, parking lot surveillance, in-car systems, and smart spaces. These cameras provide data every day that must be analyzed in an effective way. Recent advances in sensor manufacturing, communications, and computing are stimulating the development of new applications that transform traditional vision systems into pervasive smart camera networks. The analysis of visual cues in multi-camera networks enables a wide range of applications, from smart home and office automation to large-area and traffic surveillance. Dense camera networks, in which most cameras have large overlapping fields of view, are well studied; here we focus on sparse camera networks. A sparse camera network covers a large area with as few cameras as possible, so most cameras do not overlap each other's field of view. This setting is challenging because of the lack of knowledge of the network topology, the changes in target appearance and motion across different views, and the difficulty of understanding complex events in the network. In this paper, we present a comprehensive survey of recent results on topology learning, object appearance modeling, and global activity understanding in sparse camera networks. Some current open research issues are also discussed.

  3. Satellite markers: a simple method for ground truth car pose on stereo video

    NASA Astrophysics Data System (ADS)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-04-01

    Predicting the future location of other cars is a must for advanced safety systems. The remote estimation of car pose, and particularly its heading angle, is key to predicting its future location. Stereo vision systems make it possible to obtain the 3D information of a scene. Ground truth in this specific context is referential information about the depth, shape and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task associated with the combination of different kinds of sensors. The novelty of this paper is a method to generate ground truth car pose from video data alone. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a stereo vision system when it is moving, because the system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for 3D ground truth generation. In our case study, we focus on accurate heading angle estimation of a moving car under realistic imagery. As outcomes, our satellite marker method provides accurate car pose at frame level, and the instantaneous spatial orientation of each camera at frame level.

  4. Visibility of children behind 2010-2013 model year passenger vehicles using glances, mirrors, and backup cameras and parking sensors.

    PubMed

    Kidd, David G; Brethwaite, Andrew

    2014-05-01

    This study identified the areas behind vehicles where younger and older children are not visible and measured the extent to which vehicle technologies improve visibility. Rear visibility of targets simulating the heights of a 12-15-month-old, a 30-36-month-old, and a 60-72-month-old child was assessed in 21 passenger vehicles of model years 2010-2013 with a backup camera or a backup camera plus a parking sensor system. The average blind zone for a 12-15-month-old was twice as large as for a 60-72-month-old. Large SUVs had the worst rear visibility and small cars the best. The increases in rear visibility provided by backup cameras were larger than the non-visible areas detected by parking sensors, but parking sensors detected objects in areas near the rear of the vehicle that were not visible in the camera or other fields of view. Overall, backup cameras and backup cameras plus parking sensors reduced the blind zone by around 90 percent on average and have the potential to prevent backover crashes if drivers use the technology appropriately.

  5. Optical digitizing and strategies to combine different views of an optical sensor

    NASA Astrophysics Data System (ADS)

    Duwe, Hans P.

    1997-09-01

    Non-contact digitization of objects and surfaces with optical sensors based on fringe or pattern projection in combination with a CCD camera allows a representation of surfaces as point clouds, i.e., x, y, z data points. To digitize the total surface of an object, it is necessary to combine the measurement data obtained by the optical sensor from different views. Depending on the size of the object and the required accuracy of the measured data, different sensor set-ups with a handling system or a combination of linear and rotation axes are described. Furthermore, strategies to match the overlapping point clouds of a digitized object are introduced. This is especially important for the digitization of large objects like 1:1 car models. With different sensor sizes, it is possible to digitize small objects like teeth, crowns, and inlays with an overall accuracy of 20 micrometers, as well as large objects like car models with a total accuracy of 0.5 mm. Various applications in the field of optical digitization are described.
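
    Matching overlapping point clouds of the kind described here is commonly done with ICP; the sketch below uses the Open3D library under that assumption (it is not the author's method), with placeholder file names and an arbitrarily chosen correspondence distance.

    ```python
    # Sketch: align two overlapping scans with point-to-point ICP, then merge.
    import open3d as o3d

    src = o3d.io.read_point_cloud("view_a.ply")   # placeholder scan files
    dst = o3d.io.read_point_cloud("view_b.ply")

    est = o3d.pipelines.registration.TransformationEstimationPointToPoint()
    result = o3d.pipelines.registration.registration_icp(
        src, dst, 2.0, estimation_method=est)     # 2.0: max corr. distance

    src.transform(result.transformation)          # bring src into dst's frame
    merged = src + dst                            # combined point cloud
    o3d.io.write_point_cloud("merged.ply", merged)
    print("ICP fitness:", result.fitness)
    ```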

  6. Multi-channel automotive night vision system

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

    A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right, and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated. The source contains a thermoelectric cooler (TEC), can be synchronized with camera focusing, and has automatic light-intensity adjustment, which together ensure image quality. The composition of the system is described in detail; on this basis, beam collimation, laser-diode driving, temperature control of the near-infrared laser source, and the four-channel image processing and display are discussed. The system can be used for driver assistance, blind spot information systems (BLIS), parking assistance, and car alarm systems, day and night.

  7. 17. CABLE CAR #22, VIEW SHOWING CAR ROUNDING CORNER IN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    17. CABLE CAR #22, VIEW SHOWING CAR ROUNDING CORNER IN LOADING AREA NEXT TO CAR DUMP AND CAR DUMP BUILDING - Pennsylvania Railroad, Canton Coal Pier, Clinton Street at Keith Avenue (Canton area), Baltimore, Independent City, MD

  8. Solid Rocket Boosters Separation

    NASA Technical Reports Server (NTRS)

    1982-01-01

    This view, taken by a motion picture tracking camera for the STS-3 mission, shows both left and right solid rocket boosters (SRB's) at the moment of separation from the external tank (ET). After splashdown in the ocean, they were retrieved and refurbished for reuse. The Shuttle's SRB's and solid rocket motors (SRM's) are the largest ever built and the first designed for refurbishment and reuse. Standing nearly 150 feet high, the twin boosters provide the majority of thrust for the first two minutes of flight, about 5.8 million pounds. That is equivalent to 44 million horsepower, or the combined power of 400,000 subcompact cars.

  9. Determination of traffic intensity from camera images using image processing and pattern recognition techniques

    NASA Astrophysics Data System (ADS)

    Mehrübeoğlu, Mehrübe; McLauchlan, Lifford

    2006-02-01

    The goal of this project was to detect the intensity of traffic on a road at different times of day during daytime. Although the work presented utilized images from a section of a highway, the results are intended for making decisions on the type of intervention necessary on any given road at different times for traffic control, such as installation of traffic signals, the duration of red, green and yellow lights at intersections, and the assignment of traffic control officers near school zones or other relevant locations. In this project, directional patterns are used to detect and count the number of cars in traffic images over a fixed area of the road to determine local traffic intensity. Directional patterns are chosen because they are simple and common to almost all moving vehicles. Perspective effects specific to each camera orientation have to be considered, as they affect the size and direction of the patterns to be recognized. In this work, a simple and fast algorithm has been developed based on horizontal directional pattern matching and perspective adjustment. The results of the algorithm under various conditions are presented and compared in this paper. Using the developed algorithm, the traffic intensity can be determined accurately on clear days with average-sized cars. The accuracy is reduced on rainy days when the camera lens carries raindrops, when there are very long vehicles, such as trucks or tankers, in the view, and when there is very low light around dusk or dawn.
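
    A bare-bones sketch of the counting idea, assuming OpenCV: correlate a horizontal directional pattern (a template) over the fixed road region and count strong non-overlapping responses. The paper's per-row perspective scaling of the pattern is omitted, and the file names and 0.6 threshold are illustrative.

    ```python
    # Sketch: count cars by template matching with non-maximum suppression.
    import cv2
    import numpy as np

    road = cv2.imread("highway_region.jpg", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("car_pattern.png", cv2.IMREAD_GRAYSCALE)

    # Normalized cross-correlation response map over the road region
    resp = cv2.matchTemplate(road, template, cv2.TM_CCOEFF_NORMED)

    th, tw = template.shape
    count, found = 0, np.copy(resp)
    while True:
        _, max_val, _, max_loc = cv2.minMaxLoc(found)
        if max_val < 0.6:        # correlation threshold (tuned per camera)
            break
        count += 1
        x, y = max_loc           # suppress this detection's neighborhood
        found[max(0, y - th):y + th, max(0, x - tw):x + tw] = -1.0
    print("cars in view:", count)
    ```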

  10. Adaptive on-line calibration for around-view monitoring system using between-camera homography estimation

    NASA Astrophysics Data System (ADS)

    Lim, Sungsoo; Lee, Seohyung; Kim, Jun-geon; Lee, Daeho

    2018-01-01

    The around-view monitoring (AVM) system is one of the major applications of advanced driver assistance systems and intelligent transportation systems. We propose an on-line calibration method which can compensate for misalignments in AVM systems. Most AVM systems use fisheye undistortion, inverse perspective transformation, and geometrical registration methods. To perform these procedures, the parameters for each process must be known; the procedure by which the parameters are estimated is referred to as the initial calibration. However, when using only the initial calibration data, we cannot compensate for misalignments caused by changes in a car's equilibrium. Even small changes such as tire pressure levels, passenger weight, or road conditions can affect a car's equilibrium. Therefore, to compensate for this misalignment, an additional technique is necessary, specifically an on-line calibration method. On-line calibration recalculates the homographies, which can correct any degree of misalignment, using the unique features of ordinary parking lanes. To extract features from the parking lanes, this method uses corner detection and a pattern matching algorithm. From the extracted features, homographies are estimated using random sample consensus and parameter estimation. Finally, the misaligned epipolar geometries are compensated for via the estimated homographies. Thus, the proposed method can render image planes parallel to the ground. This method does not require any designated patterns and can be used whenever cars are placed in a parking lot. The experimental results show the robustness and efficiency of the method.
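
    The recalculation step could be sketched as follows, with hypothetical corner coordinates standing in for the paper's parking-lane features: estimate a fresh image-to-ground homography with RANSAC and re-render the AVM view.

    ```python
    # Sketch: on-line homography re-estimation from parking-lane corners.
    import cv2
    import numpy as np

    # Lane-corner features detected in the live camera image (pixels)
    img_pts = np.float32([[102, 410], [298, 402], [512, 398], [710, 405],
                          [120, 560], [300, 548], [505, 545], [695, 552]])
    # Their known positions on the ground plane (cm, parking-lot frame)
    ground_pts = np.float32([[0, 0], [250, 0], [500, 0], [750, 0],
                             [0, 200], [250, 200], [500, 200], [750, 200]])

    H, inliers = cv2.findHomography(img_pts, ground_pts, cv2.RANSAC, 3.0)
    print("RANSAC inlier ratio:", inliers.mean())

    # Re-render the AVM view with the refreshed homography
    frame = cv2.imread("rear_camera.jpg")   # placeholder file name
    birdseye = cv2.warpPerspective(frame, H, (800, 300))
    ```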

  11. 14. CAR DUMP BUILDING, SOUTHWEST CORNER, VIEW SHOWING CABLE CAR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. CAR DUMP BUILDING, SOUTHWEST CORNER, VIEW SHOWING CABLE CAR #20 BENEATH COAL CHUTES - Pennsylvania Railroad, Canton Coal Pier, Clinton Street at Keith Avenue (Canton area), Baltimore, Independent City, MD

  12. DETAIL VIEW OF BATCH CAR, BUILT BY ATLAS CAR & ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF BATCH CAR, BUILT BY ATLAS CAR & MANUFACTURING COMPANY. BATCH STORAGE SILOS IN BACKGROUND - Chambers Window Glass Company, Batch Plant, North of Drey (Nineteenth) Street, West of Constitution Boulevard, Arnold, Westmoreland County, PA

  13. A method of real-time detection for distant moving obstacles by monocular vision

    NASA Astrophysics Data System (ADS)

    Jia, Bao-zhi; Zhu, Ming

    2013-12-01

    In this paper, we propose an approach for the detection of distant moving obstacles like cars and bicycles with a monocular camera, to cooperate with ultrasonic sensors in a low-cost configuration. We aim at detecting distant obstacles that move toward our autonomous navigation car in order to give an alarm and keep away from them. Frame differencing is applied to find obstacles after compensating for the camera's ego-motion. Meanwhile, each obstacle is separated from the others in an independent area and given a confidence level indicating whether it is coming closer. The results on an open dataset and on our own autonomous navigation car have proved that the method is effective for the detection of distant moving obstacles in real time.
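
    A compact sketch of this scheme with OpenCV (file names, thresholds, and feature counts are illustrative): estimate the global motion from tracked background features, warp the previous frame, then difference and label the residual motion.

    ```python
    # Sketch: ego-motion-compensated frame differencing for distant movers.
    import cv2
    import numpy as np

    prev = cv2.imread("frame_t0.jpg", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("frame_t1.jpg", cv2.IMREAD_GRAYSCALE)

    # Track sparse background features to estimate the global motion
    p0 = cv2.goodFeaturesToTrack(prev, maxCorners=400, qualityLevel=0.01,
                                 minDistance=8)
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
    H, _ = cv2.findHomography(p0[st == 1], p1[st == 1], cv2.RANSAC, 3.0)

    # Warp the previous frame onto the current one, then difference
    stab = cv2.warpPerspective(prev, H, (prev.shape[1], prev.shape[0]))
    diff = cv2.absdiff(curr, stab)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Each sufficiently large connected region is a moving-obstacle candidate
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] > 30:   # ignore noise specks
            print("mover bbox (x, y, w, h):", stats[i, :4])
    ```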

  14. 1. GENERAL VIEW OF TIPPLE LOOKING NORTHWEST, SHOWING LARRY CARS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. GENERAL VIEW OF TIPPLE LOOKING NORTHWEST, SHOWING LARRY CARS BELOW HOPPER BIN. GRAY FITZSIMONS, HAER HISTORIAN, IS STANDING ON PLATFORM NEXT TO LARRY CARS - Lucernemines Coke Works, Larry Car Tipple, East of Lucerne, Lucerne Mines, Indiana County, PA

  15. General view along tracks to Locomotive Shop with Car Shop ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    General view along tracks to Locomotive Shop with Car Shop on right. Note locomotive tires leaning against Car Shop wall. View from west - East Broad Top Railroad & Coal Company, State Route 994, West of U.S. Route 522, Rockhill Furnace, Huntingdon County, PA

  16. Car-Crash Experiment for the Undergraduate Laboratory

    ERIC Educational Resources Information Center

    Ball, Penny L.; And Others

    1974-01-01

    Describes an interesting, inexpensive, and highly motivating experiment to study uniform and accelerated motion by measuring the position of a car as it crashes into a rigid wall. Data are obtained from a sequence of pictures made by a high speed camera. (Author/SLH)

  17. Interior view; Street Car Waiting House North Philadelphia Station, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Interior view; Street Car Waiting House - North Philadelphia Station, Street Car Waiting House, 2900 North Broad Street, on northwest corner of Broad Street & Glenwood Avenue, Philadelphia, Philadelphia County, PA

  18. Advanced Infant Car Seat Would Increase Highway Safety

    NASA Technical Reports Server (NTRS)

    Dabney, Richard; Elrod, Susan

    2004-01-01

    An advanced infant car seat has been proposed to increase highway safety by reducing the incidence of crying, fussy behavior, and other child-related distractions that divert an adult driver's attention from driving. In addition to a conventional infant car seat with safety restraints, the proposed advanced infant car seat would include a number of components and subsystems that would function together as a comprehensive infant-care system that would keep its occupant safe, comfortable, and entertained, and would enable the driver to monitor the baby without having to either stop the car or turn around to face the infant during driving. The system would include a vibrator operated by a bulb switch; the switch would double as a squeeze toy that would make its own specific sound. A music subsystem would include loudspeakers built into the seat plus digital and analog circuitry that would utilize plug-in memory modules to synthesize music or a variety of other sounds. The music subsystem would include a built-in sound generator that could synthesize white noise or a human heartbeat to calm the baby to sleep. A second bulb switch could be used to control the music subsystem and would double as a squeeze toy that would make a distinct sound. An anti-noise sound-suppression subsystem would isolate the baby from potentially disturbing ambient external noises. This subsystem would include small microphones, placed near the baby's ears, to detect ambient noise. The outputs of the microphones would be amplified and fed to the loudspeakers at appropriate amplitude and in a phase opposite that of the detected ambient noise, such that the net ambient sound arriving at the baby's ears would be almost completely cancelled. A video-camera subsystem would enable the driver to monitor the baby visually while continuing to face forward. One or more portable miniature video cameras could be embedded in the side of the infant car seat (see figure) or in a flip-down handle. The outputs of the video cameras would be transmitted by radio or infrared to a portable, miniature receiver/video-monitor unit attached to the dashboard of the car. The video-camera subsystem could also be used within transmission/reception range when the seat was removed from the car. The system would include a biotelemetric and tracking subsystem, which would include a Global Positioning System receiver for measuring its location. This subsystem would transmit the location of the infant car seat (even if the seat were not in a car) along with such biometric data as the baby's heart rate, perspiration rate, urinary status, temperature, and rate of breathing. Upon detecting any anomalies in the biometric data, this subsystem would send a warning to a paging device installed in the car or carried by the driver, so that the driver could pull the car off the road to attend to the baby. A motion detector in this subsystem would send a warning if the infant car seat were to be moved or otherwise disturbed unexpectedly while the infant was seated in it; this warning function, in combination with the position-tracking function, could help in finding a baby who had been kidnapped along with the seat. Removable rechargeable batteries would enable uninterrupted functioning of all parts of the system while transporting the baby to and from the car. The batteries could be recharged via the cigarette-lighter outlet in the car or by use of an external AC-powered charger.

  19. 3. GENERAL VIEW OF PASSENGER CAR SHOP; RAILROAD TRACKS IN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. GENERAL VIEW OF PASSENGER CAR SHOP; RAILROAD TRACKS IN FOREGROUND - Baltimore & Ohio Railroad, Mount Clare Passenger Car Shop, Southwest corner of Pratt & Poppleton Streets, Baltimore, Independent City, MD

  20. North view; Street Car Waiting House, south (front) elevation ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    North view; Street Car Waiting House, south (front) elevation - North Philadelphia Station, Street Car Waiting House, 2900 North Broad Street, on northwest corner of Broad Street & Glenwood Avenue, Philadelphia, Philadelphia County, PA

  1. West view; Street Car Waiting House, east elevation North ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    West view; Street Car Waiting House, east elevation - North Philadelphia Station, Street Car Waiting House, 2900 North Broad Street, on northwest corner of Broad Street & Glenwood Avenue, Philadelphia, Philadelphia County, PA

  2. 14. AERIAL VIEW OF ENGINE DISPLAY INSIDE PASSENGER CAR SHOP ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. AERIAL VIEW OF ENGINE DISPLAY INSIDE PASSENGER CAR SHOP (NOW A TRANSPORTATION MUSEUM) - Baltimore & Ohio Railroad, Mount Clare Passenger Car Shop, Southwest corner of Pratt & Poppleton Streets, Baltimore, Independent City, MD

  3. 25. INTERIOR OF ROTARY CAR DUMPER BUILDING WITH VIEW OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    25. INTERIOR OF ROTARY CAR DUMPER BUILDING WITH VIEW OF ROTARY CAR DUMPER LOOKING NORTHEAST. (Jet Lowe) - U.S. Steel Duquesne Works, Blast Furnace Plant, Along Monongahela River, Duquesne, Allegheny County, PA

  4. 11. VIEW OF CIRCULAR CAR SHOP OVER TOPS OF BOX ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. VIEW OF CIRCULAR CAR SHOP OVER TOPS OF BOX CARS LOOKING NORTHEAST. - Baltimore & Ohio Railroad, Mount Clare Shops, South side of Pratt Street between Carey & Poppleton Streets, Baltimore, Independent City, MD

  5. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    DTIC Science & Technology

    2015-04-01

    [Snippet: fragment of the database's alphabetical, multi-column list of environmental sound names, including accordion, car backfire, car crash, car ignition, crushing a metal can, aerosol can, alarm clock, coffee pot whistling, bus stop-and-go, cuckoo clock, ferry horn, camera, and corduroy.]

  6. Using Thermal Radiation in Detection of Negative Obstacles

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Matthies, Larry H.

    2009-01-01

    A method of automated detection of negative obstacles (potholes, ditches, and the like) ahead of ground vehicles at night involves processing of imagery from thermal-infrared cameras aimed at the terrain ahead of the vehicles. The method is being developed as part of an overall obstacle-avoidance scheme for autonomous and semi-autonomous offroad robotic vehicles. The method could also be applied to help human drivers of cars and trucks avoid negative obstacles, a development that may entail only modest additional cost inasmuch as some commercially available passenger cars are already equipped with infrared cameras as aids for nighttime operation.

  7. The effects of spatially displaced visual feedback on remote manipulator performance

    NASA Technical Reports Server (NTRS)

    Smith, Randy L.; Stuart, Mark A.

    1989-01-01

    The effects of spatially displaced visual feedback on the operation of a camera-viewed remote manipulation task are analyzed. A remote manipulation task is performed by operators exposed to the following viewing conditions: direct view of the work site; normal camera view; reversed camera view; inverted/reversed camera view; and inverted camera view. The task completion times are statistically analyzed with a repeated-measures analysis of variance, and a Newman-Keuls pairwise comparison test is administered to the data. The reversed camera view is ranked third out of the four camera viewing conditions, while the normal viewing condition is found to be significantly slower than the direct viewing condition. It is shown that generalizations to remote manipulation applications based upon the results of direct manipulation studies are quite useful, but they should be made cautiously.

  8. 23. POWELL-TYPE CAR THROUGH GRIP REMOVAL DOOR: View looking through ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    23. POWELL-TYPE CAR THROUGH GRIP REMOVAL DOOR: View looking through grip removal door into interior of a Powell-type cable car. - San Francisco Cable Railway, Washington & Mason Streets, San Francisco, San Francisco County, CA

  9. South-West view, Street Car Waiting House, north and east elevations ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    South-West view, Street Car Waiting House, north and east elevations - North Philadelphia Station, Street Car Waiting House, 2900 North Broad Street, on northwest corner of Broad Street & Glenwood Avenue, Philadelphia, Philadelphia County, PA

  10. Spherical Camera

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Developed largely through a Small Business Innovation Research contract with Langley Research Center, Interactive Picture Corporation's IPIX technology provides spherical photography: immersive, 360-degree panoramic images. NASA found the technology appropriate for use in guiding space robots, in the space shuttle and space station programs, as well as in cryogenic wind tunnel research and for remote docking of spacecraft. Images of any location are captured in their entirety in a 360-degree immersive digital representation. The viewer can navigate to any desired direction within the image. Several car manufacturers already use IPIX to give viewers a look at their latest line-up of automobiles. Another application is non-invasive surgery: by using OmniScope, surgeons can look more closely at various parts of an organ with medical viewing instruments now in use. Potential applications of IPIX technology include viewing of homes for sale, hotel accommodations, museum sites, news events, and sports stadiums.

  11. Driver face recognition as a security and safety feature

    NASA Astrophysics Data System (ADS)

    Vetter, Volker; Giefing, Gerd-Juergen; Mai, Rudolf; Weisser, Hubert

    1995-09-01

    We present a driver face recognition system for comfortable access control and individual settings of automobiles. The primary goals are the prevention of car thefts and heavy accidents caused by unauthorized use (joy-riders), as well as the increase of safety through optimal settings, e.g. of the mirrors and the seat position. The person sitting in the driver's seat is observed automatically by a small video camera in the dashboard. All he has to do is behave cooperatively, i.e. look into the camera. A classification system validates his access. Only after a positive identification can the car be used and the driver-specific environment (e.g. seat position, mirrors, etc.) be set up to ensure the driver's comfort and safety. The driver identification system has been integrated into a Volkswagen research car. Recognition results are presented.

  12. South view; Interior view of covered ramp to Street Car ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    South view; Interior view of covered ramp to Street Car Waiting House from Station Building - North Philadelphia Station, 2900 North Broad Street, on northwest corner of Broad Street & Glenwood Avenue, Philadelphia, Philadelphia County, PA

  13. 1. GENERAL VIEW OF SAND HOUSE, TANK AND CAR SHELTER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. GENERAL VIEW OF SAND HOUSE, TANK AND CAR SHELTER LOOKING NORTHWEST. MINE CARS IN FOREGROUND. - Eureka No. 40, Sand House & Tank, East of State Route 56, North of Little Paint Creek, Scalp Level, Cambria County, PA

  14. 27. BOILER HOUSE, GENERAL VIEW LOOKING SOUTH, PAST COAL CAR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    27. BOILER HOUSE, GENERAL VIEW LOOKING SOUTH, PAST COAL CAR No. 9 TOWARD COAL CARS No. 11 & 8 - Delaware County Electric Company, Chester Station, Delaware River at South end of Ward Street, Chester, Delaware County, PA

  15. North-East view; Street Car Waiting House, south (front) and west ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    North-East view; Street Car Waiting House, south (front) and west elevations - North Philadelphia Station, Street Car Waiting House, 2900 North Broad Street, on northwest corner of Broad Street & Glenwood Avenue, Philadelphia, Philadelphia County, PA

  16. 20. TURNTABLE WITH CABLE CAR BAY & TAYLOR: View ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    20. TURNTABLE WITH CABLE CAR - BAY & TAYLOR: View to northwest of the Bay and Taylor turntable. The gripman and conductor are turning the car around. - San Francisco Cable Railway, Washington & Mason Streets, San Francisco, San Francisco County, CA

  17. 10. VIEW OF BOILER SHOP FROM TOP OF BOX CAR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. VIEW OF BOILER SHOP FROM TOP OF BOX CAR WITH CIRCULAR CAR SHOP IN BACKGROUND LOOKING NORTHEAST. - Baltimore & Ohio Railroad, Mount Clare Shops, South side of Pratt Street between Carey & Poppleton Streets, Baltimore, Independent City, MD

  18. 2. VIEW OF TIPPLE LOOKING WEST FROM TOP OF OVENS, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. VIEW OF TIPPLE LOOKING WEST FROM TOP OF OVENS, SHOWING ROBINS CAR SHAKER (ON RIGHT) WHERE COAL WAS UNLOADED AND CONVEYED TO LARRY CAR TIPPLE - Lucernemines Coke Works, Larry Car Tipple, East of Lucerne, Lucerne Mines, Indiana County, PA

  19. 24/7 security system: 60-FPS color EMCCD camera with integral human recognition

    NASA Astrophysics Data System (ADS)

    Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.

    2007-04-01

    An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron-multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and in monochrome under quarter-moonlight to overcast starlight illumination. Sixty-frame-per-second operation and progressive scanning minimize motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms to detect, localize, and track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars, and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets, the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.

  20. Surveillance Using Multiple Unmanned Aerial Vehicles

    DTIC Science & Technology

    2009-03-01

    BATCAM wingspan was 21” vs. Jodeh’s 9.1 ft, the BATCAM’s propulsion was electric vs. Jodeh’s gas engine, cameras were body-fixed vs. gimballed, and... [Table 3.1, BATCAM camera FOV angles: depression angle 49° (front camera) and 39° (side camera); horizontal FOV 48° for both; vertical FOV 40° for both.] ...by a quiet electric motor. The batteries can be recharged with a car cigarette lighter in less than an hour. Assembly of the wing airframe takes less than a minute, and

  1. First stereo video dataset with ground truth for remote car pose estimation using satellite markers

    NASA Astrophysics Data System (ADS)

    Gil, Gustavo; Savino, Giovanni; Pierini, Marco

    2018-04-01

    Leading causes of PTW (Powered Two-Wheeler) crashes and near misses in urban areas are failed or delayed predictions of the changing trajectories of other vehicles. Regrettably, misperception by both car drivers and motorcycle riders results in fatal or serious consequences for riders. Intelligent vehicles could provide early warning about possible collisions, helping to avoid the crash. There is evidence that stereo cameras can be used for estimating the heading angle of other vehicles, which is key to anticipating their imminent location, but there is limited heading ground truth data available in the public domain. Consequently, we employed a marker-based technique for creating ground truth of car pose and created a dataset∗ for computer vision benchmarking purposes. This dataset of a moving vehicle collected from a statically mounted stereo camera is a simplification of a complex and dynamic reality, which serves as a test bed for car pose estimation algorithms. The dataset contains the accurate pose of the moving obstacle, and realistic imagery including texture-less and non-lambertian surfaces (e.g. reflectance and transparency).

  2. 6. INTERIOR VIEW, LOOKING SOUTH, WITH CAR 'DETRUCKED AND TURNED ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. INTERIOR VIEW, LOOKING SOUTH, WITH CAR 'DETRUCKED AND TURNED UPSIDE DOWN' FOR WELDING UNDERSIDE OF THE CAR AND INSIDE OF THE ROOF. IN FOREGROUND ARE THE WHEELS REMOVED TO FACILITATE ASSEMBLY. - Pullman Standard Company Plant, Fabrication Assembly Shop, 401 North Twenty-fourth Street, Bessemer, Jefferson County, AL

  3. 46. INTERIOR VIEW TO SOUTH ON SECOND FLOOR: Interior view ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    46. INTERIOR VIEW TO SOUTH ON SECOND FLOOR: Interior view looking south along the east wall on the second floor of the powerhouse and car barn. Note the cable car truck in the foreground. - San Francisco Cable Railway, Washington & Mason Streets, San Francisco, San Francisco County, CA

  4. VIEW OF TRANSFER CAR (BATTERY-ELECTRIC POWERED) FROM BILLET YARD POSITIONED ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF TRANSFER CAR (BATTERY-ELECTRIC POWERED) FROM BILLET YARD POSITIONED FOR LOADING BILLETS INTO FURNACE; HEATER (BEHIND SCREEN IN CENTER) MOVES THE TRANSFER CAR INTO POSITION. BILLETS FROM THE TRANSFER CAR ARE PLACED ON HAND-OPERATED TURNTABLE. THE FURNACE IS NATURAL-GAS FIRED, WITH BILLETS HEATED AT NOT MORE THAN 2400 DEGREES FAHRENHEIT. - Cambria Iron Company, Gautier Works, 12" Mill, Clinton Street & Little Conemaugh River, Johnstown, Cambria County, PA

  5. Temporal Dynamics of Gully Evolution in a Small, Ephemeral Channel in a Semiarid Watershed

    NASA Astrophysics Data System (ADS)

    Nichols, Mary; Nearing, Mark

    2015-04-01

    Incised channels that terminate at vertical-walled gully heads are common features in semiarid watersheds. The geomorphic evolution of such channels is often dominated by migration of the headwall. The evolution of a headwall in a low-order channel on the USDA-ARS Walnut Gulch Experimental Watershed (WGEW) in southeastern Arizona has been monitored since 2004, and since 2012, time-lapse photography has been employed to observe the temporal dynamics at high resolution. A Canon A1300 off-the-shelf point-and-shoot digital camera mounted inside a weatherproof Pelican case has been taking 15 MP photographs since 2012. The camera power supply was modified to run from a 12 V car battery charged by a 25 W solar panel through a solar controller. During the runoff season from July through September, images were collected every 30 seconds; the time step was increased to 30 minutes during winter months. The field of view covers the headcut and its immediate surroundings. Runoff events were distinct flash floods in response to high-intensity rain. The temporal sequencing of the dominant erosion processes, including mass wasting, plunge pool erosion, and piping, is described. In addition, we present a description of the time-lapse camera system with suggestions for future improvements.

  6. 45. INTERIOR VIEW TO SOUTHWEST ON SECOND FLOOR: Interior view ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    45. INTERIOR VIEW TO SOUTHWEST ON SECOND FLOOR: Interior view towards southwest on second floor of main portion of the powerhouse and car barn. This space is used for repair and storage of cable cars. Note wooden trussed roof. - San Francisco Cable Railway, Washington & Mason Streets, San Francisco, San Francisco County, CA

  7. Laser-induced breakdown spectroscopy for the remote detection of explosives at level of fingerprints

    NASA Astrophysics Data System (ADS)

    Almaviva, S.; Palucci, A.; Lazic, V.; Menicucci, I.; Nuvoli, M.; Pistilli, M.; De Dominicis, L.

    2016-04-01

    We report the results of applying Laser-Induced Breakdown Spectroscopy (LIBS) to the detection of some common military explosives and their precursors deposited on a car's white-varnished exterior and on black interior or exterior car plastic. The residues were deposited with an artificial silicon finger, to simulate material manipulation by terrorists preparing a car bomb and leaving traces of explosives on parts of a car. LIBS spectra were acquired using a first prototype laboratory stand-off device, developed in the framework of the EU FP7 313077 project EDEN (End-user driven DEmo for CBRNe). The system operates at working distances of 8-30 m and collects the LIBS signal in the spectral range 240-840 nm. In this configuration, the target was moved precisely in the X-Y direction to simulate the scanning system to be implemented later. The system is equipped with two colour cameras, one for a wide view of the scene and another for imaging at very high magnification, capable of discerning fingerprints on a target. The spectral features of each examined substance were identified and compared to those belonging to the substrate and the surrounding air, and to those belonging to possible common interferents. These spectral differences are discussed and interpreted. The obtained results show that the detection and discrimination of nitro-based compounds like RDX, PETN, ammonium nitrate (AN), and urea nitrate (UN) from organic interfering substances like diesel, greasy lubricants, greasy adhesives or oils in fingerprint concentrations, at stand-off distances of several to tens of meters, is feasible.

  8. 9. FIRST FLOOR CAR BARN SPACE. VIEW TO NORTHWEST. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. FIRST FLOOR CAR BARN SPACE. VIEW TO NORTHWEST. - Commercial & Industrial Buildings, Key City Electric Street Railroad, Powerhouse & Storage Barn, Eighth & Washington Streets, Dubuque, Dubuque County, IA

  9. 7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION EQUIPMENT AND STORAGE CABINET. - Variable Angle Launcher Complex, Camera Stations, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  10. Assimilating Eulerian and Lagrangian data in traffic-flow models

    NASA Astrophysics Data System (ADS)

    Xia, Chao; Cochrane, Courtney; DeGuire, Joseph; Fan, Gaoyang; Holmes, Emma; McGuirl, Melissa; Murphy, Patrick; Palmer, Jenna; Carter, Paul; Slivinski, Laura; Sandstede, Björn

    2017-05-01

    Data assimilation of traffic flow remains a challenging problem. One difficulty is that data come from different sources ranging from stationary sensors and camera data to GPS and cell phone data from moving cars. Sensors and cameras give information about traffic density, while GPS data provide information about the positions and velocities of individual cars. Previous methods for assimilating Lagrangian data collected from individual cars relied on specific properties of the underlying computational model or its reformulation in Lagrangian coordinates. These approaches make it hard to assimilate both Eulerian density and Lagrangian positional data simultaneously. In this paper, we propose an alternative approach that allows us to assimilate both Eulerian and Lagrangian data. We show that the proposed algorithm is accurate and works well in different traffic scenarios and regardless of whether ensemble Kalman or particle filters are used. We also show that the algorithm is capable of estimating parameters and assimilating real traffic observations and synthetic observations obtained from microscopic models.
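
    The paper's own algorithm is not reproduced here; as a hedged illustration of the machinery involved, the following is a generic stochastic ensemble Kalman filter (EnKF) analysis step in which a single observation operator can stack Eulerian densities and Lagrangian car positions into one observation vector (all names, shapes, and the operator itself are illustrative):

        import numpy as np

        def enkf_update(ensemble, observe, y, obs_cov, rng=np.random.default_rng(0)):
            """Stochastic EnKF analysis step.

            ensemble : (n_members, n_state) array of model states
            observe  : maps a state vector to observation space
                       (e.g. stacked traffic densities and car positions)
            y        : observed vector
            obs_cov  : observation-error covariance, (n_obs, n_obs)
            """
            X = ensemble
            HX = np.array([observe(x) for x in X])      # forecast observations
            A, HA = X - X.mean(0), HX - HX.mean(0)
            n = X.shape[0]
            P_xy = A.T @ HA / (n - 1)                   # state-observation covariance
            P_yy = HA.T @ HA / (n - 1) + obs_cov        # innovation covariance
            K = P_xy @ np.linalg.inv(P_yy)              # Kalman gain
            # perturb observations so the analysis ensemble keeps correct spread
            Y = y + rng.multivariate_normal(np.zeros(len(y)), obs_cov, size=n)
            return X + (Y - HX) @ K.T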

  11. 3. CONTEXT VIEW OF GARAGE COMPLEX AT SOUTHEAST END OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. CONTEXT VIEW OF GARAGE COMPLEX AT SOUTHEAST END OF OPERATOR'S CAMP SHOWING MECHANIC'S GARAGE NO. 48 AT LEFT, 4-CAR GARAGE AT CENTER, AND 3-CAR GARAGE AT RIGHT. VIEW TO NORTH-NORTHWEST - Holter Hydroelectric Facility, End of Holter Dam Road, Wolf Creek, Lewis and Clark County, MT

  12. 1. CONTEXT VIEW OF GARAGE COMPLEX AT SOUTHEAST END OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. CONTEXT VIEW OF GARAGE COMPLEX AT SOUTHEAST END OF OPERATOR'S CAMP SHOWING MECHANIC'S GARAGE AT RIGHT, 4-CAR GARAGE AT CENTER, AND 3-CAR GARAGE AT LEFT. VIEW TO SOUTHEAST. - Holter Hydroelectric Facility, End of Holter Dam Road, Wolf Creek, Lewis and Clark County, MT

  13. 2. CONTEXT VIEW OF GARAGE COMPLEX AT SOUTHEAST END OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. CONTEXT VIEW OF GARAGE COMPLEX AT SOUTHEAST END OF OPERATOR'S CAMP SHOWING MECHANIC'S GARAGE AT RIGHT, 4-CAR GARAGE AT CENTER, AND 3-CAR GARAGE AT LEFT. VIEW TO SOUTHWEST. - Holter Hydroelectric Facility, End of Holter Dam Road, Wolf Creek, Lewis and Clark County, MT

  14. 6. GENERAL VIEW OF CUPOLA AND SECOND FLOOR OF PASSENGER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. GENERAL VIEW OF CUPOLA AND SECOND FLOOR OF PASSENGER CAR SHOP - Baltimore & Ohio Railroad, Mount Clare Passenger Car Shop, Southwest corner of Pratt & Poppleton Streets, Baltimore, Independent City, MD

  15. West view, general; Station Building, covered ramp, and Street Car ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    West view, general; Station Building, covered ramp, and Street Car Waiting House - North Philadelphia Station, 2900 North Broad Street, on northwest corner of Broad Street & Glenwood Avenue, Philadelphia, Philadelphia County, PA

  16. 44. SECOND FLOOR 'ANNEX' INTERIOR VIEW TO SOUTHWEST: Interior ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    44. SECOND FLOOR 'ANNEX' - INTERIOR VIEW TO SOUTHWEST: Interior view towards southwest on second floor of the powerhouse 'annex.' Note the steel column and beam construction and the old shunt car formerly used to move cable cars around the yard. - San Francisco Cable Railway, Washington & Mason Streets, San Francisco, San Francisco County, CA

  17. The time resolved SBS and SRS research in heavy water and its application in CARS

    NASA Astrophysics Data System (ADS)

    Liu, Jinbo; Gai, Baodong; Yuan, Hong; Sun, Jianfeng; Zhou, Xin; Liu, Di; Xia, Xusheng; Wang, Pengyuan; Hu, Shu; Chen, Ying; Guo, Jingwei; Jin, Yuqi; Sang, Fengting

    2018-05-01

    We present the time-resolved character of stimulated Brillouin scattering (SBS) and backward stimulated Raman scattering (BSRS) in heavy water and its application in Coherent Anti-Stokes Raman Scattering (CARS) technique. A nanosecond laser from a frequency-doubled Nd: YAG laser is introduced into a heavy water cell, to generate SBS and BSRS beams. The SBS and BSRS beams are collinear, and their time resolved characters are studied by a streak camera, experiment show that they are ideal source for an alignment-free CARS system, and the time resolved property of SBS and BSRS beams could affect the CARS efficiency significantly. By inserting a Dye cuvette to the collinear beams, the time-overlapping of SBS and BSRS could be improved, and finally the CARS efficiency is increased, even though the SBS energy is decreased. Possible methods to improve the efficiency of this CARS system are discussed too.

  18. 3. PERSPECTIVE VIEW OF THE NORTH FACADE AND WEST SIDE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. PERSPECTIVE VIEW OF THE NORTH FACADE AND WEST SIDE OF THE SOUTHERN-MOST CAR BARN AT CENTRAL AVENUE AND BOND STREET - Johnstown Passenger Railway Company, Car Barns, 726 Central Avenue, Johnstown, Cambria County, PA

  19. 24. PHOTOCOPY OF CA. 1936 VIEW OF AUBURN MOTOR CO. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    24. PHOTOCOPY OF CA. 1936 VIEW OF AUBURN MOTOR CO. OFFICE BUILDING SHOWING AN 'OFFICIAL CAR' TOWING A RACING CAR ON SMALL TRAILER - Lexington Motor Company, Eighteenth Street & Columbia Avenue, Connersville, Fayette County, IN

  20. 2. SOUTHEAST VIEW OF TRIPPER CAR ON THE FLUX STORAGE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. SOUTHEAST VIEW OF TRIPPER CAR ON THE FLUX STORAGE FLOOR OF THE FURNACE AISLE IN THE BOP SHOP. - U.S. Steel Duquesne Works, Basic Oxygen Steelmaking Plant, Along Monongahela River, Duquesne, Allegheny County, PA

  1. West view, general; Station Building, covered ramp, and Street Car ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    West view, general; Station Building, covered ramp, and Street Car Waiting House, right to left - North Philadelphia Station, 2900 North Broad Street, on northwest corner of Broad Street & Glenwood Avenue, Philadelphia, Philadelphia County, PA

  2. 2. VIEW SOUTH, INCLINE PLANE CAR, INCLINE PLANE TRACK, UPPER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. VIEW SOUTH, INCLINE PLANE CAR, INCLINE PLANE TRACK, UPPER STATION. - Monongahela Incline Plane, Connecting North side of Grandview Avenue at Wyoming Street with West Carson Street near Smithfield Street, Pittsburgh, Allegheny County, PA

  3. Principal axis-based correspondence between multiple cameras for people tracking.

    PubMed

    Hu, Weiming; Hu, Min; Zhou, Xue; Tan, Tieniu; Lou, Jianguang; Maybank, Steve

    2006-04-01

    Visual surveillance using multiple cameras has attracted increasing interest in recent years. Establishing correspondence between multiple cameras is one of the most important and basic problems that multi-camera surveillance raises. In this paper, we propose a simple and robust method, based on the principal axes of people, to match people across multiple cameras. The correspondence likelihood, reflecting the similarity of pairs of principal axes of people, is constructed according to the relationship between the "ground-points" of people detected in each camera view and the intersections of principal axes detected in different camera views and transformed to the same view. Our method has the following desirable properties: 1) Camera calibration is not needed. 2) Accurate motion detection and segmentation are less critical, owing to the robustness of the principal-axis-based feature to noise. 3) Based on the fused data derived from the correspondence results, the positions of people in each camera view can be accurately located even when the people are partially occluded in all views. Experimental results on several real video sequences from outdoor environments demonstrate the effectiveness, efficiency, and robustness of our method.
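
    As a hedged sketch of the geometry involved (not the authors' full likelihood): given a ground-plane homography H from camera A to camera B, an image line transfers between views by the inverse-transpose rule, and the transferred principal axis intersected with the axis detected in B should land near that person's ground point in B. In homogeneous coordinates (the homography and point coordinates below are placeholders):

        import numpy as np

        def line_through(p, q):
            """Homogeneous line through two image points given as (x, y)."""
            return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

        def transfer_line(l, H):
            """Map an image line under homography H (lines transform by H^-T)."""
            return np.linalg.inv(H).T @ l

        def intersect(l1, l2):
            """Intersection point of two homogeneous lines."""
            p = np.cross(l1, l2)
            return p[:2] / p[2]

        H = np.eye(3)                        # hypothetical ground-plane homography A -> B
        axis_A = line_through((100, 50), (102, 220))   # person's axis seen in camera A
        axis_B = line_through((310, 40), (308, 200))   # same person's axis in camera B
        ground_point_B = intersect(transfer_line(axis_A, H), axis_B)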

  4. 2. GENERAL VIEW LOOKING NORTH, SHOWING THE SOUTHERN FACADE OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. GENERAL VIEW LOOKING NORTH, SHOWING THE SOUTHERN FACADE OF THE NORTHERN-MOST CAR BARN (CONSTRUCTED 1893) AT CENTRAL AVENUE AND BOND STREET - Johnstown Passenger Railway Company, Car Barns, 726 Central Avenue, Johnstown, Cambria County, PA

  5. 3. EXTERIOR VIEW TO THE NORTHEAST OF A RAILROAD CAR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. EXTERIOR VIEW TO THE NORTHEAST OF A RAILROAD CAR ON THE TRACKS AND THE PARTS OF AN ENGINE STAND. - Nevada Test Site, Pluto Facility, Area 26, Wahmonie Flats, Cane Spring Road, Mercury, Nye County, NV

  6. 32. DETAIL VIEW OF CAMERA PIT SOUTH OF LAUNCH PAD ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    32. DETAIL VIEW OF CAMERA PIT SOUTH OF LAUNCH PAD WITH CAMERA AIMED AT LAUNCH DECK; VIEW TO NORTHEAST. - Cape Canaveral Air Station, Launch Complex 17, Facility 28402, East end of Lighthouse Road, Cape Canaveral, Brevard County, FL

  7. Design of intelligent vehicle control system based on single chip microcomputer

    NASA Astrophysics Data System (ADS)

    Zhang, Congwei

    2018-06-01

    The smart car's microprocessor is the KL25ZV128VLK4 from the Freescale family of single-chip microcomputers, and the image sensor is the OV7725 CMOS digital camera. The acquired track data are processed with the corresponding algorithm to obtain track sideline information. Pulse-width modulation (PWM) is used to drive the motor and servo, and motor speed control and servo steering control are realized with a digital incremental PID algorithm. In the project design, IAR Embedded Workbench IDE is used as the software development platform to program and debug the micro-control module, the camera image-processing module, the hardware power-distribution module, and the motor-drive and servo-control modules, completing the design of the intelligent car control system.
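
    A minimal sketch of the incremental form of digital PID mentioned above, in which each step yields a control increment du(k) = Kp*(e(k)-e(k-1)) + Ki*e(k) + Kd*(e(k)-2e(k-1)+e(k-2)) that is added to a PWM duty cycle (gains and numbers are illustrative, not from the paper):

        class IncrementalPID:
            """Digital incremental PID: each update returns a control increment."""
            def __init__(self, kp, ki, kd):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.e1 = 0.0   # e[k-1]
                self.e2 = 0.0   # e[k-2]

            def update(self, setpoint, measured):
                e = setpoint - measured
                du = (self.kp * (e - self.e1)
                      + self.ki * e
                      + self.kd * (e - 2 * self.e1 + self.e2))
                self.e2, self.e1 = self.e1, e
                return du

        # e.g. regulate motor speed by nudging a PWM duty cycle
        pid = IncrementalPID(kp=4e-4, ki=1e-4, kd=5e-5)          # illustrative gains
        duty = 0.5
        duty += pid.update(setpoint=1200.0, measured=1100.0)     # rpm error -> increment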

  8. 14. VIEW TO WESTSOUTHWEST ACROSS CAR TEST TRACT SITE TO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. VIEW TO WEST-SOUTHWEST ACROSS CAR TEST TRACT SITE TO NORTH END OF ASSEMBLY PLANT, SHOWING NORTH END ADDITIONS. - Ford Motor Company Long Beach Assembly Plant, Assembly Building, 700 Henry Ford Avenue, Long Beach, Los Angeles County, CA

  9. Backing collisions: a study of drivers' eye and backing behaviour using combined rear-view camera and sensor systems.

    PubMed

    Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe

    2010-04-01

    Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 years result in death. Technology for assisting drivers when backing has had limited success in preventing backing crashes. Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor camera system; controls were not. Three crash scenarios were introduced. The setting was a parking facility at UMass Amherst, USA. The subjects were 46 drivers (33 men, 13 women), average age 29 years, who were Massachusetts residents licensed within the USA for an average of 9.3 years. Vehicles were equipped with a rear-view camera and a sensor-system-based parking aid. The outcome measures were subjects' eye fixations while driving and researchers' observations of collisions with objects during backing. Only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system.

  10. Backing collisions: a study of drivers’ eye and backing behaviour using combined rear-view camera and sensor systems

    PubMed Central

    Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe

    2012-01-01

    Context: Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 years result in death. Technology for assisting drivers when backing has had limited success in preventing backing crashes. Objectives: Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? Design: 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor camera system; controls were not. Three crash scenarios were introduced. Setting: Parking facility at UMass Amherst, USA. Subjects: 46 drivers (33 men, 13 women), average age 29 years, who were Massachusetts residents licensed within the USA for an average of 9.3 years. Interventions: Vehicles equipped with a rear-view camera and a sensor-system-based parking aid. Main Outcome Measures: Subjects' eye fixations while driving and researchers' observations of collisions with objects during backing. Results: Only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. Conclusions: This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system. PMID:20363812

  11. Stereo depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Vonsydow, Marika

    1988-01-01

    In teleoperation, a typical application of stereo vision is to view a work space located a short distance (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors on the order of 2 cm are measured. A geometric analysis was made of the distortion of the fronto-parallel plane of divergence for stereo TV viewing, and the results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration giving high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolutions but cause greater depth distortions. Thus with larger intercamera distances, operators will make greater depth errors (because of the greater distortions), but will be more certain that they are not errors (because of the higher resolution).
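
    For context, the scaling behind the resolution side of this trade-off can be written down from standard stereo geometry (a textbook relation, not an equation quoted from the paper): with baseline b, lens focal length f, and disparity measurement error δd, a point at depth Z carries a depth uncertainty of roughly

        \[
          \delta Z \approx \frac{Z^{2}}{b\,f}\,\delta d ,
        \]

    so increasing the intercamera distance b (or the focal length f) shrinks δZ at a given viewing distance, while the convergence of the cameras is what introduces the depth distortion that grows with b.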

  12. Two-Camera Acquisition and Tracking of a Flying Target

    NASA Technical Reports Server (NTRS)

    Biswas, Abhijit; Assad, Christopher; Kovalik, Joseph M.; Pain, Bedabrata; Wrigley, Chris J.; Twiss, Peter

    2008-01-01

    A method and apparatus have been developed to solve the problem of automated acquisition and tracking, from a location on the ground, of a luminous moving target in the sky. The method involves the use of two electronic cameras: (1) a stationary camera having a wide field of view, positioned and oriented to image the entire sky; and (2) a camera that has a much narrower field of view (a few degrees wide) and is mounted on a two-axis gimbal. The wide-field-of-view stationary camera is used to initially identify the target against the background sky. So that the approximate position of the target can be determined, pixel locations on the image-detector plane in the stationary camera are calibrated with respect to azimuth and elevation. The approximate target position is used to initially aim the gimballed narrow-field-of-view camera in the approximate direction of the target. Next, the narrow-field-of-view camera locks onto the target image, and thereafter the gimbals are actuated as needed to maintain lock and thereby track the target with precision greater than that attainable by use of the stationary camera.
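
    A minimal sketch of the calibration lookup this implies, assuming azimuth/elevation have been measured on a coarse grid of detector pixels and are bilinearly interpolated in between (the grid and both tables below are placeholders, not the instrument's calibration):

        import numpy as np

        grid = np.arange(0, 1025, 64)                  # sampled pixel coordinates
        az_table = np.linspace(0.0, 360.0, grid.size)[None, :].repeat(grid.size, 0)
        el_table = np.linspace(0.0, 90.0, grid.size)[:, None].repeat(grid.size, 1)

        def pixel_to_azel(u, v):
            """Bilinearly interpolate calibrated azimuth/elevation at pixel (u, v)."""
            i = np.clip(np.searchsorted(grid, v) - 1, 0, grid.size - 2)
            j = np.clip(np.searchsorted(grid, u) - 1, 0, grid.size - 2)
            tv = (v - grid[i]) / (grid[i + 1] - grid[i])
            tu = (u - grid[j]) / (grid[j + 1] - grid[j])
            def lerp2(T):
                top = T[i, j] * (1 - tu) + T[i, j + 1] * tu
                bot = T[i + 1, j] * (1 - tu) + T[i + 1, j + 1] * tu
                return top * (1 - tv) + bot * tv
            return lerp2(az_table), lerp2(el_table)

        az, el = pixel_to_azel(500.5, 312.0)   # initial aim for the gimballed camera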

  13. Using Google Streetview Panoramic Imagery for Geoscience Education

    NASA Astrophysics Data System (ADS)

    De Paor, D. G.; Dordevic, M. M.

    2014-12-01

    Google Streetview is a feature of Google Maps and Google Earth that allows viewers to switch from map or satellite view to 360° panoramic imagery recorded close to the ground. Most panoramas are recorded by Google engineers using special cameras mounted on the roofs of cars. Bicycles, snowmobiles, and boats have also been used, and sometimes the camera has been mounted on a backpack for off-road use by hikers and skiers or attached to scuba-diving gear for "Underwater Streetview (sic)." Streetview panoramas are linked together so that the viewer can change viewpoint by clicking forward and reverse buttons. They therefore create a 4-D touring effect. As part of the GEODE project ("Google Earth for Onsite and Distance Education"), we are experimenting with the use of Streetview imagery for geoscience education. Our web-based test application allows instructors to select locations for students to study. Students are presented with a set of questions or tasks that they must address by studying the panoramic imagery. Questions include identification of rock types, structures such as faults, and general geological setting. The student view is locked into Streetview mode until they submit their answers, whereupon the map and satellite views become available, allowing students to zoom out and verify their location on Earth. Student learning is scaffolded by automatic computerized feedback. There are lots of existing Streetview panoramas with rich geological content. Additionally, instructors and members of the general public can create panoramas, including 360° Photo Spheres, by stitching images taken with their mobile devices and submitting them to Google for evaluation and hosting. A multi-thousand-dollar, multi-directional camera and mount can be purchased from DIY-streetview.com. This allows power users to generate their own high-resolution panoramas. A cheaper, 360° video camera is soon to be released according to geonaute.com. Thus there are opportunities for geoscience educators both to use existing Streetview imagery and to generate new imagery for specific locations of geological interest. The GEODE team includes the authors and: H. Almquist, C. Bentley, S. Burgin, C. Cervato, G. Cooper, P. Karabinos, T. Pavlis, J. Piatek, B. Richards, J. Ryan, R. Schott, K. St. John, B. Tewksbury, and S. Whitmeyer.

  14. Depth Perception In Remote Stereoscopic Viewing Systems

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Von Sydow, Marika

    1989-01-01

    Report describes theoretical and experimental studies of perception of depth by human operators through stereoscopic video systems. Purpose of such studies is to optimize dual-camera configurations used to view workspaces of remote manipulators at distances of 1 to 3 m from cameras. According to analysis, static stereoscopic depth distortion decreased, without decreasing stereoscopic depth resolution, by increasing camera-to-object and intercamera distances and camera focal length. Further predicts dynamic stereoscopic depth distortion reduced by rotating cameras around center of circle passing through point of convergence of viewing axes and first nodal points of two camera lenses.

  15. 52. View of sitdown cable car, cable way, and stream ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    52. View of sit-down cable car, cable way, and stream gaging station, looking southeast. Photo by Robin Lee Tedder, Puget Power, 1989. - Puget Sound Power & Light Company, White River Hydroelectric Project, 600 North River Avenue, Dieringer, Pierce County, WA

  16. 51. View of sitdown cable car and cable way for ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    51. View of sit-down cable car and cable way for stream gaging, looking west. Photo by Robin Lee Tedder, Puget Power, 1989. - Puget Sound Power & Light Company, White River Hydroelectric Project, 600 North River Avenue, Dieringer, Pierce County, WA

  17. Relationship between Main Civilian Occupation and Army General Classification Test Standard Score. Part 2

    DTIC Science & Technology

    1945-03-07

    [Fragment of an OCR-scanned table matching civilian occupations (with occupation codes) to AGCT standard scores; legible entries include: Cameraman, Motion Picture (043), 115; Canvas Cover Repairman (044); Car Carpenter, Railway (046); Car Mechanic ...; Film Editor, Motion Picture (131); Filter Operator, Water Supply (083), 10; Fingerprinter (307), 30; Fire Fighter (383), 128; ... Mechanic (322); Registered Nurse (225); Repairman, Camera (042); Repairman, Canvas Cover (044); Repairman, Central Office (095) ...]

  18. Protective laser beam viewing device

    DOEpatents

    Neil, George R.; Jordan, Kevin Carl

    2012-12-18

    A protective laser beam viewing system or device including a camera selectively sensitive to laser light wavelengths and a viewing screen receiving images from the laser sensitive camera. According to a preferred embodiment of the invention, the camera is worn on the head of the user or incorporated into a goggle-type viewing display so that it is always aimed at the area of viewing interest to the user and the viewing screen is incorporated into a video display worn as goggles over the eyes of the user.

  19. 5. VIEW LOOKING DOWN ON TOP OF ELEVATOR CAR FROM ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. VIEW LOOKING DOWN ON TOP OF ELEVATOR CAR FROM THIRD FLOOR, SHOWING CROSSHEAD AND BROKEN ROPE SAFETY STOP MECHANISM INSIDE; GUIDES AT LOWER LEFT AND UPPER RIGHT; OPERATING ROPE AT LEFT - 72 Marlborough Street, Residential Hydraulic Elevator, Boston, Suffolk County, MA

  20. VIEW OF SEMIDISMANTLED HOPPER CAR ON THE ORE TRESTLE FACING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF SEMI-DISMANTLED HOPPER CAR ON THE ORE TRESTLE FACING NORTHWEST WITH OPEN HEARTH BUILDING IN THE BACKGROUND AND THE INGOT MOLD CONDITIONING BUILDING TO THE RIGHT. - Pittsburgh Steel Company, Monessen Works, Open Hearth Plant, Donner Avenue, Monessen, Westmoreland County, PA

  1. 72. DETAIL VIEW OF TRAM SUSPENSION CABLE OILING CAR. NOTE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    72. DETAIL VIEW OF TRAM SUSPENSION CABLE OILING CAR. NOTE CIRCULAR TRACTION CABLE CLAMP IN CENTER AND OIL FEED TO CABLE BETWEEN TWO RIGHT-HAND WHEELS. OIL REDUCED FRICTION AND RUST. - Shenandoah-Dives Mill, 135 County Road 2, Silverton, San Juan County, CO

  2. VIEW OF TURNTABLE, WITH CAR MACHINE SHOP AND PLANING MILL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF TURNTABLE, WITH CAR MACHINE SHOP AND PLANING MILL IN THE BACKGROUND AND ERECTING/MACHINE SHOP AT RIGHT, LOOKING EAST. ATSF 5021 2-10-4 STEAM LOCOMOTIVE IS ON TURNTABLE. - Southern Pacific, Sacramento Shops, 111 I Street, Sacramento, Sacramento County, CA

  3. 360 deg Camera Head for Unmanned Sea Surface Vehicles

    NASA Technical Reports Server (NTRS)

    Townsend, Julie A.; Kulczycki, Eric A.; Willson, Reginald G.; Huntsberger, Terrance L.; Garrett, Michael S.; Trebi-Ollennu, Ashitey; Bergh, Charles F.

    2012-01-01

    The 360 camera head consists of a set of six color cameras arranged in a circular pattern such that their overlapping fields of view give a full 360 view of the immediate surroundings. The cameras are enclosed in a watertight container along with support electronics and a power distribution system. Each camera views the world through a watertight porthole. To prevent overheating or condensation in extreme weather conditions, the watertight container is also equipped with an electrical cooling unit and a pair of internal fans for circulation.
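
    The ring-coverage arithmetic is simple enough to check directly; a toy sketch (only the six-camera count comes from the text, the per-camera field of view is an assumption):

        N_CAMERAS = 6
        CAMERA_HFOV_DEG = 75.0               # assumed per-camera horizontal FOV
        spacing = 360.0 / N_CAMERAS          # angular spacing of the optical axes
        overlap = CAMERA_HFOV_DEG - spacing  # view shared by each adjacent pair
        assert overlap > 0, "fields of view would leave gaps in the 360 deg ring"
        print(f"{spacing:.0f} deg spacing, {overlap:.0f} deg overlap per adjacent pair")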

  4. Hyperspectral Image-Based Night-Time Vehicle Light Detection Using Spectral Normalization and Distance Mapper for Intelligent Headlight Control.

    PubMed

    Kim, Heekang; Kwon, Soon; Kim, Sungho

    2016-07-08

    This paper proposes a vehicle light detection method using a hyperspectral camera instead of a Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) camera for adaptive car headlamp control. To apply Intelligent Headlight Control (IHC), the vehicle headlights need to be detected. Headlights comprise a variety of lighting sources, such as Light-Emitting Diodes (LEDs), High-Intensity Discharge (HID) lamps, and halogen lamps; rear lamps likewise use LED and halogen sources. This paper builds on recent research in IHC. Some problems exist in the detection of headlights from CCD or CMOS images, such as the erroneous detection of street lights, sign lights, and the reflection plate of the ego-car. To solve these problems, this study uses hyperspectral images, because they have hundreds of bands and provide more information than a CCD or CMOS camera. Recent methods for detecting headlights use the Spectral Angle Mapper (SAM), the Spectral Correlation Mapper (SCM), and the Euclidean Distance Mapper (EDM). The experimental results highlight the feasibility of the proposed method for three types of lights (LED, HID, and halogen).
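
    The three mappers named here have compact standard definitions; a minimal sketch (the reference and pixel spectra are made-up four-band examples):

        import numpy as np

        def sam(x, r):
            """Spectral Angle Mapper: angle (radians) between spectra x and r."""
            c = np.dot(x, r) / (np.linalg.norm(x) * np.linalg.norm(r))
            return np.arccos(np.clip(c, -1.0, 1.0))

        def scm(x, r):
            """Spectral Correlation Mapper: SAM applied to mean-centred spectra."""
            return sam(x - x.mean(), r - r.mean())

        def edm(x, r):
            """Euclidean Distance Mapper: straight-line distance between spectra."""
            return np.linalg.norm(x - r)

        led_ref = np.array([0.10, 0.40, 0.90, 0.30])   # hypothetical LED reference
        pixel = np.array([0.12, 0.38, 0.95, 0.28])     # hypothetical pixel spectrum
        scores = sam(pixel, led_ref), scm(pixel, led_ref), edm(pixel, led_ref)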

  5. Fly-through viewpoint video system for multi-view soccer movie using viewpoint interpolation

    NASA Astrophysics Data System (ADS)

    Inamoto, Naho; Saito, Hideo

    2003-06-01

    This paper presents a novel method for virtual view generation that allows viewers to fly through a real soccer scene. A soccer match is captured by multiple cameras at a stadium, and images of arbitrary viewpoints are synthesized by view interpolation between the two real camera images nearest the given viewpoint. In the proposed method, the cameras do not need to be strongly calibrated; the epipolar geometry between the cameras is sufficient for the view interpolation. The method can therefore easily be applied to a dynamic event even in a large space, because the effort of camera calibration is reduced. A soccer scene is classified into several regions and virtual view images are generated based on the epipolar geometry in each region. Superimposition of the images completes virtual views for the whole soccer scene. An application for fly-through observation of a soccer match is introduced, together with the view-synthesis algorithm and experimental results.

  6. 36. BOILER HOUSE, GENERAL VIEW LOOKING TOWARD COAL CAR No. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    36. BOILER HOUSE, GENERAL VIEW LOOKING TOWARD COAL CAR No. 6 (NOTE: COAL DISTRIBUTER HOPPER & CONVEYOR THAT RUNS NORTH TO SOUTH BETWEEN TRACKS ON EAST TOWER SIDE) - Delaware County Electric Company, Chester Station, Delaware River at South end of Ward Street, Chester, Delaware County, PA

  7. Web Camera Use of Mothers and Fathers When Viewing Their Hospitalized Neonate.

    PubMed

    Rhoads, Sarah J; Green, Angela; Gauss, C Heath; Mitchell, Anita; Pate, Barbara

    2015-12-01

    Mothers and fathers of neonates hospitalized in a neonatal intensive care unit (NICU) differ in their experiences related to NICU visitation. To describe the frequency and length of maternal and paternal viewing of their hospitalized neonates via a Web camera. A total of 219 mothers and 101 fathers used the Web camera that allows 24/7 NICU viewing from September 1, 2010, to December 31, 2012, which included 40 mother and father dyads. We conducted a review of the Web camera's Web site log-on records in this nonexperimental, descriptive study. Mothers and fathers had a significant difference in the mean number of log-ons to the Web camera system (P = .0293). Fathers virtually visited the NICU less often than mothers, but there was not a statistical difference between mothers and fathers in terms of the mean total number of minutes viewing the neonate (P = .0834) or in the maximum number of minutes of viewing in 1 session (P = .6924). Patterns of visitations over time were not measured. Web camera technology could be a potential intervention to aid fathers in visiting their neonates. Both parents should be offered virtual visits using the Web camera and oriented regarding how to use the Web camera. These findings are important to consider when installing Web cameras in a NICU. Future research should continue to explore Web camera use in NICUs.

  8. Estimation of canopy carotenoid content of winter wheat using multi-angle hyperspectral data

    NASA Astrophysics Data System (ADS)

    Kong, Weiping; Huang, Wenjiang; Liu, Jiangui; Chen, Pengfei; Qin, Qiming; Ye, Huichun; Peng, Dailiang; Dong, Yingying; Mortimer, A. Hugh

    2017-11-01

    Precise estimation of carotenoid (Car) content in crops using remote sensing data could be helpful for agricultural resources management. Conventional methods for Car content estimation have mostly been based on reflectance data acquired from the nadir direction. However, reflectance acquired in this direction is strongly influenced by canopy structure and soil background reflectance. Off-nadir observation is less affected, and multi-angle viewing data have been shown to contain additional information rarely exploited for crop Car content estimation. The objective of this study was to explore the potential of multi-angle observation data for estimating winter wheat canopy Car content. Canopy spectral reflectance was measured from nadir as well as from a series of off-nadir directions during different growing stages of winter wheat, with concurrent canopy Car content measurements. Correlation analyses were performed between Car content and the original and continuum-removed spectral reflectance. Spectral features and previously published indices were derived from data obtained at different viewing angles and were tested for Car content estimation. Results showed that spectral features and indices obtained from backscattering directions between 20° and 40° view zenith angle had a stronger correlation with Car content than those from the nadir direction, with the strongest correlation observed at about the 30° backscattering direction. The spectral absorption depth at 500 nm derived from spectral data obtained at the 30° backscattering direction was found to greatly reduce the differences induced by plant cultivars, and it was the most suitable for winter wheat canopy Car estimation, with a coefficient of determination of 0.79 and a root mean square error of 19.03 mg/m2. This work indicates the importance of taking viewing geometry effects into account when using spectral features/indices and provides new insight into the application of multi-angle remote sensing for the estimation of crop physiology.
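
    A hedged sketch of the band-depth computation behind "spectral absorption depth at 500 nm" (continuum removal over the feature; the shoulder wavelengths and reflectance values are chosen for illustration, and the paper's exact processing may differ):

        import numpy as np

        wav = np.arange(450, 561, 10)    # nm, around the 500 nm Car absorption feature
        refl = np.array([0.08, 0.07, 0.06, 0.05, 0.045, 0.04,
                         0.05, 0.06, 0.08, 0.10, 0.12, 0.14])

        # linear continuum between the feature shoulders (here 450 nm and 560 nm)
        continuum = np.interp(wav, [wav[0], wav[-1]], [refl[0], refl[-1]])
        removed = refl / continuum                 # continuum-removed reflectance
        depth_500 = 1.0 - removed[wav == 500][0]   # absorption depth at 500 nm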

  9. Effects of camera location on the reconstruction of 3D flare trajectory with two cameras

    NASA Astrophysics Data System (ADS)

    Özsaraç, Seçkin; Yeşilkaya, Muhammed

    2015-05-01

    Flares are used as valuable electronic warfare assets in the battle against infrared guided missiles. The trajectory of the flare is one of the most important factors that determine the effectiveness of the countermeasure. Reconstruction of the three-dimensional (3D) position of a point seen by multiple cameras is a common problem. Camera placement, camera calibration, determination of corresponding pixels between the images of different cameras, and the triangulation algorithm all affect the performance of 3D position estimation. In this paper, we specifically investigate, by simulation, the effects of camera placement on flare trajectory estimation performance. First, the 3D trajectories of a flare and of the aircraft that dispenses it are generated with simple motion models. Then, we place two virtual ideal pinhole camera models at different locations. Assuming the cameras track the aircraft perfectly, the view vectors of the cameras are computed. Afterwards, using the view vector of each camera and the 3D position of the flare, the image-plane coordinates of the flare in both cameras are computed from the field-of-view (FOV) values. To increase the fidelity of the simulation, we use two sources of error. One models the uncertainties in the determination of the camera view vectors, i.e., the camera orientations are measured with noise. The second noise source models imperfections in determining the flare's corresponding pixels between the two cameras. Finally, the 3D position of the flare is estimated by triangulation using the corresponding pixel indices together with the view vectors and FOVs of the cameras. All the processes mentioned so far are repeated for different relative camera placements so that the optimum estimation-error performance is found for the given aircraft and flare trajectories.
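
    A minimal sketch of the triangulation step under these assumptions, using the midpoint method on two viewing rays (the camera positions and flare point below are invented):

        import numpy as np

        def triangulate_midpoint(c1, d1, c2, d2):
            """Midpoint of the shortest segment between rays c1 + t*d1 and c2 + s*d2."""
            d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
            b = c2 - c1
            a = d1 @ d2
            denom = 1.0 - a * a                    # rays must not be parallel
            t = (d1 @ b - a * (d2 @ b)) / denom
            s = (a * (d1 @ b) - d2 @ b) / denom
            return 0.5 * ((c1 + t * d1) + (c2 + s * d2))

        c1 = np.array([0.0, 0.0, 0.0])             # camera positions (m)
        c2 = np.array([100.0, 0.0, 0.0])
        flare = np.array([40.0, 300.0, 120.0])
        est = triangulate_midpoint(c1, flare - c1, c2, flare - c2)   # ~= flare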

  10. Hyperspectral microscopic imaging by multiplex coherent anti-Stokes Raman scattering (CARS)

    NASA Astrophysics Data System (ADS)

    Khmaladze, Alexander; Jasensky, Joshua; Zhang, Chi; Han, Xiaofeng; Ding, Jun; Seeley, Emily; Liu, Xinran; Smith, Gary D.; Chen, Zhan

    2011-10-01

    Coherent anti-Stokes Raman scattering (CARS) microscopy is a powerful technique for imaging the chemical composition of complex samples in biophysics, biology, and materials science. CARS is a four-wave mixing process: the application of a spectrally narrow pump beam and a spectrally wide Stokes beam excites multiple Raman transitions, which are then probed by a probe beam. This generates a coherent, directional CARS signal whose intensity is several orders of magnitude higher than that of spontaneous Raman scattering. Recent advances in the development of ultrafast lasers, as well as photonic crystal fibers (PCF), enable multiplex CARS. In this study, we employed two scanning imaging methods. In the first, detection is performed by a photomultiplier tube (PMT) attached to the spectrometer; acquiring a series of images while tuning the wavelengths between images allows subsequent reconstruction of the spectrum at each image point. The second method detects the CARS spectrum at each point with a cooled charge-coupled device (CCD) camera; coupled with point-by-point scanning, it allows hyperspectral microscopic imaging. We applied this CARS imaging system to study biological samples such as oocytes.

  11. Increasing of visibility on the pedestrian crossing by the additional lighting systems

    NASA Astrophysics Data System (ADS)

    Baleja, Richard; Bos, Petr; Novak, Tomas; Sokansky, Karel; Hanusek, Tomas

    2017-09-01

    Pedestrian crossings are critical places for road accidents between pedestrians and motor vehicles. For this reason, it is very important to pay increased attention when pedestrian crossings are designed, and it is necessary to take into account all factors that may contribute to higher safety. Additional lighting systems for pedestrian crossings are one such factor, and these lighting systems must fulfil the requirements for higher visibility from the point of view of car drivers approaching from both directions. This paper describes the criteria for a suitable additional lighting system on pedestrian crossings: vertical illuminance on the crossing from the driver's view, horizontal illuminance on the crossing, horizontal illuminance both in front of and behind the crossing on the road, and their acceptable ratios. The article also describes the choice of the colour of the light (correlated colour temperature) and its influence on visibility. The article further includes case designs of additional lighting systems for pedestrian crossings, together with measurements of realized systems made with luxmeters and luminance cameras, and their evaluation.

  12. Analysis of calibration accuracy of cameras with different target sizes for large field of view

    NASA Astrophysics Data System (ADS)

    Zhang, Jin; Chai, Zhiwen; Long, Changyu; Deng, Huaxia; Ma, Mengchao; Zhong, Xiang; Yu, Huan

    2018-03-01

    Visual measurement plays an increasingly important role in the fields of aerospace, shipbuilding, and machinery manufacturing, and camera calibration for a large field of view is a critical part of visual measurement. A large-scale target is difficult to produce and its precision cannot be guaranteed, while a small target can be produced with high precision but yields only locally optimal solutions. It is therefore necessary to study the ratio of target size to camera field of view that best ensures the calibration precision required for a wide field of view. In this paper, cameras are calibrated with a series of checkerboard and circular calibration targets of different dimensions, with target-to-field-of-view ratios of 9%, 18%, 27%, 36%, 45%, 54%, 63%, 72%, 81% and 90%. The target is placed at different positions in the camera field to obtain camera parameters for each position. The distribution curves of the mean reprojection error of the reconstructed feature points are then analyzed for the different ratios. The experimental data demonstrate that as the ratio of target size to camera field of view increases, the calibration precision improves accordingly, and the mean reprojection error changes only slightly once the ratio exceeds 45%.

  13. Pedestrian-driver communication and decision strategies at marked crossings.

    PubMed

    Sucha, Matus; Dostal, Daniel; Risser, Ralf

    2017-05-01

    The aim of this work is to describe pedestrian-driver encounters, communication, and decision strategies at marked but unsignalised crossings in urban areas in the Czech Republic and the ways in which the parties involved experience and handle these encounters. A mixed-methods design was used, consisting of focus groups with pedestrians and drivers regarding their subjective views of the situations, on-site observations, camera recordings, speed measurements, the measurement of car and pedestrian densities, and brief on-site interviews with pedestrians. In close correspondence with the literature, our study revealed that the most relevant predictors of pedestrians' and drivers' behaviour at crossings were the densities of car traffic and pedestrian flows and car speed. The factors which influenced pedestrians' wait/go behaviour were: car speed, the distance of the car from the crossing, traffic density, whether there were cars approaching from both directions, various signs given by the driver (eye contact, waving a hand, flashing their lights), and the presence of other pedestrians. The factors influencing drivers' yield/go behaviour were: speed, traffic density, the number of pedestrians waiting to cross, and pedestrians being distracted. A great proportion of drivers (36%) failed to yield to pedestrians at marked crossings. The probability of conflict situations increased with cars travelling at a higher speed, higher traffic density, and pedestrians being distracted by a different activity while crossing. The findings of this study can add to the existing literature by helping to provide an understanding of the perception of encounter situations by the parties involved and the motives lying behind certain aspects of behaviour associated with these encounters. This seems necessary in order to develop suggestions for improvements. For instance, the infrastructure near pedestrian crossings should be designed in such a way as to take proper account of pedestrians' needs to feel safe and comfortable, as well as ensuring their objective safety. Thus, improvements should include measures aimed at reducing the speed of approaching vehicles (e.g. humps, speed cushions, elevated crossings, early yield bars, and narrow lanes), as this would enhance yielding by motor vehicles. Other measures that specifically rely on the subjective perception of different situations by the parties involved include the education and training of drivers, the aim of which is to promote their understanding and appreciation of pedestrians' needs and motives. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Image quality prediction - An aid to the Viking lander imaging investigation on Mars

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Wall, S. D.

    1976-01-01

    Image quality criteria and image quality predictions are formulated for the multispectral panoramic cameras carried by the Viking Mars landers. Image quality predictions are based on expected camera performance, Mars surface radiance, and lighting and viewing geometry (fields of view, Mars lander shadows, solar day-night alternation), and are needed in diagnosis of camera performance, in arriving at a preflight imaging strategy, and revision of that strategy should the need arise. Landing considerations, camera control instructions, camera control logic, aspects of the imaging process (spectral response, spatial response, sensitivity), and likely problems are discussed. Major concerns include: degradation of camera response by isotope radiation, uncertainties in lighting and viewing geometry and in landing site local topography, contamination of camera window by dust abrasion, and initial errors in assigning camera dynamic ranges (gains and offsets).

  15. 31. REAR OF CAR BARN DURING RECONSTRUCTION: Photocopy of July ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    31. REAR OF CAR BARN DURING RECONSTRUCTION: Photocopy of July 1908 photograph showing west rear of powerhouse and car barn. View from the north. - San Francisco Cable Railway, Washington & Mason Streets, San Francisco, San Francisco County, CA

  16. View from northeast of Car Shop showing east wall and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View from northeast of Car Shop showing east wall and north end; also a portion of (east facade) of Foundry building. Structure between houses the boilers and stationary steam engine - East Broad Top Railroad & Coal Company, State Route 994, West of U.S. Route 522, Rockhill Furnace, Huntingdon County, PA

  17. 6. VIEW OF INSIDE OF RAIL CAR CONTAINING GRAPHITE DELIVERED ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. VIEW OF INSIDE OF RAIL CAR CONTAINING GRAPHITE DELIVERED TO BUILDING 444. THE GRAPHITE WAS FORMED INTO MOLDS AND CRUCIBLE FOR USE IN THE FOUNDRY. (1/12/54) - Rocky Flats Plant, Non-Nuclear Production Facility, South of Cottonwood Avenue, west of Seventh Avenue & east of Building 460, Golden, Jefferson County, CO

  18. 4. 4CAR GARAGE. WEST SIDE (RIGHT) SHOWING JUNCTURE WITH SOUTH ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. 4-CAR GARAGE. WEST SIDE (RIGHT) SHOWING JUNCTURE WITH SOUTH WALL (LEFT) OF MECHANIC'S GARAGE (MT-94-F). VIEW TO NORTHEAST. - Holter Hydroelectric Facility, Four-Car Garage, End of Holter Dam Road, Wolf Creek, Lewis and Clark County, MT

  19. Travtek Evaluation Task C3: Camera Car Study

    DOT National Transportation Integrated Search

    1998-11-01

    A "biometric" technology is an automatic method for the identification, or identity verification, of an individual based on physiological or behavioral characteristics. The primary objective of the study summarized in this tech brief was to make reco...

  20. Automatic Parking of Self-Driving CAR Based on LIDAR

    NASA Astrophysics Data System (ADS)

    Lee, B.; Wei, Y.; Guo, I. Y.

    2017-09-01

    To overcome the deficiencies of ultrasonic sensors and cameras, this paper proposes a method of autonomous parking for a self-driving car using an HDL-32E LiDAR. First, the 3-D point cloud data are preprocessed, and the minimum size of a parking space is calculated according to vehicle dynamics. Second, the rapidly-exploring random tree (RRT) algorithm is improved in two respects based on the motion characteristics of an autonomous car, and the parking path is calculated subject to the vehicle's dynamics and collision constraints. In addition, a fuzzy logic controller is used to control the brake and accelerator in order to keep the speed stable. Finally, experiments were conducted in an autonomous car, and the results show that the proposed automatic parking system is feasible and effective.
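
    As a hedged sketch of the baseline algorithm (plain 2-D RRT with goal biasing, not the paper's improved variant, and with a toy circular obstacle standing in for real parking geometry):

        import math, random

        def rrt(start, goal, is_free, bounds, step=0.5, goal_tol=0.5, max_iters=5000):
            """Basic RRT in the plane: grow a tree from start until it reaches goal.

            is_free(p) reports whether point p is collision-free;
            bounds = (xmin, xmax, ymin, ymax) limits the sampling region."""
            nodes, parent = [start], {0: None}
            for _ in range(max_iters):
                sample = goal if random.random() < 0.1 else (
                    random.uniform(bounds[0], bounds[1]),
                    random.uniform(bounds[2], bounds[3]))
                i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
                nx, ny = nodes[i]
                d = math.dist((nx, ny), sample)
                new = sample if d <= step else (
                    nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
                if not is_free(new):
                    continue
                parent[len(nodes)] = i
                nodes.append(new)
                if math.dist(new, goal) < goal_tol:      # goal reached: extract path
                    path, k = [], len(nodes) - 1
                    while k is not None:
                        path.append(nodes[k]); k = parent[k]
                    return path[::-1]
            return None

        # toy scenario: steer around a circular obstacle (geometry illustrative)
        free = lambda p: math.dist(p, (5.0, 5.0)) > 2.0
        path = rrt((0.0, 0.0), (9.0, 9.0), free, (0.0, 10.0, 0.0, 10.0))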

  1. The Identification and Modeling of Visual Cue Usage in Manual Control Task Experiments

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara Townsend; Trejo, Leonard J. (Technical Monitor)

    1999-01-01

    Many fields of endeavor require humans to conduct manual control tasks while viewing a perspective scene. Manual control refers to tasks in which continuous, or nearly continuous, control adjustments are required. Examples include flying an aircraft, driving a car, and riding a bicycle. Perspective scenes can arise through natural viewing of the world, simulation of a scene (as in flight simulators), or through imaging devices (such as the cameras on an unmanned aerospace vehicle). Designers frequently have some degree of control over the content and characteristics of a perspective scene; airport designers can choose runway markings, vehicle designers can influence the size and shape of windows, as well as the location of the pilot, and simulator database designers can choose scene complexity and content. Little theoretical framework exists to help designers determine the answers to questions related to perspective scene content. An empirical approach is most commonly used to determine optimum perspective scene configurations. The goal of the research effort described in this dissertation has been to provide a tool for modeling the characteristics of human operators conducting manual control tasks with perspective-scene viewing. This is done for the purpose of providing an algorithmic, as opposed to empirical, method for analyzing the effects of changing perspective scene content for closed-loop manual control tasks.

  2. The new car following model considering vehicle dynamics influence and numerical simulation

    NASA Astrophysics Data System (ADS)

    Sun, Dihua; Liu, Hui; Zhang, Geng; Zhao, Min

    2015-12-01

    In this paper, the car-following model is investigated by considering vehicle dynamics from a cyber-physical point of view. Driving is a typical cyber-physical process that tightly couples the cyber aspect, the vehicles' information and driving decisions, with the dynamics and physics of the vehicles and the traffic environment. However, the influence of the physical (vehicle) side has been ignored in previous car-following models. In order to describe car-following behavior more realistically, a new car-following model considering vehicle dynamics (D-CFM for short) is proposed. We take the full velocity difference (FVD) car-following model as a case study. The stability condition is derived on the basis of control theory. Analytical results and numerical simulations show that the new model can describe the evolution of traffic congestion. The simulations also show that vehicles exhibit a more realistic acceleration during the starting process than in earlier models.
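
    For reference, the FVD baseline named here is commonly written as (a standard form from the car-following literature, not quoted from this paper):

        \[
          \frac{dv_{n}(t)}{dt} \;=\; \kappa \left[ V\!\big(\Delta x_{n}(t)\big) - v_{n}(t) \right] \;+\; \lambda\, \Delta v_{n}(t),
        \]

    where Δx_n and Δv_n are the headway and velocity difference to the preceding vehicle, V(·) is the optimal-velocity function, and κ and λ are sensitivity gains; linear analysis of this form gives the commonly cited stability condition V'(h) < κ/2 + λ for uniform flow at headway h.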

  3. A direct-view customer-oriented digital holographic camera

    NASA Astrophysics Data System (ADS)

    Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.

    2018-01-01

    In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.

  4. Servo-controlled intravital microscope system

    NASA Technical Reports Server (NTRS)

    Mansour, M. N.; Wayland, H. J.; Chapman, C. P. (Inventor)

    1975-01-01

    A microscope system is described for viewing an area of living body tissue that is moving rapidly, maintaining the same area in the field of view and in focus. A focus-sensing portion of the system includes two video cameras onto which the viewed image is projected, one camera slightly in front of the image plane and the other slightly behind it. A focus-sensing circuit for each camera differentiates certain high-frequency components of the video signal, detects them, and passes them through a low-pass filter to provide a dc focus signal whose magnitude represents the degree of focus. An error signal, equal to the difference between the two focus signals, drives a servo that moves the microscope objective so that an in-focus view is delivered to an image viewing/recording camera.
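
    A minimal sketch of that error signal, using high-frequency image energy as a digital stand-in for the analog differentiate/detect/low-pass chain (the filter choice and the servo gain are assumptions):

        import numpy as np

        def sharpness(img):
            """High-frequency energy: differentiate, rectify, and average (low-pass)."""
            d = np.diff(img.astype(float), axis=1)   # horizontal high-frequency content
            return np.abs(d).mean()

        def focus_error(img_front, img_behind):
            """Positive when the front camera sees the sharper image, negative when the
            rear one does; zero when the true image plane sits midway between them."""
            return sharpness(img_front) - sharpness(img_behind)

        rng = np.random.default_rng(0)
        front, behind = rng.random((480, 640)), rng.random((480, 640))
        correction = -0.01 * focus_error(front, behind)   # illustrative servo gain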

  5. Rear-end vision-based collision detection system for motorcyclists

    NASA Astrophysics Data System (ADS)

    Muzammel, Muhammad; Yusoff, Mohd Zuki; Meriaudeau, Fabrice

    2017-05-01

    In many countries, the motorcyclist fatality rate is much higher than that of other vehicle drivers. Among many other factors, motorcycle rear-end collisions contribute to these rider fatalities. To increase the safety of motorcyclists and reduce their road fatalities, this paper introduces a vision-based rear-end collision detection system. A binary road detection scheme contributes significantly to reducing false detections and helps to achieve reliable results even when shadows and different lane markers are present on the road. The methodology is based on Harris corner detection and the Hough transform. To validate the methodology, two types of dataset are used: (1) self-recorded datasets (obtained by placing a camera at the rear end of a motorcycle) and (2) online datasets (recorded by placing a camera at the front of a car). The method achieved 95.1% accuracy on the self-recorded dataset and gives reliable rear-end vehicle detections under different road scenarios; it also performs well on the online car datasets. The proposed technique's high detection accuracy with a monocular camera, coupled with its low computational complexity, makes it a suitable candidate for a motorbike rear-end collision detection system.
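
    This is not the authors' full pipeline, but the two primitives it names are available directly in OpenCV; a minimal sketch (the frame is synthetic and the thresholds illustrative):

        import cv2
        import numpy as np

        # placeholder frame; in practice this is a rear-view video frame
        frame = np.zeros((240, 320, 3), dtype=np.uint8)
        cv2.line(frame, (40, 239), (160, 120), (255, 255, 255), 2)   # fake lane marking
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # road/lane boundaries: Hough transform on an edge map
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                                minLineLength=30, maxLineGap=10)

        # corner-rich regions as candidate vehicle evidence (Harris detector)
        corners = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
        vehicle_mask = corners > 0.01 * corners.max()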

  6. Hyperspectral Image-Based Night-Time Vehicle Light Detection Using Spectral Normalization and Distance Mapper for Intelligent Headlight Control

    PubMed Central

    Kim, Heekang; Kwon, Soon; Kim, Sungho

    2016-01-01

    This paper proposes a vehicle light detection method using a hyperspectral camera instead of a Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) camera for adaptive car headlamp control. To apply Intelligent Headlight Control (IHC), the vehicle headlights need to be detected. Headlights comprise a variety of lighting sources, such as Light-Emitting Diodes (LEDs), High-Intensity Discharge (HID) lamps, and halogen lamps; rear lamps likewise use LED and halogen sources. This paper builds on recent research in IHC. Some problems exist in the detection of headlights from CCD or CMOS images, such as the erroneous detection of street lights, sign lights, and the reflection plate of the ego-car. To solve these problems, this study uses hyperspectral images, because they have hundreds of bands and provide more information than a CCD or CMOS camera. Recent methods for detecting headlights use the Spectral Angle Mapper (SAM), the Spectral Correlation Mapper (SCM), and the Euclidean Distance Mapper (EDM). The experimental results highlight the feasibility of the proposed method for three types of lights (LED, HID, and halogen). PMID:27399720

  7. 1-kHz two-dimensional coherent anti-Stokes Raman scattering (2D-CARS) for gas-phase thermometry.

    PubMed

    Miller, Joseph D; Slipchenko, Mikhail N; Mance, Jason G; Roy, Sukesh; Gord, James R

    2016-10-31

    Two-dimensional gas-phase coherent anti-Stokes Raman scattering (2D-CARS) thermometry is demonstrated at 1 kHz in a heated jet. A hybrid femtosecond/picosecond CARS configuration is used in a two-beam phase-matching arrangement with a 100-femtosecond pump/Stokes pulse and a 107-picosecond probe pulse. The femtosecond pulse is generated using a mode-locked oscillator and regenerative amplifier that is synchronized to a separate picosecond oscillator and burst-mode amplifier. The CARS signal is spectrally dispersed in a custom imaging spectrometer and detected using a high-speed camera with image intensifier. 1-kHz, single-shot planar measurements at room temperature exhibit error of 2.6% and shot-to-shot variations of 2.6%. The spatial variation in measured temperature is 9.4%. 2D-CARS temperature measurements are demonstrated in a heated O2 jet to capture the spatiotemporal evolution of the temperature field.

  8. Systems and methods for maintaining multiple objects within a camera field-of-view

    DOEpatents

    Gans, Nicholas R.; Dixon, Warren

    2016-03-15

    In one embodiment, a system and method for maintaining objects within a camera field of view include identifying constraints to be enforced, each constraint relating to an attribute of the viewed objects; identifying a priority rank for the constraints, such that more important constraints have a higher priority than less important constraints; and determining the set of solutions that satisfy the constraints in the order of their priority rank, such that solutions that satisfy lower-ranking constraints are considered viable only if they also satisfy any higher-ranking constraints, each solution providing an indication of how to control the camera to maintain the objects within the camera field of view.
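
    One plausible reading of that lexicographic scheme as code (a sketch, not the patented method; the candidate controls and predicates are toys):

        def prioritized_solutions(candidates, constraints):
            """Filter candidate camera controls through constraints in priority order.

            constraints: list of predicates, highest priority first; solutions that
            satisfy a lower-ranking constraint stay viable only if they also satisfy
            every higher-ranking constraint already applied."""
            viable = list(candidates)
            for satisfies in constraints:
                narrowed = [c for c in viable if satisfies(c)]
                if narrowed:           # apply lower-priority filtering only if survivors exist
                    viable = narrowed
            return viable

        # toy example: pan commands scored against two viewing constraints
        cands = [-2.0, -0.5, 0.0, 0.5, 2.0]
        keep_all_in_fov = lambda pan: abs(pan) <= 1.0   # hypothetical hard constraint
        small_motion = lambda pan: abs(pan) <= 0.5      # lower-priority preference
        best = prioritized_solutions(cands, [keep_all_in_fov, small_motion])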

  9. Use and validation of mirrorless digital single lens reflex camera for recording of vitreoretinal surgeries in high definition

    PubMed Central

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    Purpose: The purpose of this study is to describe the use of a commercial digital single lens reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited, and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing the mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching. PMID:29283133
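    The abstract reports a noninferiority comparison of mean scores with P values but does not name the statistical test; as a sketch only, a paired t-test over hypothetical per-sequence ratings:

    ```python
    # Illustrative comparison of paired 0-10 ratings from the two cameras.
    # The specific test and the ratings below are assumptions for
    # demonstration, not the paper's data or method.
    from scipy import stats

    dslr = [9, 8, 9, 9, 8, 9, 8, 9]   # hypothetical per-sequence ratings
    ccd  = [8, 8, 9, 8, 8, 8, 8, 9]

    t, p = stats.ttest_rel(dslr, ccd)
    print(f"mean DSLR={sum(dslr)/len(dslr):.2f}, "
          f"mean CCD={sum(ccd)/len(ccd):.2f}, p={p:.3f}")
    ```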

  10. Use and validation of mirrorless digital single lens reflex camera for recording of vitreoretinal surgeries in high definition.

    PubMed

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    The purpose of this study is to describe the use of a commercial digital single lens reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charge-coupled device (CCD) camera. Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited, and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing the mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching.

  11. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.

    PubMed

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-02-03

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of the gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error in accurately detecting the pupil center and corneal reflection center increases in a car environment due to various environmental light changes, reflections on the glasses surface, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor that considers driver head and eye movement and does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than previous gaze classification methods.
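    A minimal sketch of a deep gaze-zone classifier in the spirit described, assuming PyTorch; the architecture, the 64x64 NIR crop size, and the nine gaze zones are illustrative assumptions, not the paper's network:

    ```python
    # Toy CNN that maps NIR eye/face crops to discrete gaze zones.
    import torch
    import torch.nn as nn

    class GazeNet(nn.Module):
        def __init__(self, n_zones: int = 9):   # n_zones is an assumption
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 16 * 16, n_zones)

        def forward(self, x):                   # x: (batch, 1, 64, 64) crops
            f = self.features(x)
            return self.head(f.flatten(1))

    logits = GazeNet()(torch.randn(4, 1, 64, 64))
    print(logits.shape)                          # torch.Size([4, 9])
    ```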

  12. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor

    PubMed Central

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-01-01

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of the gaze region can provide valuable information regarding a driver’s point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error in accurately detecting the pupil center and corneal reflection center increases in a car environment due to various environmental light changes, reflections on the glasses surface, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor that considers driver head and eye movement and does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than previous gaze classification methods. PMID:29401681

  13. Multi-color pyrometry imaging system and method of operating the same

    DOEpatents

    Estevadeordal, Jordi; Nirmalan, Nirm Velumylum; Tralshawala, Nilesh; Bailey, Jeremy Clyde

    2017-03-21

    A multi-color pyrometry imaging system for a high-temperature asset includes at least one viewing port in optical communication with at least one high-temperature component of the high-temperature asset. The system also includes at least one camera device in optical communication with the at least one viewing port. The at least one camera device includes a camera enclosure and at least one camera aperture defined in the camera enclosure. The at least one camera aperture is in optical communication with the at least one viewing port. The at least one camera device also includes a multi-color filtering mechanism coupled to the enclosure. The multi-color filtering mechanism is configured to sequentially transmit photons within a first predetermined wavelength band and transmit photons within a second predetermined wavelength band that is different from the first predetermined wavelength band.
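    The two sequential wavelength bands enable ratio (two-color) pyrometry: under a graybody assumption, the band-intensity ratio determines temperature independently of emissivity. A worked example with illustrative wavelengths and ratio:

    ```python
    # Two-color pyrometry under the Wien approximation:
    # I ~ lam^-5 * exp(-C2 / (lam * T)) for each band.
    import math

    C2 = 1.4388e-2                    # second radiation constant, m*K
    lam1, lam2 = 0.65e-6, 0.75e-6     # hypothetical band centers, m
    ratio = 0.397                     # measured I(lam1)/I(lam2), illustrative

    # Solving the intensity ratio for temperature:
    T = C2 * (1 / lam1 - 1 / lam2) / (5 * math.log(lam2 / lam1)
                                      - math.log(ratio))
    print(f"inferred temperature: {T:.0f} K")   # ~1800 K for these values
    ```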

  14. Ultraviolet Viewing with a Television Camera.

    ERIC Educational Resources Information Center

    Eisner, Thomas; And Others

    1988-01-01

    Reports on a portable video color camera that is fully suited for seeing ultraviolet images and offers some expanded viewing possibilities. Discusses the basic technique, specialized viewing, and the instructional value of this system of viewing reflectance patterns of flowers and insects that are invisible to the unaided eye. (CW)

  15. An attentive multi-camera system

    NASA Astrophysics Data System (ADS)

    Napoletano, Paolo; Tisato, Francesco

    2014-03-01

    Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be reviewed by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying videos from each camera of the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of a video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that tries to use a model of human visual attention for the dynamic selection of the camera view in a multi-camera system. The proposed method has been evaluated in a given scenario and has demonstrated its effectiveness with respect to other methods and a manually generated ground truth. The effectiveness has been evaluated in terms of the number of correct best views generated by the method with respect to the camera views manually selected by a human operator.
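    A sketch of the selection loop: score each camera's current frame with an attention model and display the top view. The variance-based score here is a crude bottom-up stand-in for the paper's combined top-down/bottom-up model:

    ```python
    # Toy best-view selection: highest-saliency frame wins.
    import numpy as np

    def saliency_score(frame: np.ndarray) -> float:
        # Variance of intensity as a toy proxy for bottom-up saliency.
        return float(np.var(frame))

    def best_view(frames: dict) -> str:
        return max(frames, key=lambda cam: saliency_score(frames[cam]))

    rng = np.random.default_rng(0)
    frames = {f"cam{i}": rng.random((120, 160)) * (1 + i) for i in range(4)}
    print("selected:", best_view(frames))
    ```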

  16. Features of the Vision of Elderly Pedestrians when Crossing a Road.

    PubMed

    Matsui, Yasuhiro; Oikawa, Shoko; Aoki, Yoshio; Sekine, Michiaki; Mitobe, Kazutaka

    2014-11-01

    The present study clarifies the mechanism by which an accident occurs when an elderly pedestrian crosses a road in front of a car, focusing on features of the central and peripheral vision of elderly pedestrians who are judging when it is safe to cross the road. For the pedestrian's central visual field, we investigated the effect of age on the timing judgment using an actual car. The results for daytime conditions indicate that the elderly pedestrians tended to make later judgments of when they crossed the road from the right side of the driver's view at high car velocities. At night, for a car with its headlights on high beam, the average car-pedestrian distances of elderly pedestrians on the left side of the driver's view were significantly longer than those of young pedestrians at velocities of 20 and 40 km/h. The eyesight of the elderly pedestrians during the day did not affect the timing judgment of crossing a road. At night, for a car with its headlights on either high or low beam, the average car-pedestrian distances of elderly pedestrians having good eyesight were longer than those of elderly pedestrians having poor eyesight, for all car velocities. The color of the car body in the central visual field did not affect the timing judgment of elderly pedestrians crossing the road. Meanwhile, the car-body color in the elderly pedestrian's peripheral vision strongly affected the pedestrian's awareness of the car.

  17. Towards next generation 3D cameras

    NASA Astrophysics Data System (ADS)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 microns resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.
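    For the time-of-flight cameras mentioned, depth follows directly from the round-trip travel time of light; a worked example with an illustrative measurement:

    ```python
    # Time-of-flight depth: half the round-trip travel distance of light.
    C = 299_792_458.0                  # speed of light, m/s

    round_trip_s = 6.67e-9             # hypothetical measured round-trip time
    depth_m = C * round_trip_s / 2
    print(f"depth = {depth_m:.3f} m")  # ~1 m; 100-micron depth resolution
                                       # implies sub-picosecond timing
    ```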

  18. Esthetic smile preferences and the orientation of the maxillary occlusal plane.

    PubMed

    Kattadiyil, Mathew T; Goodacre, Charles J; Naylor, W Patrick; Maveli, Thomas C

    2012-12-01

    The anteroposterior orientation of the maxillary occlusal plane has an important role in the creation, assessment, and perception of an esthetic smile. However, the effect of the angle at which this plane is visualized (the viewing angle) in a broad smile has not been quantified. The purpose of this study was to assess the esthetic preferences of dental professionals and nondentists by using 3 viewing angles of the anteroposterior orientation of the maxillary occlusal plane. After Institutional Review Board approval, standardized digital photographic images of the smiles of 100 participants were recorded by simultaneously triggering 3 cameras set at different viewing angles. The top camera was positioned 10 degrees above the occlusal plane (camera #1, Top view); the center camera was positioned at the level of the occlusal plane (camera #2, Center view); and the bottom camera was located 10 degrees below the occlusal plane (camera #3, Bottom view). Forty-two dental professionals and 31 nondentists (persons from the general population) independently evaluated digital images of each participant's smile captured from the Top view, Center view, and Bottom view. The 73 evaluators were asked individually through a questionnaire to rank the 3 photographic images of each patient as 'most pleasing,' 'somewhat pleasing,' or 'least pleasing,' with most pleasing being the most esthetic view and the preferred orientation of the occlusal plane. The resulting esthetic preferences were statistically analyzed by using the Friedman test. In addition, the participants were asked to rank their own images from the 3 viewing angles as 'most pleasing,' 'somewhat pleasing,' and 'least pleasing.' The 73 evaluators found statistically significant differences in the esthetic preferences between the Top and Bottom views and between the Center and Bottom views (P<.001). No significant differences were found between the Top and Center views. The Top position was marginally preferred over the Center, and both were significantly preferred over the Bottom position. When the participants evaluated their own smiles, a significantly greater number (P< .001) preferred the Top view over the Center or the Bottom views. No significant differences were found in preferences based on the demographics of the evaluators when comparing age, education, gender, profession, and race. The esthetic preference for the maxillary occlusal plane was influenced by the viewing angle with the higher (Top) and center views preferred by both dental and nondental evaluators. The participants themselves preferred the higher view of their smile significantly more often than the center or lower angle views (P<.001). Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
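    The Friedman test used on the three related rankings (Top, Center, and Bottom views rated by the same evaluators) is available in SciPy; the rating vectors below are hypothetical:

    ```python
    # Friedman test over three repeated-measures conditions.
    from scipy import stats

    top    = [3, 3, 2, 3, 3, 2, 3, 3]   # hypothetical per-evaluator scores
    center = [2, 2, 3, 2, 2, 3, 2, 2]
    bottom = [1, 1, 1, 1, 1, 1, 1, 1]

    chi2, p = stats.friedmanchisquare(top, center, bottom)
    print(f"chi-square={chi2:.2f}, p={p:.4f}")
    ```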

  19. Fast-camera imaging on the W7-X stellarator

    NASA Astrophysics Data System (ADS)

    Ballinger, S. B.; Terry, J. L.; Baek, S. G.; Tang, K.; Grulke, O.

    2017-10-01

    Fast cameras recording in the visible range have been used to study filamentary ("blob") edge turbulence in tokamak plasmas, revealing that emissive filaments aligned with the magnetic field can propagate perpendicular to it at speeds on the order of 1 km/s in the SOL or private flux region. The motion of these filaments has been studied in several tokamaks, including MAST, NSTX, and Alcator C-Mod. Filaments were also observed in the W7-X stellarator using fast cameras during its initial run campaign. For W7-X's upcoming 2017-18 run campaign, we have installed a Phantom V710 fast camera with a view of the machine cross section and part of a divertor module in order to continue studying edge and divertor filaments. The view is coupled to the camera via a coherent fiber bundle. The Phantom camera is able to record at up to 400,000 frames per second and has a spatial resolution of roughly 2 cm in the view. A beam-splitter is used to share the view with a slower machine-protection camera. Stepping-motor actuators tilt the beam-splitter about two orthogonal axes, making it possible to frame user-defined sub-regions anywhere within the view. The diagnostic has been prepared to be remotely controlled via MDSplus. The MIT portion of this work is supported by US DOE award DE-SC0014251.

  20. Using the Theory of Games to Modelling the Equipment and Prices of Car Parking

    NASA Astrophysics Data System (ADS)

    Parkitny, Waldemar

    2017-10-01

    In large cities there are two serious problems connected with the increasing number of cars. The first is the congestion of vehicle traffic. The second is the shortage of car parks, especially in city centres. City authorities and municipal street managers introduce limitations on vehicle movement and reduce the number of car parks to minimize street crowding. This seems logical, but it is only one point of view. From another point of view, municipal governments should aim to improve the occupants' quality of life and secure the financial income needed to cover indispensable expenses. From this point of view, municipal car parks are a needed and profit-bringing element of municipal infrastructure. Cracow, one of the largest cities in Poland (about 760 thousand occupants, with the Cracovian agglomeration at about 1.4 million persons), was chosen as the object of the investigation. The zone of paid parking in Cracow, administered by a company belonging to the city, comprised 28,837 parking places as of 28.01.2016. The zone contains assigned car parks as well as parking places along curbs and on pavements, and operates from Monday to Friday, from 10.00 to 20.00. Assuming the car parks are used at only 50% of capacity and a fare of about 0.7 euro per hour, the income amounts to about 740,000 euro/month. The purpose of the investigation was to identify the technical parameters of car parks preferred by drivers. The investigation was carried out using questionnaires. A mathematical model of the competition was then built on the basis of game theory. The strategies of "Player 1" were the prices and technical equipment of car parks and parking places lying in the paid-parking zone administered by the municipal company. The strategies of "Player 2" were the prices and technical equipment of car parks belonging to private owners and two commercial centres in the city centre. It was assumed that, from the city's point of view, there is a single rival, independently of the actual ownership of the private car parks, whose strategy the city cannot influence; this is consistent with the basic assumptions of game theory. Both players compete for consumers, i.e., drivers using car parks, through parameters such as price, distance from the destination, and car-park equipment. The resulting model allows the best strategy to be identified; knowing that strategy, one can set the prices and equipment of car parks. The model may be used by municipal governments and by companies that administer car parks. However, one should remember the limitations that occur in reality, e.g., legal restrictions on maximum parking prices. The increasing pressure from city authorities in Poland to change those regulations, and examples of solutions adopted elsewhere, e.g., in the USA, where parking prices vary with demand across car parks and hours, give hope that the proposed model can soon be applied in practice.
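    The game-theoretic skeleton can be sketched as a payoff matrix over price/equipment strategies with a maximin (security-level) choice; the matrix values below are illustrative, whereas the paper derives its matrix from questionnaire data:

    ```python
    # Toy two-player game: city versus private operators.
    import numpy as np

    # Rows: city strategies; columns: private-operator strategies.
    # Entries: city's share of parking demand (hypothetical numbers).
    payoff = np.array([
        [0.55, 0.40, 0.35],
        [0.60, 0.50, 0.45],
        [0.50, 0.45, 0.55],
    ])

    row_security = payoff.min(axis=1)          # city's worst case per strategy
    best_city = int(np.argmax(row_security))   # maximin (security-level) choice
    print(f"city's maximin strategy: row {best_city}, "
          f"guaranteed share {row_security[best_city]:.2f}")
    ```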

  1. EXTERIOR VIEW WITH HISTORIC LOCOMOTIVES, COAL AND PASSENGER CARS INCLUDING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    EXTERIOR VIEW WITH HISTORIC LOCOMOTIVES, COAL AND PASSENGER CARS INCLUDING THE WOODWARD IRON COMPANY NO. 38 LOCOMOTIVE AND TENDER LOCATED IN THE HEART OF DIXIE MUSEUM'S POWELL AVENUE YARD AND SOUTHERN RAILROAD BOXCARS ON ACTIVE TRACKS OF BIRMINGHAM'S RAILROAD RESERVATION. IN BACKGROUND AT RIGHT AND CENTER IS THE BIRMINGHAM CITY CENTER. - Heart of Dixie Railroad, Rolling Stock, 1800 Block Powell Avenue, Birmingham, Jefferson County, AL

  2. View of Crew Commander Henry Hartsfield Jr. loading film into IMAX camera

    NASA Image and Video Library

    1984-09-08

    41D-11-004 (8 September 1984) --- View of Crew Commander Henry Hartsfield Jr. loading film into the IMAX camera during the 41-D mission. The camera is floating in front of the middeck lockers. Above it is a sticker of the University of Kansas mascot, the Jayhawk.

  3. Barrier Coverage for 3D Camera Sensor Networks

    PubMed Central

    Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi; Ji, Peng; Chu, Hao

    2017-01-01

    Barrier coverage, an important research area with respect to camera sensor networks, uses a number of camera sensors to detect intruders that pass through the barrier area. Existing works on barrier coverage, such as local face-view barrier coverage and full-view barrier coverage, typically treat each intruder as a point. However, crucial features (e.g., size) of the intruder should be taken into account in real-world applications. In this paper, we propose a realistic resolution criterion based on a three-dimensional (3D) sensing model of a camera sensor for capturing the intruder’s face. Based on the new resolution criterion, we study the barrier coverage of a feasible deployment strategy in camera sensor networks. Performance results demonstrate that our barrier coverage, with its more practical considerations, is capable of providing a desirable surveillance level. Moreover, compared with local face-view barrier coverage and full-view barrier coverage, our barrier coverage is more reasonable and closer to reality. To the best of our knowledge, our work is the first to propose barrier coverage for 3D camera sensor networks. PMID:28771167
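    A size-aware resolution check in the spirit of the paper's 3D sensing model: a face counts as covered only if it lies in the camera's field of view and subtends enough pixels. All parameters below are illustrative assumptions:

    ```python
    # Toy coverage test: FOV membership plus pixels-on-target threshold.
    def face_resolved(distance_m, face_width_m=0.16,
                      focal_px=800.0, half_fov_deg=30.0,
                      bearing_deg=10.0, min_pixels=32):
        in_fov = abs(bearing_deg) <= half_fov_deg
        pixels_on_face = focal_px * face_width_m / distance_m  # pinhole model
        return in_fov and pixels_on_face >= min_pixels

    print(face_resolved(3.0))   # True: 800 * 0.16 / 3 = ~43 px
    print(face_resolved(6.0))   # False: ~21 px, below the threshold
    ```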

  4. Barrier Coverage for 3D Camera Sensor Networks.

    PubMed

    Si, Pengju; Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi; Ji, Peng; Chu, Hao

    2017-08-03

    Barrier coverage, an important research area with respect to camera sensor networks, uses a number of camera sensors to detect intruders that pass through the barrier area. Existing works on barrier coverage, such as local face-view barrier coverage and full-view barrier coverage, typically treat each intruder as a point. However, crucial features (e.g., size) of the intruder should be taken into account in real-world applications. In this paper, we propose a realistic resolution criterion based on a three-dimensional (3D) sensing model of a camera sensor for capturing the intruder's face. Based on the new resolution criterion, we study the barrier coverage of a feasible deployment strategy in camera sensor networks. Performance results demonstrate that our barrier coverage, with its more practical considerations, is capable of providing a desirable surveillance level. Moreover, compared with local face-view barrier coverage and full-view barrier coverage, our barrier coverage is more reasonable and closer to reality. To the best of our knowledge, our work is the first to propose barrier coverage for 3D camera sensor networks.

  5. Interior view showing south entrance; camera facing south. Mare ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Interior view showing south entrance; camera facing south. - Mare Island Naval Shipyard, Machine Shop, California Avenue, southwest corner of California Avenue & Thirteenth Street, Vallejo, Solano County, CA

  6. HDEV Flight Assembly

    NASA Image and Video Library

    2014-05-07

    View of the High Definition Earth Viewing (HDEV) flight assembly installed on the exterior of the Columbus European Laboratory module. The image was released by an astronaut on Twitter. The High Definition Earth Viewing (HDEV) experiment places four commercially available HD cameras on the exterior of the space station and uses them to stream live video of Earth for viewing online. The cameras are enclosed in a temperature-specific housing and are exposed to the harsh radiation of space. Analysis of the effect of space on the video quality over the time HDEV is operational may help engineers decide which cameras are the best types to use on future missions. High school students helped design some of the cameras' components through the High Schools United with NASA to Create Hardware (HUNCH) program, and student teams operate the experiment.

  7. Dual-Pump Coherent Anti-Stokes Raman Scattering Temperature and CO2 Concentration Measurements

    NASA Technical Reports Server (NTRS)

    Lucht, Robert P.; Velur-Natarajan, Viswanathan; Carter, Campbell D.; Grinstead, Keith D., Jr.; Gord, James R.; Danehy, Paul M.; Fiechtner, G. J.; Farrow, Roger L.

    2003-01-01

    Measurements of temperature and CO2 concentration using dual-pump coherent anti-Stokes Raman scattering (CARS) are described. The measurements were performed in laboratory flames, in a room-temperature gas cell, and on an engine test stand at the U.S. Air Force Research Laboratory, Wright-Patterson Air Force Base. A modeless dye laser, a single-mode Nd:YAG laser, and an unintensified back-illuminated charge-coupled device digital camera were used for these measurements. The CARS measurements were performed on a single-laser-shot basis. The standard deviations of the temperatures and CO2 mole fractions determined from single-shot dual-pump CARS spectra in steady laminar propane/air flames were approximately 2 and 10% of the mean values of approximately 2000 K and 0.10, respectively. The precision and accuracy of single-shot temperature measurements obtained from the nitrogen part of the dual-pump CARS system were investigated in detail in near-adiabatic hydrogen/air/CO2 flames. The precision of the CARS temperature measurements was found to be comparable to the best results reported in the literature for conventional two-laser, single-pump CARS. The application of dual-pump CARS to single-shot measurements in a swirl-stabilized combustor fueled with JP-8 was also demonstrated.

  8. VIEW OF EAST ELEVATION; CAMERA FACING WEST Mare Island ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF EAST ELEVATION; CAMERA FACING WEST - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA

  9. VIEW OF SOUTH ELEVATION; CAMERA FACING NORTH Mare Island ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF SOUTH ELEVATION; CAMERA FACING NORTH - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA

  10. VIEW OF WEST ELEVATION: CAMERA FACING NORTHEAST Mare Island ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF WEST ELEVATION: CAMERA FACING NORTHEAST - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA

  11. VIEW OF NORTH ELEVATION; CAMERA FACING SOUTH Mare Island ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF NORTH ELEVATION; CAMERA FACING SOUTH - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA

  12. View of south elevation; camera facing northeast. Mare Island ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View of south elevation; camera facing northeast. - Mare Island Naval Shipyard, Hospital Headquarters, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA

  13. View of north elevation; camera facing southeast. Mare Island ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View of north elevation; camera facing southeast. - Mare Island Naval Shipyard, Hospital Headquarters, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA

  14. Contextual view of building 733; camera facing southeast. Mare ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Contextual view of building 733; camera facing southeast. - Mare Island Naval Shipyard, WAVES Officers Quarters, Cedar Avenue, west side between Tisdale Avenue & Eighth Street, Vallejo, Solano County, CA

  15. Oblique view of southeast corner; camera facing northwest. Mare ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Oblique view of southeast corner; camera facing northwest. - Mare Island Naval Shipyard, Defense Electronics Equipment Operating Center, I Street, terminus west of Cedar Avenue, Vallejo, Solano County, CA

  16. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.

    PubMed

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H

    2015-02-01

    Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  17. Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor.

    PubMed

    Hoang, Toan Minh; Baek, Na Rae; Cho, Se Woon; Kim, Ki Wan; Park, Kang Ryoung

    2017-10-28

    Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, Caltech dataset, Santiago Lanes dataset (SLD), and Road Marking dataset, showed that our method outperformed conventional lane detection methods.
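    A minimal sketch of the combination described, line segments scored by a fuzzy system, with triangular memberships and a single angle/length rule standing in for the paper's rule base:

    ```python
    # Toy fuzzy scoring of detected line segments as lane-marker candidates.
    def tri(x, a, b, c):
        """Triangular fuzzy membership over (a, b, c)."""
        return max(0.0, min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)))

    def lane_likelihood(angle_deg, length_px):
        # Lane markers seen from a dashcam are roughly diagonal and long;
        # both memberships below are illustrative assumptions.
        angle_ok = max(tri(angle_deg, 20, 40, 70), tri(angle_deg, 110, 140, 160))
        length_ok = min(1.0, length_px / 80.0)
        return min(angle_ok, length_ok)        # fuzzy AND (minimum)

    segments = [(35.0, 120.0), (90.0, 200.0), (135.0, 60.0)]  # (angle, length)
    for ang, ln in segments:
        print(f"angle={ang:5.1f}, length={ln:5.1f} "
              f"-> score={lane_likelihood(ang, ln):.2f}")
    ```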

  18. a Novel Approach to Camera Calibration Method for Smart Phones Under Road Environment

    NASA Astrophysics Data System (ADS)

    Lee, Bijun; Zhou, Jian; Ye, Maosheng; Guo, Yuan

    2016-06-01

    Monocular vision-based lane departure warning systems have been increasingly used in advanced driver assistance systems (ADAS). Based on lane mark detection and identification, we propose an automatic and efficient camera calibration method for smart phones. First, we detect the lane marker features in perspective space and calculate the edges of the lane markers in image sequences. Second, because the widths of the lane markers and road lanes are fixed under a standard structured road environment, we can automatically build a transformation matrix between the perspective space and 3D space and obtain a local map in the vehicle coordinate system. In order to verify the validity of this method, we installed a smart phone in the `Tuzhi' self-driving car of Wuhan University and recorded more than 100 km of image data on roads in Wuhan. According to the results, we can calculate the positions of lane markers accurately enough for the self-driving car to run smoothly on the road.
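    The core calibration step can be sketched as a four-point homography between image coordinates on the lane boundaries and their metric road-plane positions, which the known lane width fixes; the pixel coordinates and the 3.5 m width below are illustrative:

    ```python
    # Perspective-to-road homography from four lane-boundary correspondences.
    import cv2
    import numpy as np

    # Image corners of a lane segment (px) and their road-plane positions (m).
    img_pts  = np.float32([[420, 700], [860, 700], [560, 420], [720, 420]])
    road_pts = np.float32([[0.0, 0.0], [3.5, 0.0], [0.0, 20.0], [3.5, 20.0]])

    H = cv2.getPerspectiveTransform(img_pts, road_pts)
    pt = cv2.perspectiveTransform(np.float32([[[640, 560]]]), H)
    print("road coordinates (m):", pt.ravel())
    ```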

  19. The Environmental Impact of Cars: Children's Ideas and Reasoning.

    ERIC Educational Resources Information Center

    Stanisstreet, Martin; Boyes, Edward

    1997-01-01

    Reports on the results of a questionnaire administered to 14- and 15-year-old students (n=1637) in 25 British schools concerning their views on how car exhaust emissions affect global environmental problems. Children's beliefs discussed include the connection of car exhaust to global warming, the greenhouse effect, acid rain, and the ozone layer.…

  20. Interior view of second floor sleeping area; camera facing south. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Interior view of second floor sleeping area; camera facing south. - Mare Island Naval Shipyard, Marine Barracks, Cedar Avenue, west side between Twelfth & Fourteenth Streets, Vallejo, Solano County, CA

  1. Quasi-microscope concept for planetary missions.

    PubMed

    Huck, F O; Arvidson, R E; Burcher, E E; Giat, O; Wall, S D

    1977-09-01

    Viking lander cameras have returned stereo and multispectral views of the Martian surface with a resolution that approaches 2 mm/lp in the near field. A two-orders-of-magnitude increase in resolution could be obtained for collected surface samples by augmenting these cameras with auxiliary optics that would neither impose special camera design requirements nor limit the cameras' field of view of the terrain. Quasi-microscope images would provide valuable data on the physical and chemical characteristics of planetary regoliths.

  2. MTR STACK, TRA710, CONTEXTUAL VIEW, CAMERA FACING SOUTH. PERIMETER SECURITY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    MTR STACK, TRA-710, CONTEXTUAL VIEW, CAMERA FACING SOUTH. PERIMETER SECURITY FENCE AND SECURITY LIGHTING IN VIEW AT LEFT. INL NEGATIVE NO. HD52-1-1. Mike Crane, Photographer, 5/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  3. 1. VIEW OF ARVFS BUNKER TAKEN FROM GROUND ELEVATION. CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. VIEW OF ARVFS BUNKER TAKEN FROM GROUND ELEVATION. CAMERA FACING NORTH. VIEW SHOWS PROFILE OF BUNKER IN RELATION TO NATURAL GROUND ELEVATION. TOP OF BUNKER HAS APPROXIMATELY THREE FEET OF EARTH COVER. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  4. View of camera station located northeast of Building 70022, facing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View of camera station located northeast of Building 70022, facing northwest - Naval Ordnance Test Station Inyokern, Randsburg Wash Facility Target Test Towers, Tower Road, China Lake, Kern County, CA

  5. Interior view of second floor lobby; camera facing south. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Interior view of second floor lobby; camera facing south. - Mare Island Naval Shipyard, Hospital Headquarters, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA

  6. Interior view of second floor space; camera facing southwest. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Interior view of second floor space; camera facing southwest. - Mare Island Naval Shipyard, Hospital Ward, Johnson Lane, west side at intersection of Johnson Lane & Cossey Street, Vallejo, Solano County, CA

  7. Interior view of north wing, south wall offices; camera facing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Interior view of north wing, south wall offices; camera facing south. - Mare Island Naval Shipyard, Smithery, California Avenue, west side at California Avenue & Eighth Street, Vallejo, Solano County, CA

  8. Situational Awareness from a Low-Cost Camera System

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing coverage without ignoring surveillance areas, which can happen when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.

  9. Low-cost thermo-electric infrared FPAs and their automotive applications

    NASA Astrophysics Data System (ADS)

    Hirota, Masaki; Ohta, Yoshimi; Fukuyama, Yasuhiro

    2008-04-01

    This paper describes three low-cost infrared focal plane arrays (FPAs) having 1,536, 2,304, and 10,800 elements, together with experimental vehicle systems. They have low-cost potential because each element consists of p-n polysilicon thermocouples, which allows the use of the low-cost ultra-fine microfabrication technology commonly employed in conventional semiconductor manufacturing processes. To increase the responsivity of the FPA, we have developed a precisely patterned Au-black absorber that has a high infrared absorptivity of more than 90%. The FPA having 2,304 elements achieved a high responsivity of 4,300 V/W. In order to reduce package cost, we developed a vacuum-sealed package integrated with a molded ZnS lens. The camera aimed at temperature measurement of the passenger cabin is a compact and lightweight device that measures 45 x 45 x 30 mm and weighs 190 g. The camera achieves a noise-equivalent temperature difference (NETD) of less than 0.7°C from 0 to 40°C. In this paper, we also present several experimental systems that use infrared cameras. One experimental system is a blind-spot pedestrian warning system that employs four infrared cameras. It can detect the infrared radiation emitted from a human body and alerts the driver when a pedestrian is in a blind spot. The system can also prevent the vehicle from moving in the direction of the pedestrian. Another system uses a visible-light camera and infrared sensors to detect the presence of a pedestrian in a rear blind spot and alerts the driver. The third system is a new type of human-machine interface system that enables the driver to control the car's audio system without letting go of the steering wheel. Uncooled infrared cameras are still costly, which limits their automotive use to high-end luxury cars at present. To promote widespread use of IR imaging sensors in vehicles, we need to reduce their cost further.

  10. Analysis of car-to-bicycle approach patterns for developing active safety devices.

    PubMed

    Matsui, Yasuhiro; Oikawa, Shoko; Hitosugi, Masahito

    2016-05-18

    To reduce the severity of injuries and the number of cyclist deaths in traffic accidents, active safety devices providing cyclist detection are considered to be effective countermeasures. The features of car-to-bicycle collisions need to be known in detail to develop such safety devices. This study investigated near-miss situations captured by drive recorders installed in passenger cars. Because similarities in the approach patterns between near-miss incidents and real-world fatal cyclist accidents in Japan were confirmed, we analyzed 229 near-miss incidents in which video captured bicycles crossing the road in front of forward-moving cars. Using a video frame captured by a drive recorder, the time to collision (TTC) was calculated from the car's velocity and the distance between the car and the bicycle at the moment the bicycle first appeared. The average TTC in cases where bicycles emerged from behind obstructions was shorter than in cases where drivers had unobstructed views of the bicycles. Comparing the TTC of car-to-bicycle near-miss incidents with previously obtained results for car-to-pedestrian near-miss incidents, the average TTC in car-to-bicycle near-miss incidents was significantly longer. When considering the TTC in test protocols for evaluating the safety performance of active safety devices, we propose individual TTCs for the evaluation of cyclist and pedestrian detection, respectively. In the test protocols, the following two scenarios should be employed: a bicycle crossing within an unobstructed view and a bicycle emerging from behind obstructions.
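    The TTC computation itself is a one-liner; a worked example with illustrative numbers:

    ```python
    # Time to collision from distance at first sighting and car speed.
    distance_m = 16.0                # car-to-bicycle distance at first sighting
    speed_kmh = 36.0                 # car velocity from the drive recorder

    ttc_s = distance_m / (speed_kmh / 3.6)
    print(f"TTC = {ttc_s:.1f} s")    # 36 km/h = 10 m/s -> 1.6 s
    ```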

  11. Contextual view of building 926 west elevation; camera facing east. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Contextual view of building 926 west elevation; camera facing east. - Mare Island Naval Shipyard, Wilderman Hall, Johnson Lane, north side adjacent to (south of) Hospital Complex, Vallejo, Solano County, CA

  12. Interior view of hallway on second floor; camera facing south. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Interior view of hallway on second floor; camera facing south. - Mare Island Naval Shipyard, WAVES Officers Quarters, Cedar Avenue, west side between Tisdale Avenue & Eighth Street, Vallejo, Solano County, CA

  13. Contextual view of building 733 along Cedar Avenue; camera facing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Contextual view of building 733 along Cedar Avenue; camera facing southwest. - Mare Island Naval Shipyard, WAVES Officers Quarters, Cedar Avenue, west side between Tisdale Avenue & Eighth Street, Vallejo, Solano County, CA

  14. View of main terrace with mature tree, camera facing southeast ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View of main terrace with mature tree, camera facing southeast - Naval Training Station, Senior Officers' Quarters District, Naval Station Treasure Island, Yerba Buena Island, San Francisco, San Francisco County, CA

  15. View of steel warehouses, building 710 north sidewalk; camera facing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View of steel warehouses, building 710 north sidewalk; camera facing east. - Naval Supply Annex Stockton, Steel Warehouse Type, Between James & Humphreys Drives south of Embarcadero, Stockton, San Joaquin County, CA

  16. 18. Interior view, 'Inside Key Route Inspection Bldg.', view to ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. Interior view, 'Inside Key Route Inspection Bldg.', view to east, August 16, 1939. Note inspection pits and work areas beneath tracks. The Key Route and Interurban Electric Railway Bridge Yard Shop buildings were identical, and this view provides an in-service look at the well-lit interior. While Key Route articulated cars were quite different in design from the Interurban Electric Railway cars, the maintenance requirements were quite similar. The Key Route Bridge Yard Shop building was demolished in the 1970s. Its last use was storage of much of the historic railway equipment now on display at the California State Railroad Museum in Sacramento. - Interurban Electric Railway Bridge Yard Shop, Interstate 80 at Alameda County Postmile 2.0, Oakland, Alameda County, CA

  17. Finding Intrinsic and Extrinsic Viewing Parameters from a Single Realist Painting

    NASA Astrophysics Data System (ADS)

    Jordan, Tadeusz; Stork, David G.; Khoo, Wai L.; Zhu, Zhigang

    In this paper we studied the geometry of a three-dimensional tableau from a single realist painting - Scott Fraser’s Three way vanitas (2006). The tableau contains a carefully chosen complex arrangement of objects including a moth, an egg, a cup, a strand of string, a glass of water, a bone, and a hand mirror. Each of the three plane mirrors presents a different view of the tableau, from a virtual camera behind each mirror symmetric to the artist’s viewing point. Our new contribution was to incorporate single-view geometric information extracted from the direct image of the wooden mirror frames in order to obtain the camera models of both the real camera and the three virtual cameras. Both the intrinsic and extrinsic parameters are estimated for the direct image and for the images in the three plane mirrors depicted within the painting.

  18. North view; Interior view of covered ramp to Station Building ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    North view; Interior view of covered ramp to Station Building from Street Car Waiting House - North Philadelphia Station, 2900 North Broad Street, on northwest corner of Broad Street & Glenwood Avenue, Philadelphia, Philadelphia County, PA

  19. 12. FIRST FLOOR CAR BARN SPACE, SHOWING COLUMNS AND ROOF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. FIRST FLOOR CAR BARN SPACE, SHOWING COLUMNS AND ROOF STRUCTURE. VIEW TO SOUTHEAST. - Commercial & Industrial Buildings, Key City Electric Street Railroad, Powerhouse & Storage Barn, Eighth & Washington Streets, Dubuque, Dubuque County, IA

  20. How much time do drivers need to obtain situation awareness? A laboratory-based study of automated driving.

    PubMed

    Lu, Zhenji; Coster, Xander; de Winter, Joost

    2017-04-01

    Drivers of automated cars may occasionally need to take back manual control after a period of inattentiveness. At present, it is unknown how long it takes to build up situation awareness of a traffic situation. In this study, 34 participants were presented with animated video clips of traffic situations on a three-lane road, from an egocentric viewpoint on a monitor equipped with an eye tracker. Each participant viewed 24 videos of different durations (1, 3, 7, 9, 12, or 20 s). After each video, participants reproduced the end of the video by placing cars in a top-down view, and indicated the relative speeds of the placed cars with respect to the ego-vehicle. Results showed that the longer the video length, the lower the absolute error in the number of placed cars, the lower the total distance error between the placed cars and the actual cars, and the lower the geometric difference between the placed cars and the actual cars. These effects appeared to saturate at video lengths of 7-12 s. The total speed error between placed and actual cars also decreased with video length, but showed no saturation up to 20 s. Glance frequencies to the mirrors decreased with observation time, which is consistent with the notion that participants first estimated the spatial pattern of cars and then directed their attention to individual cars. In conclusion, observers are able to reproduce the layout of a situation quickly, but the assessment of relative speeds takes 20 s or more. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Demonstration of a High-Fidelity Predictive/Preview Display Technique for Telerobotic Servicing in Space

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Bejczy, Antal K.

    1993-01-01

    A highly effective predictive/preview display technique for telerobotic servicing in space under several seconds communication time delay has been demonstrated on a large laboratory scale in May 1993, involving the Jet Propulsion Laboratory as the simulated ground control station and, 2500 miles away, the Goddard Space Flight Center as the simulated satellite servicing set-up. The technique is based on a high-fidelity calibration procedure that enables a high-fidelity overlay of 3-D graphics robot arm and object models over given 2-D TV camera images of robot arm and objects. To generate robot arm motions, the operator can confidently interact in real time with the graphics models of the robot arm and objects overlaid on an actual camera view of the remote work site. The technique also enables the operator to generate high-fidelity synthetic TV camera views showing motion events that are hidden in a given TV camera view or for which no TV camera views are available. The positioning accuracy achieved by this technique for a zoomed-in camera setting was about +/-5 mm, well within the allowable +/-12 mm error margin at the insertion of a 45 cm long tool in the servicing task.
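    The overlay step rests on projecting the 3-D graphics model through the calibrated camera onto the 2-D TV image; a sketch assuming OpenCV, with hypothetical intrinsics, pose, and model points:

    ```python
    # Project 3-D model points into the camera image for graphics overlay.
    import cv2
    import numpy as np

    K = np.array([[800.0, 0.0, 320.0],       # hypothetical camera intrinsics
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    rvec = np.zeros(3)                        # assumed camera-to-world rotation
    tvec = np.array([0.0, 0.0, 2.0])          # assumed translation, meters

    arm_points = np.float32([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.1, 0.2, 0.0]])
    img_pts, _ = cv2.projectPoints(arm_points, rvec, tvec, K, None)
    print(img_pts.reshape(-1, 2))             # pixel positions for the overlay
    ```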

  2. Exploring the World using Street View 360 Images

    NASA Astrophysics Data System (ADS)

    Bailey, J.

    2016-12-01

    The phrase "A Picture is Worth a Thousand Words" is an idiom of unknown 20th-century origin. There is some belief that the modern use of the phrase stems from an article in a 1921 issue of a popular trade journal that used "One Look is Worth A Thousand Words" to promote the use of images in advertisements on the sides of streetcars. There is a certain irony to this, as nearly a century later the camera technologies on "Street View cars" are collecting images that look everywhere at once. However, while it can be fun to drive along the world's streets, it was the development of Street View imaging systems that could be mounted on other modes of transport or capture platforms (Street View Special Collects) that opened the door for these 360 images to become a tool for exploration and storytelling. Using Special Collect imagery captured in "off-road" and extreme locations, scientists are now using Street View images to assess changes to species habitats, show the impact of natural disasters, and even perform "armchair" geology. A powerful example is the imagery captured before and after the 2011 earthquake and tsunami that devastated Japan. However, it is the immersive nature of 360 images that truly allows them to create wonder and awe, especially when combined with Virtual Reality (VR) viewers. Combined with the Street View App or Google Expeditions, VR provides insight into what it is like to swim with sea lions in the Galapagos or climb El Capitan in Yosemite National Park. While these images could never replace experiencing these locations in real life, they can inspire the viewer to explore and learn more about the many wonders of our planet. https://www.google.com/streetview/ https://www.google.com/expeditions/

  3. Electronic Still Camera view of Aft end of Wide Field/Planetary Camera in HST

    NASA Image and Video Library

    1993-12-06

    S61-E-015 (6 Dec 1993) --- A close-up view of the aft part of the new Wide Field/Planetary Camera (WFPC-II) installed on the Hubble Space Telescope (HST). WFPC-II was photographed with the Electronic Still Camera (ESC) from inside Endeavour's cabin as astronauts F. Story Musgrave and Jeffrey A. Hoffman moved it from its stowage position onto the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  4. 14. VIEW FROM TRACK AT NORTHEAST CORNER OF THE PLANT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. VIEW FROM TRACK AT NORTHEAST CORNER OF THE PLANT TOWARD A COAL TRESTLE EAST OF THE PLANT. THE TRESTLE WAS ADDED IN 1910. HOPPER CARS WERE WINCHED OVER A CRUSHER LOCATED DIRECTLY UNDER THE SHED AND DUMPED. A HEATED TRACK SECTION JUST BEYOND THE SHED WAS USED TO THAW FROZEN CARS IN COLD WEATHER. - New York, New Haven & Hartford Railroad, Cos Cob Power Plant, Sound Shore Drive, Greenwich, Fairfield County, CT

  5. INTERIOR VIEW OF FIRST STORY SPACE SHOWING CONCRETE BEAMS; CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    INTERIOR VIEW OF FIRST STORY SPACE SHOWING CONCRETE BEAMS; CAMERA FACING NORTH - Mare Island Naval Shipyard, Transportation Building & Gas Station, Third Street, south side between Walnut Avenue & Cedar Avenue, Vallejo, Solano County, CA

  6. A&M. Guard house (TAN638), contextual view. Built in 1968. Camera ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    A&M. Guard house (TAN-638), contextual view. Built in 1968. Camera faces south. Guard house controlled access to radioactive waste storage tanks beyond and to left of view. Date: February 4, 2003. INEEL negative no. HD-33-4-1 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  7. 64. INTERIOR VIEW OF THE CARBIDE COOLING SHED. VIEW IS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    64. INTERIOR VIEW OF THE CARBIDE COOLING SHED. VIEW IS SHOWING CALCIUM CARBIDE IN COOLING CARS ON THE FLOOR. DECEMBER 26, 1918. - United States Nitrate Plant No. 2, Reservation Road, Muscle Shoals, Muscle Shoals, Colbert County, AL

  8. Lensless imaging for wide field of view

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Yagi, Yasushi

    2015-02-01

    It is desirable to engineer a small camera with a wide field of view (FOV) because of current developments in the field of wearable cameras and computing products, such as action cameras and Google Glass. However, typical approaches for achieving a wide FOV, such as attaching a fisheye lens or convex mirrors, require a trade-off between optics size and FOV. We propose camera optics that achieve a wide FOV while remaining small and lightweight. The proposed optics are a completely lensless, catoptric design. They contain four mirrors: two for wide viewing, and two for focusing the image on the camera sensor. The proposed optics are simple and easily miniaturized, since they consist only of mirrors and are therefore not susceptible to chromatic aberration. We have implemented prototype optics of our lensless concept, attached them to commercial charge-coupled device/complementary metal oxide semiconductor cameras, and conducted experiments to evaluate the feasibility of the proposed optics.

  9. 47. Photocopy of plans. Draftsman unknown, 1916 ADDITION TO CAR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    47. Photocopy of plans. Draftsman unknown, 1916 ADDITION TO CAR BARN. LONGITUDINAL AND CROSS SECTIONS. VIEWS TO SOUTH AND WEST. - Milwaukee Light, Heat & Traction Company, 8336 West Lapham Street, West Allis, Milwaukee County, WI

  10. Stereoscopic camera and viewing systems with undistorted depth presentation and reduced or eliminated erroneous acceleration and deceleration perceptions, or with perceptions produced or enhanced for special effects

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1991-01-01

    Methods for providing stereoscopic image presentation and stereoscopic configurations using stereoscopic viewing systems having converged or parallel cameras may be set up to reduce or eliminate erroneously perceived accelerations and decelerations by proper selection of parameters, such as an image magnification factor, q, and intercamera distance, 2w. For converged cameras, q is selected such that Ve - qwl = 0, where V is the camera distance, e is half the interocular distance of an observer, w is half the intercamera distance, and l is the actual distance from the first nodal point of each camera to the convergence point; for parallel cameras, q is selected to be equal to e/w. While converged cameras cannot be set up to provide fully undistorted three-dimensional views, they can be set up to provide a linear relationship between real and apparent depth, and thus minimize erroneously perceived accelerations and decelerations, for three sagittal planes, x = -w, x = 0, and x = +w, which are indicated to the observer. Parallel cameras can be set up to provide fully undistorted three-dimensional views by controlling the location of the observer and by magnification and shifting of left and right images. In addition, the teachings of this disclosure can be used to provide methods of stereoscopic image presentation and stereoscopic camera configurations that produce a nonlinear relation between perceived and real depth, and erroneously produce or enhance perceived accelerations and decelerations, in order to provide special effects for entertainment, training, or educational purposes.
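
    The two parameter choices above reduce to closed-form expressions for the magnification factor q. A minimal worked sketch in Python, using the abstract's symbols with hypothetical numerical values:

        # Sketch of the parameter selection described in the abstract above.
        # Symbols follow the abstract; all numerical values are hypothetical.
        V = 1.0    # camera distance (m)
        e = 0.032  # half the observer's interocular distance (m)
        w = 0.040  # half the intercamera distance (m), so 2w = 80 mm
        l = 2.0    # first-nodal-point-to-convergence-point distance (m)

        # Converged cameras: choose q so that V*e - q*w*l = 0, i.e. q = V*e/(w*l).
        q_converged = V * e / (w * l)

        # Parallel cameras: q = e/w.
        q_parallel = e / w

        print(f"q (converged): {q_converged:.3f}")
        print(f"q (parallel):  {q_parallel:.3f}")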

  11. The Interdependence of Computers, Robots, and People.

    ERIC Educational Resources Information Center

    Ludden, Laverne; And Others

    Computers and robots are becoming increasingly more advanced, with smaller and cheaper computers now doing jobs once reserved for huge multimillion dollar computers and with robots performing feats such as painting cars and using television cameras to simulate vision as they perform factory tasks. Technicians expect computers to become even more…

  12. Test Image of Earth Rocks by Mars Camera Stereo

    NASA Image and Video Library

    2010-11-16

    This stereo view of terrestrial rocks combines two images taken by a testing twin of the Mars Hand Lens Imager (MAHLI) camera on NASA's Mars Science Laboratory. 3D glasses are necessary to view this image.

  13. 7. DETAIL VIEW OF FIGUEROA STREET VIADUCT. SAME CAMERA POSITION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. DETAIL VIEW OF FIGUEROA STREET VIADUCT. SAME CAMERA POSITION AS CA-265-J-8. LOOKING 266°W. - Arroyo Seco Parkway, Figueroa Street Viaduct, Spanning Los Angeles River, Los Angeles, Los Angeles County, CA

  14. Application on-line imagery for photogrammetry comparison of natural hazards events

    NASA Astrophysics Data System (ADS)

    Voumard, Jérémie; Jaboyedoff, Michel; Derron, Marc-Henri

    2015-04-01

    The airborne (ALS) and terrestrial laser scanner (TLS) technologies are well known and currently among the most common techniques for obtaining 3D terrain models. However, those technologies are expensive and logistically demanding. Another way to obtain DEMs without those inconveniences is photogrammetry, in particular the structure-from-motion (SfM) technique, which allows high-quality 3D model extraction from common digital camera images without the need for expensive equipment. While the usual way to get images for SfM 3D modelling is to take pictures on-site, on-line imagery offers the possibility to get images from many roads and other places. The most common on-line street view resource is Google Street View. Since April 2014, this service proposes a back-in-time function for a certain number of locations. Google Street View images are composed from many pictures taken with a set of panoramic cameras mounted on a platform such as a car roof. Those images are affected by strong deformations, which are not recommended for photogrammetry. At first sight, using street view images for photogrammetry may bring some processing problems. The aim of this project is to study the possibility of making SfM 3D models from Google Street View images with open-source processing software (VisualSFM) and low-cost software (Agisoft). The main interest of this method is to evaluate terrain changes at low cost, without a site visit. Study areas are landslides (such as those of Séchilienne in France) and cliffs near or far away from roads. Human-made terrain changes, like a stone wall collapse caused by heavy rainfall near Monaco, are also studied. For each case, 50 to 200 pictures have been used. The main conditions for obtaining 3D model results are as follows. The first is that a street view image of the area of interest must exist: some countries, like the USA or France, are well documented, while others are covered only partially (Switzerland) or not at all (Germany). The second constraint is to have two or more sets of images at different times. The third condition is to have images of sufficient quality: over- or underexposed images, bad meteorological conditions (fog, rain, etc.), or poor image resolution compromise the SfM process. In our case studies, distances from the road to the object of interest range from 1 to 500 m. First results show that SfM processing with on-line images is not straightforward. Poor-resolution and deformed images with unknown camera features often make the process difficult and unpredictable. The Agisoft software gives poor results because of the abovementioned image properties, while VisualSFM gives interesting results in about two thirds of the cases. It is also demonstrated that 3D photogrammetry is possible with on-line images under certain restrictive conditions. Under these conditions of image quality, the technique can then be used to estimate volume changes.
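
    Gathering input images for such a workflow can be scripted. As an illustration only (the paper does not describe its download method), a sketch using Google's documented Street View Static API, which serves perspective crops of the panoramas; the API key and coordinates are placeholders:

        import urllib.parse
        import urllib.request

        # Sample perspective views around one location as SfM input images.
        API_KEY = "YOUR_API_KEY"                   # placeholder
        BASE = "https://maps.googleapis.com/maps/api/streetview"
        location = "45.069,5.708"                  # hypothetical lat,lng

        for heading in range(0, 360, 30):          # one image every 30 degrees
            params = urllib.parse.urlencode({
                "size": "640x640",
                "location": location,
                "heading": heading,
                "fov": 90,         # narrower FOV limits panoramic distortion
                "pitch": 0,
                "key": API_KEY,
            })
            with urllib.request.urlopen(f"{BASE}?{params}") as resp:
                with open(f"sv_{heading:03d}.jpg", "wb") as f:
                    f.write(resp.read())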

  15. High-Resolution Large Field-of-View FUV Compact Camera

    NASA Technical Reports Server (NTRS)

    Spann, James F.

    2006-01-01

    The need for a high-resolution camera with a large field of view, capable of imaging dim emissions in the far-ultraviolet, is driven by the widely varying intensities of FUV emissions and the spatial/temporal scales of phenomena of interest in the Earth's ionosphere. In this paper, the concept of a camera is presented that is designed to achieve these goals in a lightweight package with sufficient visible-light rejection to be useful for dayside and nightside emissions. The camera employs the concept of self-filtering to achieve good spectral resolution tuned to specific wavelengths. The large field of view is sufficient to image the Earth's disk at geosynchronous altitudes, with a spatial resolution of >20 km. The optics and filters are emphasized.

  16. PROCESS WATER BUILDING, TRA605. CONTEXTUAL VIEW, CAMERA FACING SOUTHEAST. PROCESS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    PROCESS WATER BUILDING, TRA-605. CONTEXTUAL VIEW, CAMERA FACING SOUTHEAST. PROCESS WATER BUILDING AND ETR STACK ARE IN LEFT HALF OF VIEW. TRA-666 IS NEAR CENTER, ABUTTED BY SECURITY BUILDING; TRA-626, AT RIGHT EDGE OF VIEW BEHIND BUS. INL NEGATIVE NO. HD46-34-1. Mike Crane, Photographer, 4/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  17. Intermediate view synthesis algorithm using mesh clustering for rectangular multiview camera system

    NASA Astrophysics Data System (ADS)

    Choi, Byeongho; Kim, Taewan; Oh, Kwan-Jung; Ho, Yo-Sung; Choi, Jong-Soo

    2010-02-01

    A multiview video-based three-dimensional (3-D) video system offers a realistic impression and free view navigation to the user. Efficient compression and intermediate view synthesis are key technologies, since 3-D video systems deal with multiple views. We propose an intermediate view synthesis using a rectangular multiview camera system that is suitable for realizing 3-D video systems. The rectangular multiview camera system not only offers free view navigation both horizontally and vertically, but also can employ three reference views (left, right, and bottom) for intermediate view synthesis. The proposed view synthesis method first represents each reference view as a mesh and then finds the best disparity for each mesh element by stereo matching between reference views. Before stereo matching, we separate the virtual image to be synthesized into several regions to enhance the accuracy of the disparities. The mesh is classified into foreground and background groups by disparity values and then affine transformed. Experiments confirm that the proposed method synthesizes high-quality images and is suitable for 3-D video systems.

  18. A hands-free region-of-interest selection interface for solo surgery with a wide-angle endoscope: preclinical proof of concept.

    PubMed

    Jung, Kyunghwa; Choi, Hyunseok; Hong, Hanpyo; Adikrishna, Arnold; Jeon, In-Ho; Hong, Jaesung

    2017-02-01

    A hands-free region-of-interest (ROI) selection interface is proposed for solo surgery using a wide-angle endoscope. A wide-angle endoscope provides images with a larger field of view than a conventional endoscope. With an appropriate selection interface for a ROI, surgeons can also obtain a detailed local view, as if they had moved a conventional endoscope to a specific position and direction. To manipulate the endoscope without releasing the surgical instrument in hand, a mini-camera is attached to the instrument, and the images taken by the attached camera are analyzed. When a surgeon moves the instrument, the instrument orientation is calculated by image processing. Surgeons can select the ROI with this instrument movement after switching from 'task mode' to 'selection mode.' The accelerated KAZE algorithm is used to track the features of the camera images once the instrument is moved. Both the wide-angle and detailed local views are displayed simultaneously, and a surgeon can move the local view area by moving the mini-camera attached to the surgical instrument. Local view selection for a solo surgery was performed without releasing the instrument. The accuracy of camera pose estimation was not significantly different between camera resolutions, but it was significantly different between background camera images with different numbers of features (P < 0.01). The success rate of ROI selection diminished as the number of separated regions increased. However, up to 12 separated regions with a region size of 160 × 160 pixels were selected with no failure. Surgical tasks on a phantom model and a cadaver were attempted to verify the feasibility in a clinical environment. Hands-free endoscope manipulation without releasing the instruments in hand was achieved. The proposed method requires only a small, low-cost camera and image processing. The technique enables surgeons to perform solo surgeries without a camera assistant.
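
    The abstract names the accelerated KAZE (AKAZE) algorithm as the feature tracker behind the instrument-motion sensing. A minimal sketch of that step with OpenCV, assuming two consecutive grayscale frames from the instrument-mounted camera (file names are placeholders):

        import cv2
        import numpy as np

        prev = cv2.imread("frame_prev.png", cv2.IMREAD_GRAYSCALE)
        curr = cv2.imread("frame_curr.png", cv2.IMREAD_GRAYSCALE)

        # Detect and describe AKAZE features in both frames.
        akaze = cv2.AKAZE_create()
        kp1, des1 = akaze.detectAndCompute(prev, None)
        kp2, des2 = akaze.detectAndCompute(curr, None)

        # AKAZE descriptors are binary, so match with Hamming distance.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

        # Crude motion estimate: mean displacement of the best matches,
        # which could then drive the ROI selection direction.
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches[:50]])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches[:50]])
        dx, dy = (pts2 - pts1).mean(axis=0)
        print(f"mean feature displacement: dx={dx:.1f} px, dy={dy:.1f} px")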

  19. 31. THIS PARTIAL VIEW OF HULETT NO. 4, LOOKING WEST, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    31. THIS PARTIAL VIEW OF HULETT NO. 4, LOOKING WEST, SHOWS THE RAILS AND WHEELS WHICH PERMIT THE UNLOADERS' LATERAL MOVEMENT ALONG THE FACE OF THE DOCK. ELECTRIC SHUNT CARS, SUCH AS THE ONE SEEN HERE, SHUTTLE EMPTY RAIL CARS INTO POSITION BENEATH THE HOPPERS AND ASSEMBLE THEM INTO TRAINS WHEN THEY ARE FULL. - Pennsylvania Railway Ore Dock, Lake Erie at Whiskey Island, approximately 1.5 miles west of Public Square, Cleveland, Cuyahoga County, OH

  20. Conceptual design of a neutron camera for MAST Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiszflog, M., E-mail: matthias.weiszflog@physics.uu.se; Sangaroon, S.; Cecconello, M.

    2014-11-15

    This paper presents two different conceptual designs of neutron cameras for Mega Ampere Spherical Tokamak (MAST) Upgrade. The first one consists of two horizontal cameras, one equatorial and one vertically down-shifted by 65 cm. The second design, viewing the plasma in a poloidal section, also consists of two cameras, one radial and the other one with a diagonal view. Design parameters for the different cameras were selected on the basis of neutron transport calculations and on a set of target measurement requirements taking into account the predicted neutron emissivities in the different MAST Upgrade operating scenarios. Based on a comparison of the cameras' profile resolving power, the horizontal cameras are suggested as the best option.

  1. DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF A VIDEO CAMERA POSITIONED ALONG THE PERIMETER OF THE MLP - Cape Canaveral Air Force Station, Launch Complex 39, Mobile Launcher Platforms, Launcher Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  2. 2. View from same camera position facing 232 degrees southwest ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. View from same camera position facing 232 degrees southwest showing abandoned section of old grade - Oak Creek Administrative Center, One half mile east of Zion-Mount Carmel Highway at Oak Creek, Springdale, Washington County, UT

  3. See around the corner using active imaging

    NASA Astrophysics Data System (ADS)

    Steinvall, Ove; Elmqvist, Magnus; Larsson, Håkan

    2011-11-01

    This paper investigates the prospects of "seeing around the corner" using active imaging. A monostatic active imaging system offers interesting capabilities in the presence of glossy reflecting objects. Examples of such surfaces are windows in buildings and cars, calm water, signs, and vehicle surfaces. During daylight it might well be possible to use mirrorlike reflection by the naked eye or a CCD camera for non-line-of-sight imaging. However, the advantage of active imaging is that one controls the illumination. This not only allows for low-light and night utilization but also for use in cases where the sun or other interfering lights limit the non-line-of-sight imaging possibility. The range resolution obtained by time gating will reduce disturbing direct reflections and allow simultaneous viewing in several directions using range discrimination. Measurements and theoretical considerations in this report support the idea of using a laser to "see around the corner." Examples of images and reflectivity measurements are presented, together with examples of potential system applications.

  4. Evaluation of stereoscopic video cameras synchronized with the movement of an operator's head on the teleoperation of the actual backhoe shovel

    NASA Astrophysics Data System (ADS)

    Minamoto, Masahiko; Matsunaga, Katsuya

    1999-05-01

    Operator performance while using a remote-controlled backhoe shovel is described for three different stereoscopic viewing conditions: direct view, fixed stereoscopic cameras connected to a helmet-mounted display (HMD), and a rotating stereo camera connected and slaved to the head orientation of a free-moving stereo HMD. Results showed that the head-slaved system provided the best performance.

  5. DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM ESOUTH, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL VIEW OF VIDEO CAMERA, MAIN FLOOR LEVEL, PLATFORM E-SOUTH, HB-3, FACING SOUTHWEST - Cape Canaveral Air Force Station, Launch Complex 39, Vehicle Assembly Building, VAB Road, East of Kennedy Parkway North, Cape Canaveral, Brevard County, FL

  6. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    NASA Astrophysics Data System (ADS)

    Thoeni, K.; Giacomini, A.; Murtagh, R.; Kniest, E.

    2014-06-01

    This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements for obtaining results comparable to those of the TLS. The cameras used for this study ranged from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from distances of approximately 5 to 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS, with the latter taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate their applicability to geotechnical problems.
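
    The cloud-to-cloud comparison step can be reproduced with open tooling. A sketch using Open3D in place of CloudCompare (a substitute tool choice; the file names are placeholders for a camera-derived SfM cloud and the TLS reference):

        import numpy as np
        import open3d as o3d

        sfm = o3d.io.read_point_cloud("sfm_camera.ply")      # camera model
        tls = o3d.io.read_point_cloud("tls_reference.ply")   # ground truth

        # Distance from every SfM point to its nearest TLS neighbour.
        dists = np.asarray(sfm.compute_point_cloud_distance(tls))
        print(f"mean deviation: {dists.mean() * 1000:.1f} mm, "
              f"95th percentile: {np.percentile(dists, 95) * 1000:.1f} mm")

        # Colour-code the deviation for visual inspection, as in the study.
        colors = np.zeros((len(dists), 3))
        colors[:, 0] = np.clip(dists / max(dists.max(), 1e-9), 0, 1)
        sfm.colors = o3d.utility.Vector3dVector(colors)
        o3d.io.write_point_cloud("sfm_vs_tls_colored.ply", sfm)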

  7. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  8. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  9. A Robust Mechanical Sensing System for Unmanned Sea Surface Vehicles

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric A.; Magnone, Lee J.; Huntsberger, Terrance; Aghazarian, Hrand; Padgett, Curtis W.; Trotz, David C.; Garrett, Michael S.

    2009-01-01

    The need for autonomous navigation and intelligent control of unmanned sea surface vehicles requires a mechanically robust sensing architecture that is watertight, durable, and insensitive to vibration and shock loading. The sensing system developed here comprises four black-and-white cameras and a single color camera. The cameras are rigidly mounted to a camera bar that can be reconfigured for mounting on multiple vehicles, and they act as both navigational cameras and application cameras. The cameras are housed in watertight casings to protect them and their electronics from moisture and wave splashes. Two of the black-and-white cameras are positioned to provide lateral vision. They are angled away from the front of the vehicle at horizontal angles to provide ideal fields of view for mapping and autonomous navigation. The other two black-and-white cameras are positioned at an angle into the color camera's field of view to support vehicle applications. These two cameras provide an overlap, as well as a backup to the front camera. The color camera is positioned directly in the middle of the bar, aimed straight ahead. This system is applicable to any sea-going vehicle, both on Earth and in space.

  10. The High Definition Earth Viewing (HDEV) Payload

    NASA Technical Reports Server (NTRS)

    Muri, Paul; Runco, Susan; Fontanot, Carlos; Getteau, Chris

    2017-01-01

    The High Definition Earth Viewing (HDEV) payload enables long-term experimentation with four commercial-off-the-shelf (COTS) high-definition video cameras mounted on the exterior of the International Space Station. The payload enables testing of cameras in the space environment. The HDEV cameras transmit imagery continuously to an encoder that then sends the video signal via Ethernet through the space station for downlink. The encoder, cameras, and other electronics are enclosed in a box pressurized to approximately one atmosphere, containing dry nitrogen, to provide a level of protection to the electronics from the space environment. The encoded video format supports streaming live video of Earth for viewing online. Camera sensor types include charge-coupled device and complementary metal-oxide semiconductor. Received imagery data are analyzed on the ground to evaluate camera sensor performance. Since payload deployment, minimal degradation of imagery quality has been observed. The HDEV payload continues to operate by live streaming and analyzing imagery. Results from the experiment reduce risk in the selection of cameras that could be considered for future use on the International Space Station and other spacecraft. This paper discusses the payload development, end-to-end architecture, experiment operation, resulting image analysis, and future work.

  11. 2D Measurements of the Balmer Series in Proto-MPEX using a Fast Visible Camera Setup

    NASA Astrophysics Data System (ADS)

    Lindquist, Elizabeth G.; Biewer, Theodore M.; Ray, Holly B.

    2017-10-01

    The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device with densities up to 10²⁰ m⁻³ and temperatures up to 20 eV. Broadband spectral measurements show the visible emission spectra are due solely to the Balmer lines of deuterium. Monochromatic and RGB color Sanstreak SC1 Edgertronic fast visible cameras capture high-speed video of plasmas in Proto-MPEX. The color camera is equipped with a long-pass 450 nm filter and an internal Bayer filter to view the Dα line at 656 nm on the red channel and the Dβ line at 486 nm on the blue channel. The monochromatic camera has a 434 nm narrow bandpass filter to view the Dγ intensity. In the setup, a 50/50 beam splitter is used so both cameras image the same region of the plasma discharge. Camera images were aligned to each other by viewing a grid, ensuring 1-pixel registration between the two cameras. A uniform-intensity calibrated white light source was used to perform a pixel-to-pixel relative and an absolute intensity calibration for both cameras. Python scripts combined the dual camera data, rendering the Dα, Dβ, and Dγ intensity ratios. Observations from Proto-MPEX discharges will be presented. This work was supported by U.S. D.O.E. contract DE-AC05-00OR22725.
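
    The abstract notes that Python scripts combined the dual-camera data into line ratios. A minimal sketch of that combination step, assuming the frames are already pixel-registered and intensity-calibrated (the arrays below are synthetic placeholders for real frames):

        import numpy as np

        h, w = 480, 640
        d_alpha = np.random.rand(h, w) + 0.5  # colour camera, red channel (656 nm)
        d_beta  = np.random.rand(h, w) + 0.2  # colour camera, blue channel (486 nm)
        d_gamma = np.random.rand(h, w) + 0.1  # monochrome camera, 434 nm filter

        eps = 1e-6  # guard against division by zero in dark regions
        ratio_ab = d_alpha / (d_beta + eps)   # D-alpha / D-beta map
        ratio_bg = d_beta / (d_gamma + eps)   # D-beta / D-gamma map

        print("median D-alpha/D-beta:", np.median(ratio_ab))
        print("median D-beta/D-gamma:", np.median(ratio_bg))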

  12. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  13. An evaluation of video cameras for collecting observational data on sanctuary-housed chimpanzees (Pan troglodytes).

    PubMed

    Hansen, Bethany K; Fultz, Amy L; Hopper, Lydia M; Ross, Stephen R

    2018-05-01

    Video cameras are increasingly being used to monitor captive animals in zoo, laboratory, and agricultural settings. This technology may also be useful in sanctuaries with large and/or complex enclosures. However, the cost of camera equipment and a lack of formal evaluations regarding the use of cameras in sanctuary settings make it challenging for facilities to decide whether and how to implement this technology. To address this, we evaluated the feasibility of using a video camera system to monitor chimpanzees at Chimp Haven. We viewed a group of resident chimpanzees in a large forested enclosure and compared observations collected in person and with remote video cameras. We found that via camera, the observer viewed fewer chimpanzees in some outdoor locations (GLMM post hoc test: est. = 1.4503, SE = 0.1457, Z = 9.951, p < 0.001) and identified a lower proportion of chimpanzees (GLMM post hoc test: est. = -2.17914, SE = 0.08490, Z = -25.666, p < 0.001) compared to in-person observations. However, the observer could view the 2 ha enclosure 15 times faster by camera compared to in person. In addition to these results, we provide recommendations to animal facilities considering the installation of a video camera system. Despite some limitations of remote monitoring, we posit that there are substantial benefits of using camera systems in sanctuaries to facilitate animal care and observational research. © 2018 Wiley Periodicals, Inc.

  14. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, where an object is in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
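
    A sketch of the per-camera motion detection stage, with OpenCV background subtraction as one plausible anomaly detector (the abstract does not name a specific detector; the video paths are placeholders):

        import cv2

        caps = [cv2.VideoCapture("camera_a.mp4"), cv2.VideoCapture("camera_b.mp4")]
        subtractors = [cv2.createBackgroundSubtractorMOG2() for _ in caps]
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

        while True:
            frames = [cap.read() for cap in caps]
            if not all(ok for ok, _ in frames):
                break
            detections = []
            for (ok, frame), sub in zip(frames, subtractors):
                mask = sub.apply(frame)                        # moving pixels
                mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
                contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                               cv2.CHAIN_APPROX_SIMPLE)
                boxes = [cv2.boundingRect(c) for c in contours
                         if cv2.contourArea(c) > 500]          # drop noise blobs
                detections.append(boxes)
            # detections[0] and detections[1] hold same-instant boxes from the
            # two views; cross-view registration of common features and
            # perspective adjustment would follow here.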

  15. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    PubMed Central

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J.

    2015-01-01

    Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4, the GoPro was linked to a WiFi remote and controlled by the surgeon. Results: Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video. PMID:25750851

  16. 17. Photocopy of a photograph, source and date unknown GENERAL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    17. Photocopy of a photograph, source and date unknown GENERAL VIEW OF FRONT FACADE OF MT. CLARE STATION; PASSENGER CAR SHOP IN REAR - Baltimore & Ohio Railroad, Mount Clare Passenger Car Shop, Southwest corner of Pratt & Poppleton Streets, Baltimore, Independent City, MD

  17. Efficient view based 3-D object retrieval using Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Recent research effort has been dedicated to view-based 3-D object retrieval, because 3-D objects are highly discriminative and have a multi-view representation. State-of-the-art methods depend heavily on their own camera array settings for capturing views of the 3-D object and use a complex Zernike descriptor and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. In order to move toward a general framework for efficient 3-D object retrieval that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View-Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, which means views can be captured from any direction without any camera array restriction. The views (including query views) are clustered to generate view clusters, which are then used to build the query model with the HMM. In our proposed method, the HMM is used in a twofold manner: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained using these view clusters, and retrieval is performed by combining the query model with HMM decoding. The proposed approach removes the static camera array setting for view capturing and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme shows better performance than existing methods.
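
    A minimal sketch of the twofold HMM use described above (train a model on the query object's views, then score database objects by likelihood), using the hmmlearn package with random placeholder feature vectors standing in for per-view descriptors:

        import numpy as np
        from hmmlearn import hmm

        rng = np.random.default_rng(0)

        def view_features(n_views=12, dim=32):
            """Placeholder per-view descriptors for one object."""
            return rng.normal(size=(n_views, dim))

        query_views = view_features()
        database = {f"object_{i}": view_features() for i in range(5)}

        # Training (HMM estimation): fit a model to the query's view set.
        model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                                n_iter=50)
        model.fit(query_views)

        # Retrieval (HMM decoding): rank database objects by the
        # log-likelihood of their views under the query model.
        ranked = sorted(database, key=lambda k: model.score(database[k]),
                        reverse=True)
        for name in ranked:
            print(name, f"{model.score(database[name]):.1f}")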

  18. Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor

    PubMed Central

    Hoang, Toan Minh; Baek, Na Rae; Cho, Se Woon; Kim, Ki Wan; Park, Kang Ryoung

    2017-01-01

    Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, Caltech dataset, Santiago Lanes dataset (SLD), and Road Marking dataset, showed that our method outperformed conventional lane detection methods. PMID:29143764
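
    A much-simplified sketch of the line-segment stage of such a detector, with OpenCV's probabilistic Hough transform standing in for the paper's line segment detector and the fuzzy shadow handling omitted (the image path is a placeholder):

        import cv2
        import numpy as np

        img = cv2.imread("road.jpg")
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)        # crude illumination normalization
        edges = cv2.Canny(gray, 50, 150)

        # Keep only the lower half of the frame, where lane markings appear.
        mask = np.zeros_like(edges)
        mask[edges.shape[0] // 2:, :] = 255
        edges = cv2.bitwise_and(edges, mask)

        segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                   threshold=50, minLineLength=40, maxLineGap=20)
        if segments is not None:
            for x1, y1, x2, y2 in segments[:, 0]:
                # Discard near-horizontal segments, rarely lane markings.
                if abs(y2 - y1) > 0.3 * abs(x2 - x1):
                    cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.imwrite("lanes.jpg", img)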

  19. A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks.

    PubMed

    Su, Po-Chang; Shen, Ju; Xu, Wanxin; Cheung, Sen-Ching S; Luo, Ying

    2018-01-15

    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment results of the merged point clouds.
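
    Of the view transformation functions the paper tests, rigid transformation is the baseline. A sketch of estimating a rigid transform from 3D point correspondences (such as tracked centers of the spherical calibration object) with the standard SVD/Kabsch method, verified on synthetic data:

        import numpy as np

        def rigid_transform(A, B):
            """R, t minimizing ||(R @ A.T).T + t - B|| over Nx3 correspondences."""
            cA, cB = A.mean(axis=0), B.mean(axis=0)
            H = (A - cA).T @ (B - cB)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:       # guard against a reflection solution
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            return R, cB - R @ cA

        rng = np.random.default_rng(1)
        A = rng.normal(size=(20, 3))           # e.g. sphere centers, camera 1
        R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        if np.linalg.det(R_true) < 0:          # keep it a proper rotation
            R_true[:, 0] *= -1
        t_true = np.array([0.5, -0.2, 1.0])
        B = A @ R_true.T + t_true              # same centers seen by camera 2

        R, t = rigid_transform(A, B)
        print("max rotation error:", np.abs(R - R_true).max())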

  20. A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks †

    PubMed Central

    Shen, Ju; Xu, Wanxin; Luo, Ying

    2018-01-01

    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-art systems using both quantitative measurements and visual alignment results of the merged point clouds. PMID:29342968

  1. HST Solar Arrays photographed by Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This medium close-up view of one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view shows the cell side of the minus V-2 panel. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  2. The New Cloud Absorption Radiometer (CAR) Software: One Model for NASA Remote Sensing Virtual Instruments

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Rapchun, David A.; Jones, Hollis H.

    2001-01-01

    The Cloud Absorption Radiometer (CAR) instrument has been the most frequently used airborne instrument built in-house at NASA Goddard Space Flight Center, having flown scientific research missions on board various aircraft to many locations in the United States, the Azores, Brazil, and Kuwait since 1983. The CAR instrument is capable of measuring light scattered by clouds in fourteen spectral bands in the UV, visible, and near-infrared regions. This document describes the control, data acquisition, display, and file storage software for the new version of CAR. This software completely replaces the prior CAR Data System and Control Panel with a compact and robust virtual instrument computer interface. Additionally, the instrument is now usable for the first time for taking data in an off-aircraft mode. The new instrument is controlled via a software interface developed in LabVIEW v5.1.1 that utilizes (1) serial port writes to send commands to the controller module of the instrument, and (2) serial port reads to acquire data from the controller module of the instrument. Step-by-step operational procedures are provided in this document. A suite of other software programs has been developed to complement the actual CAR virtual instrument. These programs include: (1) a simulator mode that allows pretesting of new features that might be added in the future, as well as demonstrations to CAR customers and development at times when the instrument/hardware is off-location, and (2) a post-experiment data viewer that can be used to view all segments of individual data cycles and to locate positions where 'start' and 'stop' byte sequences were incorrectly formulated by the instrument controller. The CAR software described here is expected to be the basis for CAR operation for many missions and many years to come.
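
    The serial write/read pattern described above translates directly to other environments. An illustrative Python sketch with pyserial rather than LabVIEW (the port name, baud rate, and command bytes are placeholders, not the real CAR protocol):

        import serial

        with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2) as port:
            # (1) serial port write: send a command to the controller module.
            port.write(b"START\r\n")

            # (2) serial port read: acquire one chunk of a data cycle. A real
            # parser would then locate the 'start'/'stop' byte sequences the
            # abstract mentions and validate the framing.
            raw = port.read(1024)
            print(f"received {len(raw)} bytes")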

  3. A wide-angle camera module for disposable endoscopy

    NASA Astrophysics Data System (ADS)

    Shim, Dongha; Yeon, Jesun; Yi, Jason; Park, Jongwon; Park, Soo Nam; Lee, Nanhee

    2016-08-01

    A wide-angle miniaturized camera module for disposable endoscopes is demonstrated in this paper. A lens module with a 150° angle of view (AOV) is designed and manufactured. All-plastic injection-molded lenses and a commercial CMOS image sensor are employed to reduce the manufacturing cost. The image sensor and an LED illumination unit are assembled with the lens module. The camera module does not include a camera processor, to further reduce its size and cost. The size of the camera module is 5.5 × 5.5 × 22.3 mm³. The diagonal field of view (FOV) of the camera module is measured to be 110°. A prototype of a disposable endoscope is implemented to perform pre-clinical animal testing. The esophagus of an adult beagle dog is observed. These results demonstrate the feasibility of a cost-effective and high-performance camera module for disposable endoscopy.

  4. Automatic Camera Calibration Using Multiple Sets of Pairwise Correspondences.

    PubMed

    Vasconcelos, Francisco; Barreto, Joao P; Boyer, Edmond

    2018-04-01

    We propose a new method to add an uncalibrated node into a network of calibrated cameras using only pairwise point correspondences. While previous methods perform this task using triple correspondences, these are often difficult to establish when there is limited overlap between different views. In such challenging cases we must rely on pairwise correspondences, and our solution becomes more advantageous. Our method includes an 11-point minimal solution for the intrinsic and extrinsic calibration of a camera from pairwise correspondences with two other calibrated cameras, and a new inlier selection framework that extends the traditional RANSAC family of algorithms to sampling across multiple datasets. Our method is validated on different application scenarios where a lack of triple correspondences might occur: addition of a new node to a camera network; calibration and motion estimation of a moving camera inside a camera network; and addition of views with limited overlap to a Structure-from-Motion model.

  5. 3. CONTEXTUAL VIEW OF WASTE CALCINING FACILITY, CAMERA FACING NORTHEAST. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. CONTEXTUAL VIEW OF WASTE CALCINING FACILITY, CAMERA FACING NORTHEAST. SHOWS RELATIONSHIP BETWEEN DECONTAMINATION ROOM, ADSORBER REMOVAL HATCHES (FLAT ON GRADE), AND BRIDGE CRANE. INEEL PROOF NUMBER HD-17-2. - Idaho National Engineering Laboratory, Old Waste Calcining Facility, Scoville, Butte County, ID

  6. Machine vision based teleoperation aid

    NASA Technical Reports Server (NTRS)

    Hoff, William A.; Gatrell, Lance B.; Spofford, John R.

    1991-01-01

    When teleoperating a robot using video from a remote camera, it is difficult for the operator to gauge depth and orientation from a single view. In addition, there are situations where a camera mounted for viewing by the teleoperator during a teleoperation task may not be able to see the tool tip, or the viewing angle may not be intuitive (requiring extensive training to reduce the risk of incorrect or dangerous moves by the teleoperator). A machine vision based teleoperator aid is presented which uses the operator's camera view to compute an object's pose (position and orientation), and then overlays onto the operator's screen information on the object's current and desired positions. The operator can choose to display orientation and translation information as graphics and/or text. This aid provides easily assimilated depth and relative orientation information to the teleoperator. The camera may be mounted at any known orientation relative to the tool tip. A preliminary experiment with human operators was conducted and showed that task accuracies were significantly greater with than without this aid.
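
    A sketch of the pose-from-a-single-view computation at the core of such an aid, using OpenCV's solvePnP with hypothetical model points, pose, and intrinsics (here the image points are simulated by projecting a known pose; a real system would obtain them from feature detection):

        import cv2
        import numpy as np

        K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
        dist = np.zeros(5)                 # assume no lens distortion

        # Six known 3D feature points on the object, in the object frame (m).
        object_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0],
                               [0.1, 0.1, 0], [0, 0, 0.1], [0.1, 0, 0.1]])

        # Simulate where those points appear in the operator's camera view.
        rvec_true = np.array([[0.1], [-0.2], [0.05]])
        tvec_true = np.array([[0.05], [-0.02], [0.8]])
        image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true,
                                         K, dist)

        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
        if ok:
            # rvec/tvec give the object pose in the camera frame; the offset
            # from a desired pose drives the graphics/text overlay shown to
            # the teleoperator.
            print("recovered position (m):", tvec.ravel())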

  7. Use of a microscope-mounted wide-angle point of view camera to record optimal hand position in ocular surgery.

    PubMed

    Gooi, Patrick; Ahmed, Yusuf; Ahmed, Iqbal Ike K

    2014-07-01

    We describe the use of a microscope-mounted wide-angle point-of-view camera to record optimal hand positions in ocular surgery. The camera is mounted close to the objective lens beneath the surgeon's oculars and faces the same direction as the surgeon, providing a surgeon's view. A wide-angle lens enables viewing of both hands simultaneously and does not require repositioning the camera during the case. Proper hand positioning and instrument placement through microincisions are critical for effective and atraumatic handling of tissue within the eye. Our technique has potential in the assessment and training of optimal hand position for surgeons performing intraocular surgery. It is an innovative way to routinely record instrument and operating hand positions in ophthalmic surgery and has minimal requirements in terms of cost, personnel, and operating-room space. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  8. Static omnidirectional stereoscopic display system

    NASA Astrophysics Data System (ADS)

    Barton, George G.; Feldman, Sidney; Beckstead, Jeffrey A.

    1999-11-01

    A unique three-camera stereoscopic omnidirectional viewing system based on the periscopic panoramic camera described in the 11/98 SPIE proceedings (AM13) is presented. The three panoramic cameras are equilaterally combined so that each leg of the triangle approximates the human interocular spacing, allowing each panoramic camera to view 240° of the panoramic scene, the most counterclockwise 120° being the left-eye field and the other 120° segment being the right-eye field. Field definition may be by green/red filtration or by time discrimination of the video signal. In the first instance, two-color spectacles are used in viewing the display; in the second instance, LCD goggles are used to differentiate the right/left fields. Radially scanned vidicons or re-mapped CCDs may be used. The display consists of three vertically stacked 120° segments of the panoramic field of view with two fields per frame, Field A being the left-eye display and Field B the right-eye display.

  9. Intelligent viewing control for robotic and automation systems

    NASA Astrophysics Data System (ADS)

    Schenker, Paul S.; Peters, Stephen F.; Paljug, Eric D.; Kim, Won S.

    1994-10-01

    We present a new system for supervisory automated control of multiple remote cameras. Our primary purpose in developing this system has been to provide capability for knowledge-based, 'hands-off' viewing during execution of teleoperation/telerobotic tasks. The reported technology has broader applicability to remote surveillance, telescience observation, automated manufacturing workcells, etc. We refer to this new capability as 'Intelligent Viewing Control (IVC),' distinguishing it from simple programmed camera motion control. In the IVC system, camera viewing assignment, sequencing, positioning, panning, and parameter adjustment (zoom, focus, aperture, etc.) are invoked and interactively executed in real time by a knowledge-based controller, drawing on a priori known task models and constraints, including operator preferences. This multi-camera control is integrated with a real-time, high-fidelity 3D graphics simulation, which is correctly calibrated in perspective to the actual cameras and their platform kinematics (translation/pan-tilt). Such a merged graphics-with-video design allows the system user to preview and modify the planned ('choreographed') viewing sequences. Further, during actual task execution, the system operator has available both the resulting optimized video sequence and supplementary graphics views from arbitrary perspectives. IVC, including operator-interactive designation of robot task actions, is presented to the user as a well-integrated video-graphic single-screen user interface allowing easy access to all relevant telerobot communication/command/control resources. We describe and show pictorial results of a preliminary IVC system implementation for telerobotic servicing of a satellite.

  10. ETR CRITICAL FACILITY, TRA654. CONTEXTUAL VIEW. CAMERA ON ROOF OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    ETR CRITICAL FACILITY, TRA-654. CONTEXTUAL VIEW. CAMERA ON ROOF OF MTR BUILDING AND FACING SOUTH. ETR AND ITS COOLANT BUILDING AT UPPER PART OF VIEW. ETR COOLING TOWER NEAR TOP EDGE OF VIEW. EXCAVATION AT CENTER IS FOR ETR CF. CENTER OF WHICH WILL CONTAIN POOL FOR REACTOR. NOTE CHOPPER TUBE PROCEEDING FROM MTR IN LOWER LEFT OF VIEW, DIAGONAL TOWARD LEFT. INL NEGATIVE NO. 56-4227. Jack L. Anderson, Photographer, 12/18/1956 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  11. Effect of image scaling on stereoscopic movie experience

    NASA Astrophysics Data System (ADS)

    Häkkinen, Jukka P.; Hakala, Jussi; Hannuksela, Miska; Oittinen, Pirkko

    2011-03-01

    Camera separation affects the perceived depth in stereoscopic movies. Through control of the separation and thereby the depth magnitudes, the movie can be kept comfortable but interesting. In addition, the viewing context has a significant effect on the perceived depth, as a larger display and longer viewing distances also contribute to an increase in depth. Thus, if the content is to be viewed in multiple viewing contexts, the depth magnitudes should be carefully planned so that the content always looks acceptable. Alternatively, the content can be modified for each viewing situation. To identify the significance of changes due to the viewing context, we studied the effect of stereoscopic camera base distance on the viewer experience in three different situations: 1) small sized video and a viewing distance of 38 cm, 2) television and a viewing distance of 158 cm, and 3) cinema and a viewing distance of 6-19 meters. We examined three different animations with positive parallax. The results showed that the camera distance had a significant effect on the viewing experience in small display/short viewing distance situations, in which the experience ratings increased until the maximum disparity in the scene was 0.34 - 0.45 degrees of visual angle. After 0.45 degrees, increasing the depth magnitude did not affect the experienced quality ratings. Interestingly, changes in the camera distance did not affect the experience ratings in the case of television or cinema if the depth magnitudes were below one degree of visual angle. When the depth was greater than one degree, the experience ratings began to drop significantly. These results indicate that depth magnitudes have a larger effect on the viewing experience with a small display. When a stereoscopic movie is viewed from a larger display, other experiences might override the effect of depth magnitudes.
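
    The disparity thresholds above are expressed in degrees of visual angle, which relate on-screen disparity d and viewing distance D by angle = 2*atan(d/(2*D)). A quick check of that conversion with hypothetical screen disparities:

        import math

        def disparity_deg(d_m, view_dist_m):
            """On-screen disparity (m) at a viewing distance (m) -> degrees."""
            return math.degrees(2 * math.atan(d_m / (2 * view_dist_m)))

        print(disparity_deg(0.003, 0.38))  # ~0.45 deg: small display at 38 cm
        print(disparity_deg(0.012, 1.58))  # ~0.44 deg: television at 158 cm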

  12. Television monitor field shifter and an opto-electronic method for obtaining a stereo image of optimal depth resolution and reduced depth distortion on a single screen

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1989-01-01

    A method and apparatus is developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. Static and dynamic depth distortion and depth resolution tradeoff is provided. Cameras obtaining the images for a stereo view are converged at a convergence point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of zoom lenses for the cameras are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning a stereo view-collecting camera system about a circle which passes through the convergence point and the camera's first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.

  13. Martian Terrain Near Curiosity Precipice Target

    NASA Image and Video Library

    2016-12-06

    This view from the Navigation Camera (Navcam) on the mast of NASA's Curiosity Mars rover shows rocky ground within view while the rover was working at an intended drilling site called "Precipice" on lower Mount Sharp. The right-eye camera of the stereo Navcam took this image on Dec. 2, 2016, during the 1,537th Martian day, or sol, of Curiosity's work on Mars. On the previous sol, an attempt to collect a rock-powder sample with the rover's drill ended before drilling began. This led to several days of diagnostic work while the rover remained in place, during which it continued to use cameras and a spectrometer on its mast, plus environmental monitoring instruments. In this view, hardware visible at lower right includes the sundial-theme calibration target for Curiosity's Mast Camera. http://photojournal.jpl.nasa.gov/catalog/PIA21140

  14. Explosives Instrumentation Group Trial 6/77-Propellant Fire Trials (Series Two).

    DTIC Science & Technology

    1981-10-01

    frames/s. A 19 mm Sony U-Matic video cassette recorder (VCR) and camera were used to view the hearth from a tower 100 m from ground-zero (GZ). Normal...camera started. This procedure permitted increased recording time of the event. A 19 mm Sony U-Matic VCR and camera were used to view the container...

  15. Polar Views of Planet Earth.

    ERIC Educational Resources Information Center

    Brochu, Michel

    1983-01-01

    In August 1981, the National Aeronautics and Space Administration launched Dynamics Explorer 1 into polar orbit, equipped with three cameras built to view the Northern Lights. The cameras can photograph the faint light of the aurora borealis without being blinded by the earth's bright dayside. Photographs taken by the satellite are provided. (JN)

  16. Late afternoon view of the interior of the westernmost wall ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Late afternoon view of the interior of the westernmost wall section to be removed; camera facing north. (Note: lowered camera position significantly to minimize background distractions including the porta-john, building, and telephone pole) - Beaufort National Cemetery, Wall, 1601 Boundary Street, Beaufort, Beaufort County, SC

  17. HOT CELL BUILDING, TRA-632. CONTEXTUAL VIEW ALONG WALLEYE AVENUE, CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    HOT CELL BUILDING, TRA-632. CONTEXTUAL VIEW ALONG WALLEYE AVENUE, CAMERA FACING EASTERLY. HOT CELL BUILDING IS AT CENTER LEFT OF VIEW; THE LOW-BAY PROJECTION WITH LADDER IS THE TEST TRAIN ASSEMBLY FACILITY, ADDED IN 1968. MTR BUILDING IS IN LEFT OF VIEW. HIGH-BAY BUILDING AT RIGHT IS THE ENGINEERING TEST REACTOR BUILDING, TRA-642. INL NEGATIVE NO. HD46-32-1. Mike Crane, Photographer, 4/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  18. Localization and Mapping Using a Non-Central Catadioptric Camera System

    NASA Astrophysics Data System (ADS)

    Khurana, M.; Armenakis, C.

    2018-05-01

    This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find use in the navigation and mapping of robotic platforms owing to their wide field of view. A wider, potentially 360°, field of view allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low-cost system consisting of a mirror and a camera; any perspective camera can be used. A platform was constructed to combine the mirror and the camera into a catadioptric system. A calibration method was developed to obtain the relative position and orientation between the two components so that they can be considered one monolithic system. The mathematical model for localizing the system was derived from conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achieved localization and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.
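
    The reflective condition such a localization model builds on can be sketched in a few lines (the spherical mirror and all names here are our illustrative assumptions, not the authors' exact model): a camera ray with direction d hitting the mirror at a point with unit normal n continues along d - 2(d·n)n:

        import numpy as np

        def reflect(d, n):
            # Mirror a ray direction d about the unit surface normal n.
            d = d / np.linalg.norm(d)
            n = n / np.linalg.norm(n)
            return d - 2.0 * np.dot(d, n) * n

        # Spherical mirror of radius R centred at c; the camera sits at the origin.
        c, R = np.array([0.0, 0.0, 0.5]), 0.1
        p = c + R * np.array([0.0, np.sin(0.3), -np.cos(0.3)])  # point on the mirror
        d_in = p / np.linalg.norm(p)      # camera ray towards that point
        n = (p - c) / R                   # outward surface normal
        print("outgoing ray direction:", reflect(d_in, n))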

  19. Split-probe hybrid femtosecond/picosecond rotational CARS for time-domain measurement of S-branch Raman linewidths within a single laser shot.

    PubMed

    Patterson, Brian D; Gao, Yi; Seeger, Thomas; Kliewer, Christopher J

    2013-11-15

    We introduce a multiplex technique for the single-laser-shot determination of S-branch Raman linewidths with high accuracy and precision by implementing hybrid femtosecond (fs)/picosecond (ps) rotational coherent anti-Stokes Raman spectroscopy (CARS) with multiple spatially and temporally separated probe beams derived from a single laser pulse. The probe beams scatter from the rotational coherence driven by the fs pump and Stokes pulses at four different probe pulse delay times spanning 360 ps, thereby mapping collisional coherence dephasing in time for the populated rotational levels. The probe beams scatter at different folded BOXCARS angles, yielding spatially separated CARS signals which are collected simultaneously on a charge-coupled device (CCD) camera. The technique yields a single-shot standard deviation (1σ) of less than 3.5% in the determination of Raman linewidths, and the average linewidth values obtained for N2 are within 1% of those previously reported. The presented technique opens the possibility of correcting CARS spectra for time-varying collisional environments in operando.
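
    The time-domain recovery of a linewidth from a handful of probe delays can be sketched as follows (our own illustration, not the authors' code; we assume the common conventions that the CARS intensity of a coherence decays as exp(-2*Gamma*t) and that the linewidth in cm^-1 is Gamma/(2*pi*c)):

        import numpy as np

        C_CM_PER_S = 2.99792458e10  # speed of light in cm/s

        def linewidth_from_delays(delays_s, intensities):
            # Least-squares fit of ln I(t) = ln I0 - 2*Gamma*t.
            slope, _ = np.polyfit(delays_s, np.log(intensities), 1)
            gamma = -slope / 2.0                       # dephasing rate, 1/s
            return gamma / (2.0 * np.pi * C_CM_PER_S)  # linewidth, cm^-1

        # Synthetic check: a 0.05 cm^-1 linewidth sampled at four delays over 360 ps.
        gamma_true = 0.05 * 2.0 * np.pi * C_CM_PER_S
        t = np.array([30e-12, 140e-12, 250e-12, 360e-12])
        I = np.exp(-2.0 * gamma_true * t)
        print(f"recovered linewidth: {linewidth_from_delays(t, I):.4f} cm^-1")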

  20. Tracking people and cars using 3D modeling and CCTV.

    PubMed

    Edelman, Gerda; Bijhold, Jurrien

    2010-10-10

    The aim of this study was to find a method for the reconstruction of movements of people and cars using CCTV footage and a 3D model of the environment. A procedure is proposed in which video streams are synchronized and displayed in a 3D model by using virtual cameras. People and cars are represented by cylinders and boxes, which are moved in the 3D model according to their movements as shown in the video streams. The procedure was developed and tested in an experimental setup with test persons who logged their GPS coordinates as a recording of the ground truth. Results showed that it is possible to implement this procedure and to reconstruct movements of people and cars from video recordings. The procedure was also applied to a forensic case. In this work we found that the 3D model created greater situational awareness, which made it easier to track people across multiple video streams. Based on the experiences from the experimental setup and the case, recommendations are formulated for use in practice.

  1. Photogrammetry System and Method for Determining Relative Motion Between Two Bodies

    NASA Technical Reports Server (NTRS)

    Miller, Samuel A. (Inventor); Severance, Kurt (Inventor)

    2014-01-01

    A photogrammetry system and method provide for determining the relative position between two objects. The system utilizes one or more imaging devices, such as high speed cameras, that are mounted on a first body, and three or more photogrammetry targets of a known location on a second body. The system and method can be utilized with cameras having fish-eye, hyperbolic, omnidirectional, or other lenses. The system and method do not require overlapping fields-of-view if two or more cameras are utilized. The system and method derive relative orientation by equally weighting information from an arbitrary number of heterogeneous cameras, all with non-overlapping fields-of-view. Furthermore, the system can make the measurements with arbitrary wide-angle lenses on the cameras.
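
    The core measurement the patent describes, recovering the pose of a body carrying targets of known location from one camera image, can be sketched with a standard perspective-n-point solver (a hedged stand-in: we use OpenCV's solvePnP with an ideal pinhole model and made-up coordinates, whereas the patent's own formulation also covers fish-eye and omnidirectional lenses):

        import numpy as np
        import cv2

        # Four coplanar targets at known positions on the second body (metres).
        object_points = np.array([[0, 0, 0], [0.5, 0, 0],
                                  [0.5, 0.5, 0], [0, 0.5, 0]], dtype=np.float64)
        # Their observed pixel locations in a camera on the first body (made up).
        image_points = np.array([[320, 240], [420, 250],
                                 [415, 150], [315, 145]], dtype=np.float64)
        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

        ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
        if ok:
            R, _ = cv2.Rodrigues(rvec)  # rotation of the second body in the camera frame
            print("relative rotation:\n", R)
            print("relative translation:", tvec.ravel())

    With two or more such cameras rigidly mounted on the first body, each camera's estimate can be fused, which is where the equal weighting of heterogeneous cameras described above would enter.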

  2. Human-Robot Interaction

    NASA Technical Reports Server (NTRS)

    Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee

    2015-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affects the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera, causing a keyhole effect. The keyhole effect reduces situation awareness, which may manifest in navigation issues such as a higher number of collisions, missed critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is to add multiple cameras and include the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects of including or excluding the robot chassis, along with superimposing a simple arrow overlay onto the video feed, on operator task performance during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only and a combined (egocentric plus exocentric camera) view. Camera view parameters found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.

  3. Multi-view video segmentation and tracking for video surveillance

    NASA Astrophysics Data System (ADS)

    Mohammadi, Gelareh; Dufaux, Frederic; Minh, Thien Ha; Ebrahimi, Touradj

    2009-05-01

    Tracking moving objects is a critical step for smart video surveillance systems. Despite the complexity increase, multiple camera systems exhibit the undoubted advantages of covering wide areas and handling the occurrence of occlusions by exploiting the different viewpoints. The technical problems in multiple camera systems are several: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple uncalibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. Basically, the proposed technique consists of the following steps. We first perform a single-view tracking algorithm on each camera view, and then apply a consistent object labeling algorithm on all views. In the next step, we verify the objects in each view separately for inconsistencies. Corresponding objects are extracted through a homography transform from one view to the other and vice versa. Having found the corresponding objects of different views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and mapped frame) a set of descriptors is extracted to find the best match between the two views based on region descriptor similarity. This method is able to deal with multiple objects. Track management issues such as occlusion, appearance, and disappearance of objects are resolved using information from all views. The method is capable of tracking rigid and deformable objects, and this versatility makes it suitable for different application scenarios.
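
    The homography transfer step described above can be sketched in a few lines (toy data; a production pipeline would estimate the homography from tracked feature matches rather than hand-picked points):

        import numpy as np
        import cv2

        # Matched ground-plane points seen in two overlapping camera views.
        pts_view1 = np.array([[100, 200], [400, 210], [380, 420], [120, 400]], np.float32)
        pts_view2 = np.array([[140, 180], [430, 200], [400, 430], [150, 390]], np.float32)

        H, _ = cv2.findHomography(pts_view1, pts_view2, cv2.RANSAC)

        # Map a tracked object's centroid from view 1 into view 2.
        centroid = np.array([[[250.0, 300.0]]], np.float32)
        mapped = cv2.perspectiveTransform(centroid, H)
        print("centroid in view 2:", mapped.ravel())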

  4. A view of the ET camera on STS-112

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. - A view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  5. A view of the ET camera on STS-112

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. - A closeup view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  6. 1. CONTEXTUAL VIEW OF WASTE CALCINING FACILITY. CAMERA FACING NORTHEAST. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. CONTEXTUAL VIEW OF WASTE CALCINING FACILITY. CAMERA FACING NORTHEAST. ON RIGHT OF VIEW IS PART OF EARTH/GRAVEL SHIELDING FOR BIN SET. AERIAL STRUCTURE MOUNTED ON POLES IS PNEUMATIC TRANSFER SYSTEM FOR DELIVERY OF SAMPLES BEING SENT FROM NEW WASTE CALCINING FACILITY TO THE CPP REMOTE ANALYTICAL LABORATORY. INEEL PROOF NUMBER HD-17-1. - Idaho National Engineering Laboratory, Old Waste Calcining Facility, Scoville, Butte County, ID

  7. STREAM PROCESSING ALGORITHMS FOR DYNAMIC 3D SCENE ANALYSIS

    DTIC Science & Technology

    2018-02-15

    Ground truth creation based on marked building feature points in two different views 50 frames apart in...between just two views, each row in the current figure represents a similar assessment however between one camera and all other cameras within the dataset...BA4S. While Fig. 44 depicted the epipolar lines for the point correspondences between just two views, the current figure represents a similar

  8. Optimal design and critical analysis of a high resolution video plenoptic demonstrator

    NASA Astrophysics Data System (ADS)

    Drazic, Valter; Sacré, Jean-Jacques; Bertrand, Jérôme; Schubert, Arno; Blondé, Etienne

    2011-03-01

    A plenoptic camera is a natural multi-view acquisition device also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single-lens, single-sensor architecture has two downsides: limited resolution and limited depth sensitivity. As a first step, in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered five video views of 820x410 pixels. The main limitation in our prototype is view crosstalk due to optical aberrations, which reduces the depth-accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern, and analysis programs that investigate the view mapping and the amount of parallax crosstalk on the sensor on a pixel basis. The results of these developments enabled us to adjust the lenslet array with sub-micrometer precision and to mark the pixels of the sensor where the views do not register properly.

  9. Two Titans

    NASA Image and Video Library

    2017-08-11

    These two views of Saturn's moon Titan exemplify how NASA's Cassini spacecraft has revealed the surface of this fascinating world. Cassini carried several instruments to pierce the veil of hydrocarbon haze that enshrouds Titan. The mission's imaging cameras also have several spectral filters sensitive to specific wavelengths of infrared light that are able to make it through the haze to the surface and back into space. These "spectral windows" have enabled the imaging cameras to map nearly the entire surface of Titan. In addition to Titan's surface, images from both the imaging cameras and the Visible and Infrared Mapping Spectrometer (VIMS) have provided windows into the moon's ever-changing atmosphere, chronicling the appearance and movement of hazes and clouds over the years. A large, bright and feathery band of summer clouds can be seen arcing across high northern latitudes in the view at right. These views were obtained with the Cassini spacecraft narrow-angle camera on March 21, 2017. Images taken using red, green and blue spectral filters were combined to create the natural-color view at left. The false-color view at right was made by substituting an infrared image (centered at 938 nanometers) for the red color channel. The views were acquired at a distance of approximately 613,000 miles (986,000 kilometers) from Titan. Image scale is about 4 miles (6 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21624

  10. LOFT. Interior view of entry (TAN-624) rollup door. Camera is ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LOFT. Interior view of entry (TAN-624) rollup door. Camera is inside entry building facing south. Rollup door was a modification of the original ANP door arrangement. Date: March 2004. INEEL negative no. HD-39-5-2 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  11. Response, Emergency Staging, Communications, Uniform Management, and Evacuation (R.E.S.C.U.M.E.) : report on functional and performance requirements, and high-level data and communication needs.

    DOT National Transportation Integrated Search

    1995-06-01

    Intelligent Vehicle Initiative (IVI) abstract: The goal of the TravTek camera car study was to furnish a detailed evaluation of driving and navigation performance, system usability, and safety for the TravTek system. To achieve this goal, an instrume...

  12. Crewmember in the middeck beside the Commercial Generic Bioprocessing exp.

    NASA Image and Video Library

    1993-01-19

    STS054-07-003 (13-19 Jan 1993) --- Astronaut John H. Casper, mission commander, floats near the Commercial Generic Bioprocessing Apparatus (CGBA) station on Endeavour's middeck. A friction car and its accompanying loop -- part of the Toys in Space package onboard -- can be seen just above Casper's head. The photograph was taken with a 35mm camera.

  13. 11. CALIFORNIA-TYPE DEPRESSION BEAM: Photocopy of photograph showing a California-type ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. CALIFORNIA-TYPE DEPRESSION BEAM: Photocopy of photograph showing a California-type depression beam positioned in its yokes. A car would approach the beam moving towards the camera. Note the open access cover, pulleys, counterweight hatchcover, and the wooden construction of the beam. - San Francisco Cable Railway, Washington & Mason Streets, San Francisco, San Francisco County, CA

  14. 21. ORE DOCK, LOOKING SOUTHWEST. THIS VIEW SHOWS THE WEST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    21. ORE DOCK, LOOKING SOUTHWEST. THIS VIEW SHOWS THE WEST END OF THE DOCK. EMPTY CARS ARE MOVED IN FROM THE WEST BY 'SHUNT CARS,' PUT INTO PLACE AS NEEDED BENEATH THE HULETTS, FILLED, THEN SHUNTED TO THE EAST END OF THE YARD WHERE THEY ARE MADE UP INTO TRAINS. THE POWER HOUSE (WITH TALL ARCHED WINDOWS) AND THE TWO-STORY DOCK OFFICE CAN BE SEEN HERE. - Pennsylvania Railway Ore Dock, Lake Erie at Whiskey Island, approximately 1.5 miles west of Public Square, Cleveland, Cuyahoga County, OH

  15. Movable Cameras And Monitors For Viewing Telemanipulator

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Venema, Steven C.

    1993-01-01

    Three methods are proposed to assist an operator viewing a telemanipulator on a video monitor in a control station when the video image is generated by a movable video camera in the remote workspace of the telemanipulator. Monitors are rotated or shifted, and/or the images in them transformed, to adjust the coordinate systems of the scenes visible to the operator according to the motions of the cameras and/or the operator's preferences. This reduces the operator's workload and probability of error by obviating the need for mental transformations of coordinates during operation. The methods apply in outer space, undersea, in the nuclear industry, in surgery, in entertainment, and in manufacturing.

  16. Crisis in Amateur Sports Organizations Viewed by Change Agent Research (CAR).

    ERIC Educational Resources Information Center

    Guilmette, Ann Marie; Moriarty, Dick

    The Sports Institute for Research Through Change Agent Research (SIR/CAR) provides a service whereby organizations, through an audit and feedback system, prognosticate and identify problems in order to avoid situations discordant with their organizational goals and objectives. This document reports the organizational crisis that faced the Windsor…

  17. Evaluation of electric vehicles as an alternative for work-trip and limited business commutes final report

    DOT National Transportation Integrated Search

    1999-12-01

    This report presents the results of a four-year evaluation of an electric subcompact car. The principal finding was that the 1995-model electric car must be viewed in two contexts, the body/chassis/drive train and the battery/recharge system. Firstly...

  18. Development of a camera casing suited for cryogenic and vacuum applications

    NASA Astrophysics Data System (ADS)

    Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.

    2013-12-01

    We report on the design, construction, and operation of a PID-temperature-controlled, vacuum-tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components is discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera can provide a live view inside cryogenic set-ups and allows video to be recorded.
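
    A minimal positional PID loop of the kind the abstract mentions might look as follows (the gains, set point, and heater interface are illustrative assumptions, not values from the paper):

        class PID:
            def __init__(self, kp, ki, kd, setpoint):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.setpoint = setpoint
                self.integral = 0.0
                self.prev_error = None

            def update(self, measured, dt):
                # Standard positional form: P + I + D on the set-point error.
                error = self.setpoint - measured
                self.integral += error * dt
                deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * deriv

        pid = PID(kp=2.0, ki=0.1, kd=0.5, setpoint=20.0)  # hold the casing at +20 C
        power = pid.update(measured=18.4, dt=1.0)          # heater drive, arbitrary units
        print(f"heater output: {power:.2f}")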

  19. Evaluation of Moving Object Detection Based on Various Input Noise Using Fixed Camera

    NASA Astrophysics Data System (ADS)

    Kiaee, N.; Hashemizadeh, E.; Zarrinpanjeh, N.

    2017-09-01

    Detecting and tracking objects in video has been a research area of interest in the fields of image processing and computer vision. This paper evaluates the performance of a novel object detection algorithm for video sequences, comparing the percentages of correct and wrong detections to clarify the practical advantage of the method in use. The method was evaluated on data collected in the field of urban transport, including cars and pedestrians, in a fixed-camera situation. The results show that the accuracy of the algorithm decreases as image resolution is reduced.
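
    As a sketch of the kind of fixed-camera moving-object detector such an evaluation targets (MOG2 background subtraction is our stand-in, since the abstract does not name the algorithm; "traffic.mp4" and the thresholds are placeholders):

        import cv2

        cap = cv2.VideoCapture("traffic.mp4")
        subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = subtractor.apply(frame)
            # Keep confident foreground only and drop tiny blobs (noise).
            _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            detections = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 400]
            # `detections` would be scored against ground truth to obtain the
            # correct/wrong detection percentages used in the evaluation above.
        cap.release()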

  20. Stereo matching and view interpolation based on image domain triangulation.

    PubMed

    Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

    2013-09-01

    This paper presents a new approach to the stereo matching and view interpolation problems based on triangular tessellations, suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities, generating a full 3D mesh for each camera (view); these meshes are used to generate newly synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes, which require some kind of post-processing procedure to fill holes.

  1. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks.

    PubMed

    Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue

    2017-06-06

    Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition on the sensor positions for full-view neighborhood coverage with the minimum number of nodes around a point. Next, we prove that full-view area coverage can be approximately guaranteed, as long as the regular hexagons determined by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks under two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) for the deterministic implementation. To reduce the redundancy in random deployment, we propose a local neighboring-optimal selection algorithm (LNSA) for achieving full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions.

  2. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks

    PubMed Central

    Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue

    2017-01-01

    Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition on the sensor positions for full-view neighborhood coverage with the minimum number of nodes around a point. Next, we prove that full-view area coverage can be approximately guaranteed, as long as the regular hexagons determined by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks under two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) for the deterministic implementation. To reduce the redundancy in random deployment, we propose a local neighboring-optimal selection algorithm (LNSA) for achieving full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions. PMID:28587304
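
    The full-view condition at a single point can be made concrete with a toy check (our simplification of the model above: for every facing direction f of an object at p, some in-range camera must lie within the effective angle theta of f; the cameras' own fields of view are ignored here):

        import numpy as np

        def full_view_covered(p, cameras, sensing_range, theta, n_dirs=360):
            p = np.asarray(p, float)
            in_range = [np.asarray(c, float) for c in cameras
                        if np.linalg.norm(np.asarray(c, float) - p) <= sensing_range]
            for k in range(n_dirs):
                a = 2.0 * np.pi * k / n_dirs
                f = np.array([np.cos(a), np.sin(a)])      # candidate facing direction
                seen = False
                for c in in_range:
                    v = (c - p) / np.linalg.norm(c - p)   # direction towards the camera
                    if np.arccos(np.clip(np.dot(f, v), -1.0, 1.0)) <= theta:
                        seen = True
                        break
                if not seen:
                    return False
            return True

        # Six cameras ringed around the point, effective angle 60 degrees: covered.
        ring = [(np.cos(a), np.sin(a)) for a in np.linspace(0, 2 * np.pi, 6, endpoint=False)]
        print(full_view_covered((0, 0), ring, sensing_range=2.0, theta=np.pi / 3))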

  3. 50. INTERIOR VIEW TO SOUTH AT FIRST FLOOR MACHINE SHOP: ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    50. INTERIOR VIEW TO SOUTH AT FIRST FLOOR MACHINE SHOP: Interior view towards south of the machine shop located on the first floor of the powerhouse and car barn. Compare this view with CA-12-74, taken seventeen years earlier. - San Francisco Cable Railway, Washington & Mason Streets, San Francisco, San Francisco County, CA

  4. IMAX camera in payload bay

    NASA Image and Video Library

    1995-12-20

    STS074-361-035 (12-20 Nov 1995) --- This medium close-up view centers on the IMAX Cargo Bay Camera (ICBC) and its associated IMAX Camera Container Equipment (ICCE) at its position in the cargo bay of the Earth-orbiting Space Shuttle Atlantis. With its own 'space suit,' or protective covering, to protect it from the rigors of space, this version of the IMAX was able to record scenes not accessible with the in-cabin cameras. For docking and undocking activities involving Russia's Mir Space Station and the Space Shuttle Atlantis, the camera joined a variety of in-cabin camera hardware in recording the historical events. IMAX's secondary objectives were to film Earth views. The IMAX project is a collaboration between NASA, the Smithsonian Institution's National Air and Space Museum (NASM), IMAX Systems Corporation, and the Lockheed Corporation to document significant space activities and promote NASA's educational goals using the IMAX film medium.

  5. REACTOR SERVICE BUILDING, TRA-635, CONTEXTUAL VIEW DURING CONSTRUCTION. CAMERA IS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    REACTOR SERVICE BUILDING, TRA-635, CONTEXTUAL VIEW DURING CONSTRUCTION. CAMERA IS ATOP MTR BUILDING AND LOOKING SOUTHERLY. FOUNDATION AND DRAINS ARE UNDER CONSTRUCTION. THE BUILDING WILL BUTT AGAINST CHARGING FACE OF PLUG STORAGE BUILDING. HOT CELL BUILDING, TRA-632, IS UNDER CONSTRUCTION AT TOP CENTER OF VIEW. INL NEGATIVE NO. 8518. Unknown Photographer, 8/25/1953 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  6. Multi-Angle Snowflake Camera Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stuefer, Martin; Bailey, J.

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera's field of view is aligned to a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  7. PBF Cooling Tower contextual view. Camera facing southwest. West wing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    PBF Cooling Tower contextual view. Camera facing southwest. West wing and north facade (rear) of Reactor Building (PER-620) is at left; Cooling Tower to right. Photographer: Kirsh. Date: November 2, 1970. INEEL negative no. 70-4913 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID

  8. PBF Reactor Building (PER620). Aerial view of early construction. Camera ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    PBF Reactor Building (PER-620). Aerial view of early construction. Camera facing northwest. Excavation and concrete placement in two basements are underway. Note exposed lava rock. Photographer: Farmer. Date: March 22, 1965. INEEL negative no. 65-2219 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID

  9. 26. VIEW OF METAL SHED OVER SHIELDING TANK WITH CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    26. VIEW OF METAL SHED OVER SHIELDING TANK WITH CAMERA FACING SOUTHWEST. SHOWS OPEN SIDE OF SHED ROOF, HERCULON SHEET, AND HAND-OPERATED CRANE. INEL PHOTO NUMBER 83-476-2-9, TAKEN IN 1983. PHOTOGRAPHER NOT NAMED. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  10. 29. INTERIOR VIEW OF FLUX HANDLING BUILDING, LOOKING SOUTH AT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    29. INTERIOR VIEW OF FLUX HANDLING BUILDING, LOOKING SOUTH AT THE VIBRATING CAR SHAKER. - U.S. Steel Duquesne Works, Basic Oxygen Steelmaking Plant, Along Monongahela River, Duquesne, Allegheny County, PA

  11. A Fast Visible Camera Divertor-Imaging Diagnostic on DIII-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roquemore, A; Maingi, R; Lasnier, C

    2007-06-19

    In recent campaigns, the Photron Ultima SE fast framing camera has proven to be a powerful diagnostic when applied to imaging divertor phenomena on the National Spherical Torus Experiment (NSTX). Active areas of NSTX divertor research addressed with the fast camera include identification of types of Edge Localized Modes (ELMs) [1], dust migration, impurity behavior, and a number of phenomena related to turbulence. To compare such edge and divertor phenomena in low and high aspect ratio plasmas, a multi-institutional collaboration was developed for fast visible imaging on NSTX and DIII-D. More specifically, the collaboration was proposed to compare the NSTX small type V ELM regime [2] and the residual ELMs observed during Type I ELM suppression with external magnetic perturbations on DIII-D [3]. As part of the collaboration effort, the Photron camera was recently installed on DIII-D with a tangential view similar to the view implemented on NSTX, enabling a direct comparison between the two machines. The rapid implementation was facilitated by utilization of the existing optics that coupled the visible spectral output from the divertor vacuum ultraviolet UVTV system, which has a view similar to the view developed for the divertor tangential TV camera [4]. A remote-controlled filter wheel was implemented, as was the radiation shield required for the DIII-D installation. The installation and initial operation of the camera are described in this paper, and the first images from the DIII-D divertor are presented.

  12. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    PubMed Central

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-01-01

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor during radiometric response calibration, to eliminate the influence of the focusing effect of uniform light from an integrating sphere. The linearity range of the radiometric response, the non-linearity response characteristics, the sensitivity, and the dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and used to blend images so that panoramas reflect the objective luminance more faithfully; this compensates for the limitation of stitching methods that make images look realistic only through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
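
    The radiometric step above can be sketched with the usual dark-frame and flat-field model (our assumed formulation, not the paper's exact procedure; the per-pixel gain is measured from a uniformly lit target and divided out before blending):

        import numpy as np

        def calibrate_gain(flat_field, dark_frame):
            # Per-pixel gain map, normalised to the mean response.
            signal = flat_field.astype(np.float64) - dark_frame
            return signal / signal.mean()

        def correct(raw, gain, dark_frame):
            # Recover relative scene luminance from a raw frame.
            return np.clip((raw.astype(np.float64) - dark_frame) / gain, 0, None)

        # Synthetic check: a radially vignetted sensor viewing uniform scenes.
        yy, xx = np.mgrid[0:480, 0:640]
        r2 = (yy - 240) ** 2 + (xx - 320) ** 2
        vignette = 1.0 - 0.4 * r2 / r2.max()
        dark = np.full((480, 640), 2.0)
        gain = calibrate_gain(dark + 100.0 * vignette, dark)
        corrected = correct(dark + 80.0 * vignette, gain, dark)
        print(corrected.std())  # ~0: vignetting removed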

  13. The Surgeon's View: Comparison of Two Digital Video Recording Systems in Veterinary Surgery.

    PubMed

    Giusto, Gessica; Caramello, Vittorio; Comino, Francesco; Gandini, Marco

    2015-01-01

    Video recording and photography during surgical procedures are useful in veterinary medicine for several reasons, including legal, educational, and archival purposes. Many systems are available, such as hand cameras, light-mounted cameras, and head cameras. We chose a reasonably priced head camera that is among the smallest video cameras available. To best describe its possible uses and advantages, we recorded video and images of eight different surgical cases and procedures, both in hospital and field settings. All procedures were recorded both with a head-mounted camera and a commercial hand-held photo camera. Then sixteen volunteers (eight senior clinicians and eight final-year students) completed an evaluation questionnaire. Both cameras produced high-quality photographs and videos, but observers rated the head camera significantly better regarding point of view and their understanding of the surgical operation. The head camera was considered significantly more useful in teaching surgical procedures. Interestingly, senior clinicians tended to assign generally lower scores compared to students. The head camera we tested is an effective, easy-to-use tool for recording surgeries and various veterinary procedures in all situations, with no need for assistance from a dedicated operator. It can be a valuable aid for veterinarians working in all fields of the profession and a useful tool for veterinary surgical education.

  14. Low-cost mobile phone microscopy with a reversed mobile phone camera lens.

    PubMed

    Switz, Neil A; D'Ambrosio, Michael V; Fletcher, Daniel A

    2014-01-01

    The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples.

  15. Low-Cost Mobile Phone Microscopy with a Reversed Mobile Phone Camera Lens

    PubMed Central

    Fletcher, Daniel A.

    2014-01-01

    The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples. PMID:24854188

  16. CARS measurement of vibrational and rotational temperature with high power laser and high speed visualization of total radiation behind hypervelocity shock waves of 5-7km/s

    NASA Astrophysics Data System (ADS)

    Sakurai, Kotaro; Bindu, Venigalla Hima; Niinomi, Shota; Ota, Masanori; Maeno, Kazuo

    2010-09-01

    The Coherent Anti-Stokes Raman Spectroscopy (CARS) method is commonly used for probing molecular structure and state. In aerospace technology, this method is applied to measuring temperature in thermal fluids over relatively long time durations of milliseconds or sub-milliseconds. Vibrational and rotational temperatures behind a hypervelocity shock wave, on the other hand, are important for heat-shield design in the reentry phase of flight. The non-equilibrium flow, with radiative heating from the strongly shocked air ahead of the vehicle, plays an important role in the heat flux to the wall surface structure, alongside convective heating. In this paper the CARS method is applied to measure the vibrational/rotational temperature of N2 behind a hypervelocity shock wave. The strong shock wave in front of a reentering space vehicle can be experimentally realized with a free-piston, double-diaphragm shock tube and a low-density test gas. CARS measurement is difficult in our experiment, however: it requires a nanosecond-order pulse and a high-power laser, because the object of measurement is a momentary phenomenon with a velocity of 7 km/s, the observation section contains low-density test gas, and there is strong background light behind the shock wave. We therefore employ a high-power (on the order of 1 J/pulse), very short pulse (10 ns) laser for the CARS method. With this laser, the CARS signal can be acquired even in the strongly radiating region. We also use a CCD camera simultaneously to record the total radiation alongside the CARS measurement.

  17. 26. EAST FRONT AND SOUTH SIDE OF F&CH RWY POWERHOUSE: ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    26. EAST FRONT AND SOUTH SIDE OF F&CH RWY POWERHOUSE: Photocopy of a recently discovered c. 1904 photograph showing south side and east front of powerhouse and car barn. View is looking north along Mason Street. Cars exited the building and passed onto the mainline through the large doorway just to the right of the smokestack. Note the cable car descending Washington Street past the building. - San Francisco Cable Railway, Washington & Mason Streets, San Francisco, San Francisco County, CA

  18. Listening to "Car Talk" and Learning Liberal Arts

    ERIC Educational Resources Information Center

    Goldstein, Warren

    2004-01-01

    In this article, Warren Goldstein, himself a college professor, addresses a pair of radio talk-show hosts, Tom and Ray, and their views on current topics such as environmental issues, cars, relationships between men and women, and specifically their criticisms of liberal arts programs. Tom and Ray have been critical of liberal arts because they…

  19. Hubble Space Telescope photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-04

    S61-E-008 (4 Dec 1993) --- This view of the Earth-orbiting Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC) and downlinked to ground controllers soon afterward. This view was taken during rendezvous operations. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope. Over a period of five days, four of the crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  20. Occlusion handling framework for tracking in smart camera networks by per-target assistance task assignment

    NASA Astrophysics Data System (ADS)

    Bo, Nyan Bo; Deboeverie, Francis; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Occlusion is one of the most difficult challenges in the area of visual tracking. We propose an occlusion handling framework to improve the performance of local tracking in a smart camera's view in a multicamera network. We formulate an extensible energy function to quantify the quality of a camera's observation of a particular target by taking into account both person-person and object-person occlusion. Using this energy function, a smart camera assesses the quality of its observations of all targets being tracked. When it cannot adequately observe a target, a smart camera estimates the quality of observation of the target from the viewpoints of other assisting cameras. If a camera with a better observation of the target is found, the tracking task for the target is carried out with the assistance of that camera. In our framework, only the positions of persons being tracked are exchanged between smart cameras, so the communication bandwidth requirement is very low. Performance evaluation of our method on challenging video sequences with frequent and severe occlusions shows that the accuracy of a baseline tracker is considerably improved. We also report a performance comparison with state-of-the-art trackers, which our method outperforms.
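
    A toy version of such a per-target observation-quality score (our simplification of the paper's energy function, using bounding boxes only) is the fraction of a target's box left visible after subtracting the boxes of targets nearer to the camera:

        def overlap(a, b):
            # Intersection area of two (x1, y1, x2, y2) boxes.
            w = max(0, min(a[2], b[2]) - max(a[0], b[0]))
            h = max(0, min(a[3], b[3]) - max(a[1], b[1]))
            return w * h

        def visibility(target_box, occluder_boxes):
            area = (target_box[2] - target_box[0]) * (target_box[3] - target_box[1])
            occluded = sum(overlap(target_box, b) for b in occluder_boxes)
            return max(0.0, 1.0 - occluded / area)  # crude: overlaps may double-count

        # A camera would request assistance when visibility falls below a threshold.
        print(f"visibility: {visibility((100, 50, 160, 230), [(140, 60, 210, 240)]):.2f}")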

  1. Recent advances in multiview distributed video coding

    NASA Astrophysics Data System (ADS)

    Dufaux, Frederic; Ouaret, Mourad; Ebrahimi, Touradj

    2007-04-01

    We consider dense networks of surveillance cameras capturing overlapped images of the same scene from different viewing directions, such a scenario being referred to as multi-view. Data compression is paramount in such a system due to the large amount of captured data. In this paper, we propose a Multi-view Distributed Video Coding approach. It allows for low complexity / low power consumption at the encoder side, and the exploitation of inter-view correlation without communications among the cameras. We introduce a combination of temporal intra-view side information and homography inter-view side information. Simulation results show both the improvement of the side information, as well as a significant gain in terms of coding efficiency.

  2. Calibration Target for Curiosity Arm Camera

    NASA Image and Video Library

    2012-09-10

    This view of the calibration target for the MAHLI camera aboard NASA's Mars rover Curiosity combines two images taken by that camera on Sept. 9, 2012. Part of Curiosity's left-front and center wheels and a patch of Martian ground are also visible.

  3. Do advertisements at the roadside distract the driver?

    NASA Astrophysics Data System (ADS)

    Kettwich, Carmen; Klinger, Karsten; Lemmer, Uli

    2008-04-01

    Nowadays drivers have to cope with an increasingly complex visual environment. More and more cars are on the road, and there are distractions not only within the vehicle, like the radio and navigation system; the environment outside the car has also become more and more complex. Hoardings, advertising pillars, shop fronts, and video screens are just a few examples. For this reason the potential risk of driver distraction is rising. But in what way do advertisements at the roadside influence the driver's attention? The investigation described here is devoted to this topic. Various kinds of advertisements played an important role, such as illuminated and non-illuminated posters as well as illuminated animated ads. Several test runs in an urban environment were performed. The driver's gaze direction was measured with an eye-tracking system, consisting of three cameras which logged the eye movements during the test run and a small scene camera recording the traffic scene. Sixteen subjects (six female and ten male) between 21 and 65 years of age took part in this experiment. Thus the driver's fixation durations on the different advertisements could be determined.

  4. Correction And Use Of Jitter In Television Images

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.

    1989-01-01

    The proposed system stabilizes a jittering television image and/or measures the jitter to extract information on the motions of objects in the image. In an alternative version, the system controls lateral motion of the camera to generate stereoscopic views for measuring distances to objects. In another version, the motion of the camera is controlled to keep an object in view. The heart of the system is a digital image-data processor called the "jitter-miser", which includes a frame buffer and logic circuits to correct for jitter in the image. Signals from motion sensors on the camera are sent to the logic circuits and processed into corrections for motion along and across the line of sight.
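
    A modern stand-in for the jitter-miser idea (ours, not the patent's circuitry) estimates each frame's translation against a reference by phase correlation and shifts the frame back; the measured shifts themselves are the jitter signal the abstract proposes to exploit:

        import numpy as np
        import cv2

        def stabilise(reference, frame):
            ref = np.float32(cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY))
            cur = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
            (dx, dy), _ = cv2.phaseCorrelate(ref, cur)  # sub-pixel jitter estimate
            # Shift the frame back by the estimated displacement to re-align it.
            M = np.float32([[1, 0, -dx], [0, 1, -dy]])
            h, w = frame.shape[:2]
            return cv2.warpAffine(frame, M, (w, h)), (dx, dy)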

  5. Imaging experiment: The Viking Lander

    USGS Publications Warehouse

    Mutch, T.A.; Binder, A.B.; Huck, F.O.; Levinthal, E.C.; Morris, E.C.; Sagan, C.; Young, A.T.

    1972-01-01

    The Viking Lander Imaging System will consist of two identical facsimile cameras. Each camera has a high-resolution mode with an instantaneous field of view of 0.04°, and survey and color modes with instantaneous fields of view of 0.12°. Cameras are positioned one meter apart to provide stereoscopic coverage of the near-field. The Imaging Experiment will provide important information about the morphology, composition, and origin of the Martian surface and atmospheric features. In addition, lander pictures will provide supporting information for other experiments in biology, organic chemistry, meteorology, and physical properties. © 1972.

  6. Exploring eye movements in patients with glaucoma when viewing a driving scene.

    PubMed

    Crabb, David P; Smith, Nicholas D; Rauscher, Franziska G; Chisholm, Catharine M; Barbur, John L; Edgar, David F; Garway-Heath, David F

    2010-03-16

    Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patient's actual function, or establish if patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients when viewing driving scenes in a hazard perception test (HPT). The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Computer software was purpose written to pre-process the data, co-register it to the film clips and to quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics to controls making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%). Whilst the average region of 'point-of-regard' of the patients did not differ significantly from the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect. Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from age-matched controls when viewing a traffic scene. Further studies of eye movements made by glaucomatous patients could provide useful information about the definition of the visual field component required for fitness to drive.

  7. Exploring Eye Movements in Patients with Glaucoma When Viewing a Driving Scene

    PubMed Central

    Crabb, David P.; Smith, Nicholas D.; Rauscher, Franziska G.; Chisholm, Catharine M.; Barbur, John L.; Edgar, David F.; Garway-Heath, David F.

    2010-01-01

    Background: Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patient's actual function, or establish if patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients when viewing driving scenes in a hazard perception test (HPT). Methodology/Principal Findings: The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Computer software was purpose written to pre-process the data, co-register it to the film clips and to quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics to controls making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%). Whilst the average region of ‘point-of-regard’ of the patients did not differ significantly from the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect. Conclusions/Significance: Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from age-matched controls when viewing a traffic scene. Further studies of eye movements made by glaucomatous patients could provide useful information about the definition of the visual field component required for fitness to drive. PMID:20300522
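
    The bivariate contour ellipse analysis mentioned above can be computed, in its common static form (stated here as an assumption, since the abstract does not spell out its dynamic variant), as BCEA = 2*k*pi*sigma_x*sigma_y*sqrt(1 - rho^2), where P = 1 - exp(-k) is the proportion of gaze samples inside the ellipse:

        import numpy as np

        def bcea(gaze_x, gaze_y, proportion=0.682):
            k = -np.log(1.0 - proportion)
            sx, sy = np.std(gaze_x, ddof=1), np.std(gaze_y, ddof=1)
            rho = np.corrcoef(gaze_x, gaze_y)[0, 1]
            return 2.0 * k * np.pi * sx * sy * np.sqrt(1.0 - rho ** 2)

        rng = np.random.default_rng(1)
        x, y = rng.normal(0, 1.5, 500), rng.normal(0, 1.0, 500)  # gaze, degrees
        print(f"BCEA: {bcea(x, y):.2f} deg^2")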

  8. Validation of Viewing Reports: Exploration of a Photographic Method.

    ERIC Educational Resources Information Center

    Fletcher, James E.; Chen, Charles Chao-Ping

    A time-lapse camera loaded with Super 8 film was employed to photographically record the area in front of a conventional television receiver in selected homes. The camera took one picture each minute for three days, including in the same frame the face of the television receiver. Family members kept a conventional viewing diary of their viewing…

  9. Optimal design and critical analysis of a high-resolution video plenoptic demonstrator

    NASA Astrophysics Data System (ADS)

    Drazic, Valter; Sacré, Jean-Jacques; Schubert, Arno; Bertrand, Jérôme; Blondé, Etienne

    2012-01-01

    A plenoptic camera is a natural multiview acquisition device also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single-lens, single-sensor architecture has two downsides: limited resolution and limited depth sensitivity. As a first step, and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered five video views of 820 × 410 pixels. The main limitation in our prototype is view crosstalk due to optical aberrations that reduce the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern, together with analysis programs that investigated the view mapping and the amount of parallax crosstalk on the sensor on a per-pixel basis. The results of these developments enabled us to adjust the lenslet array with sub-micrometer precision and to mark the pixels of the sensor where the views do not register properly.

  10. Adjustable control station with movable monitors and cameras for viewing systems in robotics and teleoperations

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1994-01-01

    Real-time video presentations are provided in the field of operator-supervised automation and teleoperation, particularly in control stations having movable cameras for optimal viewing of a region of interest in robotics and teleoperations for performing different types of tasks. Movable monitors that match the corresponding camera orientations (pan, tilt, and roll) are provided in order to match the coordinate systems of all the monitors to the operator's internal coordinate system. Automated control of the arrangement of cameras and monitors, and of the configuration of system parameters, is provided for optimal viewing and performance of each type of task for each operator, since operators have different individual characteristics. The optimal viewing arrangement and system parameter configuration are determined and stored for each operator performing each of many types of tasks, in order to aid the automation of setting up optimal arrangements and configurations for successive tasks in real time. Factors in determining what is optimal include the operator's ability to use hand-controllers for each type of task. Robot joint locations, forces, and torques are used, as well as the operator's identity, to identify the current type of task being performed in order to call up a stored optimal viewing arrangement and system parameter configuration.

  11. A 3D photographic capsule endoscope system with full field of view

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng

    2013-09-01

    The current capsule endoscope uses one camera to capture the surface image of the intestine. It can observe an abnormal point but cannot provide exact information about it. Using two cameras can generate 3D images, but the visual plane changes as the capsule endoscope rotates, so the two cameras cannot capture the image information completely. To solve this problem, this research proposes a new kind of capsule endoscope for capturing 3D images: 'a 3D photographic capsule endoscope system'. The system uses three cameras to capture images in real time. The advantage is an increase of the viewing range up to 2.99 times that of the two-camera system. Combined with a 3D monitor, the system provides exact information about symptom points, helping doctors diagnose disease.

  12. An electronic pan/tilt/zoom camera system

    NASA Technical Reports Server (NTRS)

    Zimmermann, Steve; Martin, H. Lee

    1991-01-01

    A camera system for omnidirectional image viewing applications that provides pan, tilt, zoom, and rotational orientation within a hemispherical field of view (FOV) using no moving parts was developed. The imaging device is based on the principle that the image from a fisheye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. An incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical FOV without the need for any mechanical mechanisms. A programmable transformation processor provides flexible control over viewing situations. Multiple images, each with different image magnifications and pan, tilt, and rotation parameters, can be obtained from a single camera. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment.
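
    The record does not give the correction mathematics; assuming an equidistant fisheye model (image radius r = f·theta), the electronic pan/tilt can be sketched as an inverse mapping from each output pixel back to a fisheye pixel. A rough NumPy illustration with nearest-neighbour sampling; the projection model and every parameter here are assumptions, not the device's actual transform:

      import numpy as np

      def dewarp(fisheye, pan, tilt, out_fov, out_size, f):
          # Render a perspective view (pan/tilt in radians, out_fov = horizontal
          # field of view) from an equidistant fisheye image (r = f * theta)
          # whose optical axis is the +z axis; nearest-neighbour sampling.
          h, w = out_size
          fx = (w / 2) / np.tan(out_fov / 2)            # pinhole focal length, px
          u, v = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
          rays = np.stack([u, v, np.full_like(u, fx)], axis=-1)
          rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
          cp, sp, ct, st = np.cos(pan), np.sin(pan), np.cos(tilt), np.sin(tilt)
          Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pan rotation
          Rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])   # tilt rotation
          d = rays @ (Ry @ Rx).T                        # rotate viewing rays
          theta = np.arccos(np.clip(d[..., 2], -1, 1))  # angle off optical axis
          phi = np.arctan2(d[..., 1], d[..., 0])
          r = f * theta                                 # equidistant fisheye model
          cy, cx = fisheye.shape[0] // 2, fisheye.shape[1] // 2
          xs = np.clip((cx + r * np.cos(phi)).astype(int), 0, fisheye.shape[1] - 1)
          ys = np.clip((cy + r * np.sin(phi)).astype(int), 0, fisheye.shape[0] - 1)
          return fisheye[ys, xs]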

  13. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    NASA Astrophysics Data System (ADS)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as that in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  14. Accuracy of smartphone apps for heart rate measurement.

    PubMed

    Coppetti, Thomas; Brauchlin, Andreas; Müggler, Simon; Attinger-Toller, Adrian; Templin, Christian; Schönrath, Felix; Hellermann, Jens; Lüscher, Thomas F; Biaggi, Patric; Wyss, Christophe A

    2017-08-01

    Background: Smartphone manufacturers offer mobile health monitoring technology to their customers, including apps using the built-in camera for heart rate assessment. This study aimed to test the diagnostic accuracy of such heart rate measuring apps in clinical practice. Methods: The feasibility and accuracy of measuring heart rate was tested on four commercially available apps using both iPhone 4 and iPhone 5. 'Instant Heart Rate' (IHR) and 'Heart Fitness' (HF) work with contact photoplethysmography (contact of fingertip to built-in camera), while 'Whats My Heart Rate' (WMH) and 'Cardiio Version' (CAR) work with non-contact photoplethysmography. The measurements were compared to electrocardiogram- and pulse oximetry-derived heart rate. Results: Heart rate measurement using app-based photoplethysmography was performed on 108 randomly selected patients. The electrocardiogram-derived heart rate correlated well with pulse oximetry (r = 0.92), IHR (r = 0.83) and HF (r = 0.96), but somewhat less with WMH (r = 0.62) and CAR (r = 0.60). The accuracy of app-measured heart rate as compared to electrocardiogram, reported as mean absolute error (in bpm ± standard error), was 2 ± 0.35 (pulse oximetry), 4.5 ± 1.1 (IHR), 2 ± 0.5 (HF), 7.1 ± 1.4 (WMH) and 8.1 ± 1.4 (CAR). Conclusions: We found substantial performance differences between the four studied heart rate measuring apps. The two contact photoplethysmography-based apps had higher feasibility and better accuracy for heart rate measurement than the two non-contact photoplethysmography-based apps.
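
    The reported statistics are straightforward to reproduce in principle: a Pearson correlation against the reference and a mean absolute error with its standard error. A small Python sketch with invented readings (not the study's data):

      import numpy as np

      # toy paired heart-rate readings in bpm: ECG reference vs. one app
      ecg = np.array([62, 75, 88, 96, 110, 71, 83], dtype=float)
      app = np.array([64, 73, 90, 99, 104, 70, 85], dtype=float)

      r = np.corrcoef(ecg, app)[0, 1]              # Pearson correlation
      err = np.abs(app - ecg)
      mae = err.mean()                             # mean absolute error, bpm
      sem = err.std(ddof=1) / np.sqrt(err.size)    # standard error of that mean
      print(f"r = {r:.2f}, MAE = {mae:.1f} +/- {sem:.1f} bpm")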

  15. High-Resolution Mars Camera Test Image of Moon Infrared

    NASA Image and Video Library

    2005-09-13

    This crescent view of Earth's Moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The image was taken by the High Resolution Imaging Science Experiment camera on Sept. 8, 2005.

  16. Computer vision for driver assistance systems

    NASA Astrophysics Data System (ADS)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

    Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. Especially in the field of driver assistance systems, scientific progress has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach consists of sequential and parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking, and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach comes from the integrative coupling of different algorithms providing partly redundant information.

  17. Infrared and visible cooperative vehicle identification markings

    NASA Astrophysics Data System (ADS)

    O'Keefe, Eoin S.; Raven, Peter N.

    2006-05-01

    Airborne surveillance helicopters and aeroplanes used by security and defence forces around the world increasingly rely on their visible band and thermal infrared cameras to prosecute operations such as the co-ordination of police vehicles during the apprehension of a stolen car, or direction of all emergency services at a serious rail crash. To perform their function effectively, it is necessary for the airborne officers to unambiguously identify police and the other emergency service vehicles. In the visible band, identification is achieved by placing high contrast symbols and characters on the vehicle roof. However, at the wavelengths at which thermal imagers operate, the dark and light coloured materials have similar low reflectivity and the visible markings cannot be discerned. Hence there is a requirement for a method of passively and unobtrusively marking vehicles concurrently in the visible and thermal infrared, over a large range of viewing angles. In this paper we discuss the design, detailed angle-dependent spectroscopic characterisation and operation of novel visible and infrared vehicle marking materials, and present airborne IR and visible imagery of materials in use.

  18. Study on road sign recognition in LabVIEW

    NASA Astrophysics Data System (ADS)

    Panoiu, M.; Rat, C. L.; Panoiu, C.

    2016-02-01

    Road and traffic sign identification is a field of study that can be used to aid the development of in-car advisory systems. It uses computer vision and artificial intelligence to extract road signs from outdoor images acquired by a camera in uncontrolled lighting conditions, where they may be occluded by other objects or may suffer from problems such as color fading, disorientation, variations in shape and size, etc. An automatic means of identifying traffic signs under these conditions can make a significant contribution to the development of Intelligent Transport Systems (ITS) that continuously monitor the driver, the vehicle, and the road. Road and traffic signs are characterized by a number of features which make them recognizable in the environment. Road signs are located in standard positions and have standard shapes, standard colors, and known pictograms. These characteristics make them suitable for image identification. Traffic sign identification covers two problems: traffic sign detection and traffic sign recognition. Traffic sign detection is meant for the accurate localization of traffic signs in the image space, while traffic sign recognition handles the labeling of such detections into specific traffic sign types or subcategories [1].
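
    The paper's implementation is in LabVIEW and is not reproduced here; as a sketch of the same two-stage idea (detection by colour and shape, then recognition of each candidate), an OpenCV/Python illustration for red-rimmed signs, with all thresholds hypothetical:

      import cv2
      import numpy as np

      def detect_red_signs(bgr):
          # Stage 1 (detection): localize red-rimmed candidates by HSV colour
          # thresholding plus size/aspect filtering; stage 2 (recognition)
          # would label each candidate, e.g. with a trained classifier.
          hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
          m1 = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))     # red, low hues
          m2 = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))  # red wraps at 180
          mask = cv2.morphologyEx(m1 | m2, cv2.MORPH_OPEN,
                                  np.ones((5, 5), np.uint8))
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 API
          boxes = []
          for c in contours:
              x, y, w, h = cv2.boundingRect(c)
              if w * h > 400 and 0.7 < w / h < 1.4:   # plausible sign size/aspect
                  boxes.append((x, y, w, h))
          return boxes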

  19. High-immersion three-dimensional display of the numerical computer model

    NASA Astrophysics Data System (ADS)

    Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu

    2013-08-01

    High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as designing buildings and houses, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, military areas, and so on. However, most technologies provide the 3D display in front of screens that are parallel to the walls, and the sense of immersion is decreased. To get a correct multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the public focus plane, and the cameras' optical axes should be offset toward the center of the public focus plane in both the vertical and horizontal directions. It is very common to use a virtual camera, an ideal pinhole camera, to display a 3D model in a computer system. We can use virtual cameras to simulate the shooting method of multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the position of the viewer's eyes in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras. The near-clip-plane setting is the main point in the first method, while the rotation angle of the virtual cameras is the main point in the second method. In order to validate the results, we use D3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for horizontal viewing, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with real objects in the real world.
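
    The 'offset perspective projection' described here corresponds to an asymmetric (off-axis) viewing frustum whose axis is shifted toward the centre of the shared focus plane. A sketch of the standard OpenGL-style off-axis matrix, with illustrative viewer and screen dimensions that are not from the paper:

      import numpy as np

      def offset_frustum(left, right, bottom, top, near, far):
          # OpenGL-style asymmetric (off-axis) projection matrix: the frustum
          # edges need not be symmetric about the camera axis.
          return np.array([
              [2 * near / (right - left), 0, (right + left) / (right - left), 0],
              [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
              [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
              [0, 0, -1, 0]])

      # viewer's eye 0.3 m right of the centre of a 1.0 x 0.6 m screen, 1.5 m
      # away; project the screen edges onto the near plane to get the frustum
      eye_x, dist, half_w, half_h, near = 0.3, 1.5, 0.5, 0.3, 0.1
      s = near / dist
      M = offset_frustum((-half_w - eye_x) * s, (half_w - eye_x) * s,
                         -half_h * s, half_h * s, near, 100.0)
      print(M.round(3))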

  20. Optimising Camera Traps for Monitoring Small Mammals

    PubMed Central

    Glen, Alistair S.; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera’s field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera’s field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats ( Mustela erminea ), feral cats (Felis catus) and hedgehogs ( Erinaceus europaeus ). Trigger speeds of 0.2–2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera’s field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps. PMID:23840790

  1. Multiplex coherent anti-Stokes Raman scattering microspectroscopy for monitoring molecular structural change in biological samples

    NASA Astrophysics Data System (ADS)

    Ohta, Takayuki; Hashizume, Hiroshi; Takeda, Keigo; Ishikawa, Kenji; Ito, Masafumi; Hori, Masaru

    2014-10-01

    Biological applications employing non-equilibrium plasma processing have attracted much attention. It is essential to monitor changes in the intracellular structure of a cell during plasma exposure. In this study, we analyzed the molecular structure of biological samples using multiplex coherent anti-Stokes Raman scattering (CARS) microspectroscopy. Two picosecond pulse lasers, one at the fundamental (1064 nm) and one providing a supercontinuum (460-2200 nm), were employed as the pump and Stokes beams of the multiplex CARS microspectroscopy, respectively. The pump and Stokes laser beams were collinearly overlapped and tightly focused into a sample using an objective lens of high numerical aperture. The CARS signal was collected by another microscope objective lens placed facing the first one. After passing through a short-pass filter, the signal was dispersed by a polychromator and detected by a charge-coupled device camera. For the measurements, the sample was sandwiched between a coverslip and a glass-bottom dish and placed on a piezo stage. The CARS signals of the quinhydrone crystal at 1655, 1584, 1237 and 1161 cm-1 were assigned to the C-C stretching, C=O stretching, O-H and C-O stretching vibrational modes, respectively.
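
    For orientation: in CARS the detected anti-Stokes line for a given Raman shift appears at the pump wavenumber plus the shift, while the resonant Stokes component lies at the pump wavenumber minus the shift. A small arithmetic check in Python of where the quoted quinhydrone modes fall for the 1064 nm pump (illustration only):

      PUMP_NM = 1064.0                         # fundamental pump wavelength

      def cars_lines(shift_cm1, pump_nm=PUMP_NM):
          # anti-Stokes relation: nu_as = 2*nu_p - nu_s = nu_p + shift,
          # where the resonant Stokes component is nu_s = nu_p - shift
          nu_p = 1e7 / pump_nm                 # pump wavenumber, cm^-1
          return 1e7 / (nu_p - shift_cm1), 1e7 / (nu_p + shift_cm1)

      for shift in (1655, 1584, 1237, 1161):   # quinhydrone modes from the record
          stokes_nm, cars_nm = cars_lines(shift)
          print(f"{shift:4d} cm-1: Stokes {stokes_nm:6.1f} nm -> CARS {cars_nm:5.1f} nm")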

  2. PVC-based synthetic leather to provide more comfortable and sustainable vehicles

    NASA Astrophysics Data System (ADS)

    Maia, I.; Santos, J.; Abreu, MJ; Miranda, T.; Carneiro, N.; Soares, GMB

    2017-10-01

    Consumers are increasingly demanding that car interiors be comfortable, even in the more economical commercial segments. Thus, the development of materials with thermoregulation properties has assumed renewed interest for these particular applications. An attempt has been made to prepare a multilayer PVC-based synthetic leather with paraffinic PCMs to be applied to a car seat. The thermal behaviour of the material was analysed using an Alambeta apparatus, a thermal camera and a thermal manikin. The results obtained show that the synthetic leather with incorporated PCMs gives a cooler feeling and reacts more slowly to environmental temperature variations than the material without PCMs. Globally, the newly designed material allows greater thermal comfort for the car's occupants. In addition, the material quality was evaluated according to the customer's standard, BMW 9,210,275, Edition/Version 4, 2010-10-01, revealing that the material meets all the requirements under test except for flexibility.

  3. Improving CAR Navigation with a Vision-Based System

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on GPS and map-matching techniques, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS and in-vehicle sensors. We employ a single-photo resection process to derive the position and attitude of the camera, and thus those of the car. These image georeferencing results are combined with other sensory data under a sensor fusion framework for more accurate position estimation using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost yet highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
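
    The paper's extended Kalman filter is not given in detail; a simplified linear stand-in below shows the predict/update structure used to fuse position fixes of differing quality. All matrices and noise values are invented for illustration:

      import numpy as np

      dt = 0.1                                         # update interval, s
      F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],      # constant-velocity model
                    [0, 0, 1, 0], [0, 0, 0, 1]])
      H = np.array([[1., 0, 0, 0], [0, 1, 0, 0]])      # we observe position only
      Q = np.eye(4) * 0.05                             # process noise
      x, P = np.zeros(4), np.eye(4) * 10.0             # state [x, y, vx, vy]

      def step(x, P, z, r):
          # one predict/update cycle; r is the variance of this fix,
          # large for a coarse GPS fix, small for a tight vision fix
          x, P = F @ x, F @ P @ F.T + Q                # predict
          S = H @ P @ H.T + np.eye(2) * r              # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
          x = x + K @ (z - H @ x)                      # correct with residual
          P = (np.eye(4) - K @ H) @ P
          return x, P

      x, P = step(x, P, np.array([1.0, 2.0]), r=25.0)  # GPS fix, ~5 m std
      x, P = step(x, P, np.array([1.2, 2.1]), r=4.0)   # vision fix, ~2 m std
      print(x.round(2))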

  4. Improving Car Navigation with a Vision-Based System

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

    The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on GPS and map-matching techniques, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS and in-vehicle sensors. We employ a single-photo resection process to derive the position and attitude of the camera, and thus those of the car. These image georeferencing results are combined with other sensory data under a sensor fusion framework for more accurate position estimation using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost yet highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.

  5. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    NASA Astrophysics Data System (ADS)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

    Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology and computer hardware have boosted the field and led to several commercially available ready-to-use cameras. Beyond their popular option of a-posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single-camera imagery represents a very beneficial option for certain applications. The paper first presents some fundamentals on the design and history of plenoptic cameras and describes depth determination from plenoptic camera image data. It then presents an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors in the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much larger than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for real-time robotics applications like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.
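
    The roughly quadratic growth of depth error with range follows from triangulation geometry: sigma_Z is approximately Z^2 / (f * b) * sigma_d for focal length f, effective baseline b and disparity noise sigma_d. A quick numerical illustration in Python, with invented parameters rather than those of the tested camera:

      # sigma_Z ~ Z**2 / (f * b) * sigma_d for a triangulating sensor;
      # illustrative numbers only: effective baseline b = 0.05 m,
      # focal length f = 5000 px, disparity noise sigma_d = 0.2 px
      f_px, b_m, sigma_d = 5000.0, 0.05, 0.2

      for z in (10, 30, 50, 100):                      # range, metres
          sigma_z = z ** 2 / (f_px * b_m) * sigma_d
          print(f"Z = {z:3d} m -> sigma_Z ~ {sigma_z:5.1f} m ({100 * sigma_z / z:.1f}%)")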

  6. Wide-Field-of-View, High-Resolution, Stereoscopic Imager

    NASA Technical Reports Server (NTRS)

    Prechtl, Eric F.; Sedwick, Raymond J.

    2010-01-01

    A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color-processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head-tracking device, allowing the user to turn his or her head side to side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. The use of head-mounted displays is one likely implementation; the use of 3D projection technology is another option under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package for better mobility. Power-saving studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.

  7. Quantitative analysis of the improvement in omnidirectional maritime surveillance and tracking due to real-time image enhancement

    NASA Astrophysics Data System (ADS)

    de Villiers, Jason P.; Bachoo, Asheer K.; Nicolls, Fred C.; le Roux, Francois P. J.

    2011-05-01

    Tracking targets in a panoramic image is in many senses the inverse problem of tracking targets with a narrow field of view camera on a pan-tilt pedestal. In a narrow field of view camera tracking a moving target, the object is constant and the background is changing. A panoramic camera is able to model the entire scene, or background, and those areas it cannot model well are the potential targets, which typically subtend far fewer pixels in the panoramic view than in the narrow field of view. The outputs of an outward-staring array of calibrated machine vision cameras are stitched into a single omnidirectional panorama and used to observe False Bay near Simon's Town, South Africa. A ground truth data-set was created by geo-aligning the camera array and placing a differential global positioning system receiver on a small target boat, thus allowing its position in the array's field of view to be determined. Common tracking techniques including level-sets, Kalman filters and particle filters were implemented to run on the central processing unit of the tracking computer. Image enhancement techniques including multi-scale tone mapping, interpolated local histogram equalisation and several sharpening techniques were implemented on the graphics processing unit. An objective measurement of each tracking algorithm's robustness in the presence of sea-glint, low-contrast visibility and sea clutter (such as whitecaps) is performed on the raw recorded video data. These results are then compared to those obtained with the enhanced video data.
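
    The premise that areas the scene model cannot explain are the potential targets can be illustrated with any off-the-shelf background subtractor; an OpenCV/Python sketch, with the input file, model choice and thresholds all hypothetical (the paper's actual scene model and GPU enhancement chain are not specified here):

      import cv2

      cap = cv2.VideoCapture("panorama.mp4")           # hypothetical stitched feed
      bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          mask = bg.apply(frame)                       # foreground = poorly modelled
          mask = cv2.medianBlur(mask, 5)               # damp glint/clutter speckle
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          for c in contours:
              if cv2.contourArea(c) > 30:              # small, few-pixel targets
                  x, y, w, h = cv2.boundingRect(c)
                  cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 1)
      cap.release()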

  8. Nuclear medicine imaging system

    DOEpatents

    Bennett, Gerald W.; Brill, A. Bertrand; Bizais, Yves J.; Rowe, R. Wanda; Zubal, I. George

    1986-01-07

    A nuclear medicine imaging system having two large field of view scintillation cameras mounted on a rotatable gantry and being movable diametrically toward or away from each other is disclosed. In addition, each camera may be rotated about an axis perpendicular to the diameter of the gantry. The movement of the cameras allows the system to be used for a variety of studies, including positron annihilation, and conventional single photon emission, as well as static orthogonal dual multi-pinhole tomography. In orthogonal dual multi-pinhole tomography, each camera is fitted with a seven pinhole collimator to provide seven views from slightly different perspectives. By using two cameras at an angle to each other, improved sensitivity and depth resolution is achieved. The computer system and interface acquires and stores a broad range of information in list mode, including patient physiological data, energy data over the full range detected by the cameras, and the camera position. The list mode acquisition permits the study of attenuation as a result of Compton scatter, as well as studies involving the isolation and correlation of energy with a range of physiological conditions.

  9. Nuclear medicine imaging system

    DOEpatents

    Bennett, Gerald W.; Brill, A. Bertrand; Bizais, Yves J. C.; Rowe, R. Wanda; Zubal, I. George

    1986-01-01

    A nuclear medicine imaging system having two large field of view scintillation cameras mounted on a rotatable gantry and being movable diametrically toward or away from each other is disclosed. In addition, each camera may be rotated about an axis perpendicular to the diameter of the gantry. The movement of the cameras allows the system to be used for a variety of studies, including positron annihilation, and conventional single photon emission, as well as static orthogonal dual multi-pinhole tomography. In orthogonal dual multi-pinhole tomography, each camera is fitted with a seven pinhole collimator to provide seven views from slightly different perspectives. By using two cameras at an angle to each other, improved sensitivity and depth resolution is achieved. The computer system and interface acquires and stores a broad range of information in list mode, including patient physiological data, energy data over the full range detected by the cameras, and the camera position. The list mode acquisition permits the study of attenuation as a result of Compton scatter, as well as studies involving the isolation and correlation of energy with a range of physiological conditions.

  10. SEOS frame camera applications study

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A research and development satellite is discussed which will provide opportunities for observation of transient phenomena that fall within the fixed viewing circle of the spacecraft. Possible applications of frame cameras for SEOS are evaluated. The computed lens characteristics for each camera are listed.

  11. Rotatable prism for pan and tilt

    NASA Technical Reports Server (NTRS)

    Ball, W. B.

    1980-01-01

    Compact, inexpensive, motor-driven prisms change field of view of TV camera. Camera and prism rotate about lens axis to produce pan effect. Rotating prism around axis parallel to lens produces tilt. Size of drive unit and required clearance are little more than size of camera.

  12. Morning view, contextual view showing the road and gate to ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Morning view, contextual view showing the road and gate to be widened; view taken from the statue area with the camera facing north. - Beaufort National Cemetery, Wall, 1601 Boundary Street, Beaufort, Beaufort County, SC

  13. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romps, David; Oktem, Rusen

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized, stereo-calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory, covering the region from the northeast, northwest, and south. Images from both cameras of the same stereo setup can be paired together to obtain a 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined to generate a 3D mask from surrounding views. This handbook delivers all the stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
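
    Given calibrated camera positions and viewing directions, the triangulation step can be sketched as a midpoint intersection of two rays. A small NumPy illustration; the geometry is generic and the toy coordinates are not from the ARM deployment:

      import numpy as np

      def triangulate(p1, d1, p2, d2):
          # midpoint triangulation of two viewing rays: camera centres p1, p2
          # and unit direction vectors d1, d2 in a common world frame
          b = p2 - p1
          c = d1 @ d2                                  # cosine of ray angle
          denom = c ** 2 - 1.0                         # ~0 for parallel rays
          t = (c * (b @ d2) - b @ d1) / denom
          s = (b @ d2 - c * (b @ d1)) / denom
          return 0.5 * ((p1 + t * d1) + (p2 + s * d2)) # midpoint of closest points

      # toy check: two cameras 10 km apart viewing a cloud feature at (0, 5, 6) km
      p1, p2 = np.array([-5.0, 0.0, 0.0]), np.array([5.0, 0.0, 0.0])
      target = np.array([0.0, 5.0, 6.0])
      d1 = (target - p1) / np.linalg.norm(target - p1)
      d2 = (target - p2) / np.linalg.norm(target - p2)
      print(triangulate(p1, d1, p2, d2))               # ~ [0. 5. 6.]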

  14. The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover

    NASA Astrophysics Data System (ADS)

    Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.

    The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with a 65° field-of-view (1.1 mrad/pixel) and high-resolution (85 µrad/pixel) monoscopic "zoom" images with a 5° field-of-view. The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission as well as providing multispectral geological imaging, colour and stereo panoramic images, solar images for water vapour abundance and dust optical depth measurements, and to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. Additionally, the High Resolution Camera (HRC) can be used for high-resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.
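
    The quoted per-pixel resolutions are consistent with the stated fields of view if one assumes, hypothetically, detectors 1024 pixels wide:

      import math

      for name, fov_deg, px in (("WAC", 65.0, 1024), ("HRC", 5.0, 1024)):
          ifov = math.radians(fov_deg) / px            # instantaneous FOV per pixel
          print(f"{name}: {ifov * 1e3:.2f} mrad/px ({ifov * 1e6:.0f} urad/px)")
      # -> WAC: 1.11 mrad/px, HRC: 85 urad/px, matching the quoted figures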

  15. True 3-D View of 'Columbia Hills' from an Angle

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This mosaic of images from NASA's Mars Exploration Rover Spirit shows a panorama of the 'Columbia Hills' without any adjustment for rover tilt. When viewed through 3-D glasses, depth is much more dramatic and easier to see, compared with a tilt-adjusted version. This is because stereo views are created by producing two images, one corresponding to the view from the panoramic camera's left-eye camera, the other corresponding to the view from the panoramic camera's right-eye camera. The brain processes the visual input more accurately when the two images do not have any vertical offset. In this view, the vertical alignment is nearly perfect, but the horizon appears to curve because of the rover's tilt (because the rover was parked on a steep slope, it was tilted approximately 22 degrees to the west-northwest). Spirit took the images for this 360-degree panorama while en route to higher ground in the 'Columbia Hills.'

    The highest point visible in the hills is 'Husband Hill,' named for space shuttle Columbia Commander Rick Husband. To the right are the rover's tracks through the soil, where it stopped to perform maintenance on its right front wheel in July. In the distance, below the hills, is the floor of Gusev Crater, where Spirit landed Jan. 3, 2004, before traveling more than 3 kilometers (1.8 miles) to reach this point. This vista comprises 188 images taken by Spirit's panoramic camera from its 213th day, or sol, on Mars to its 223rd sol (Aug. 9 to 19, 2004). Team members at NASA's Jet Propulsion Laboratory and Cornell University spent several weeks processing images and producing geometric maps to stitch all the images together in this mosaic. The 360-degree view is presented in a cylindrical-perspective map projection with geometric seam correction.

  16. 1. Context view includes Building 59 (second from left). Camera ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. Context view includes Building 59 (second from left). Camera is pointed ENE along Farragut Avenue. Buildings on left side of street are, from left: Building 856, Building 59 and Building 107. On right side of street they are, from right: Building 38, Building 452 and Building 460. - Puget Sound Naval Shipyard, Pattern Shop, Farragut Avenue, Bremerton, Kitsap County, WA

  17. Power Burst Facility (PBF), PER620, contextual and oblique view. Camera ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Power Burst Facility (PBF), PER-620, contextual and oblique view. Camera facing northwest. South and east facade. The 1980 west-wing expansion is left of center bay. Concrete structure at right is PER-730. Date: March 2004. INEEL negative no. HD-41-2-3 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID

  18. 10. Southeast end; view to northwest, 65mm lens. Note evidence ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. Southeast end; view to northwest, 65mm lens. Note evidence of extreme building failure caused by adjacent railroad cut, which necessitated building demolition. (Vignetting due to extreme use of camera swing necessitated by lack of space to position camera otherwise.) - Benicia Arsenal, Powder Magazine No. 5, Junction of Interstate Highways 680 & 780, Benicia, Solano County, CA

  19. LOFT. Containment building (TAN650) detail. Camera facing east. Service building ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LOFT. Containment building (TAN-650) detail. Camera facing east. Service building corner is at left of view above personnel access. Round feature at left of dome is tank that will contain borated water. Metal stack at right of view. Date: 1973. INEEL negative no. 73-1085 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  20. Optomechanical Design of Ten Modular Cameras for the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Ford, Virginia G.; Karlmann, Paul; Hagerott, Ed; Scherr, Larry

    2003-01-01

    This viewgraph presentation reviews the design and fabrication of the modular cameras for the Mars Exploration Rovers. In the 2003 mission there were to be 2 landers and 2 rovers, with 10 cameras on each rover. Views of the camera design, the lens design, the lens interface with the detector assembly, the detector assembly, and the electronics assembly are shown.

  1. Contributions of Head-Mounted Cameras to Studying the Visual Environments of Infants and Young Children

    ERIC Educational Resources Information Center

    Smith, Linda B.; Yu, Chen; Yoshida, Hanako; Fausey, Caitlin M.

    2015-01-01

    Head-mounted video cameras (with and without an eye camera to track gaze direction) are being increasingly used to study infants' and young children's visual environments and provide new and often unexpected insights about the visual world from a child's point of view. The challenge in using head cameras is principally conceptual and concerns the…

  2. The Camera Is Not a Methodology: Towards a Framework for Understanding Young Children's Use of Video Cameras

    ERIC Educational Resources Information Center

    Bird, Jo; Colliver, Yeshe; Edwards, Susan

    2014-01-01

    Participatory research methods argue that young children should be enabled to contribute their perspectives on research seeking to understand their worldviews. Visual research methods, including the use of still and video cameras with young children, have been viewed as particularly suited to this aim because cameras have been considered easy and…

  3. Integrated multi sensors and camera video sequence application for performance monitoring in archery

    NASA Astrophysics Data System (ADS)

    Taha, Zahari; Arif Mat-Jizat, Jessnor; Amirul Abdullah, Muhammad; Muazu Musa, Rabiu; Razali Abdullah, Mohamad; Fauzi Ibrahim, Mohamad; Hanafiah Shaharudin, Mohd Ali

    2018-03-01

    This paper explains the development of comprehensive archery performance monitoring software consisting of three camera views and five body sensors. The five body sensors evaluate biomechanically related variables: flexor and extensor muscle activity, heart rate, postural sway and bow movement during archery performance. The three camera views and the five body sensors are integrated into a single computer application, which enables the user to view all the data in a single user interface. The five body sensors' data are displayed in numerical and graphical form in real time. The information transmitted by the body sensors is processed by an embedded algorithm that automatically summarizes the athlete's biomechanical performance and displays it in the application interface. This performance is later compared to the pre-computed psycho-fitness performance derived from data prefilled into the application. All the data (camera views, body sensor readings, performance computations) are recorded for further analysis by a sports scientist. Our application serves as a powerful tool for assisting the coach and athletes to observe and identify any wrong technique employed during training, which gives room for correction and re-evaluation to improve overall performance in the sport of archery.

  4. Driver monitoring system for automotive safety

    NASA Astrophysics Data System (ADS)

    Lörincz, A. E.; Risteiu, M. N.; Ionica, A.; Leba, M.

    2018-01-01

    People's lifestyles today are very active from all points of view: a person travels great distances every day, by car or on foot, and tiredness and stress affect everyone. These can cause major problems when driving, over both short and long distances. A system developed to prevent the dangers we are prone to in these situations is therefore very useful, one that can be implemented both in the production of current cars and retrofitted to cars not equipped with it.

  5. Whose point-of-view is it anyway?

    NASA Astrophysics Data System (ADS)

    Garvey, Gregory P.

    2011-03-01

    Shared virtual worlds such as Second Life privilege a single point-of-view, namely that of the user. When logged into Second Life, a user sees the virtual world from a default viewpoint, which is from slightly above and behind the user's avatar (the user's alter ego 'in-world'). This point-of-view is as if the user were viewing his or her avatar using a camera floating a few feet behind it. In fact, it is possible to set the view as if you were seeing the world through the eyes of your avatar, or even to move the camera completely independently of your avatar. A change in point-of-view means more than just a different camera position. The practice of using multiple avatars requires a transformation of identity and personality. When a user 'enacts' the identity of a particular avatar, their 'real' personality is masked by the assumed personality. The technology of virtual worlds permits both a change of point-of-view and a change in identity. Does this cause any psychological distress? Or is the ability to be someone else and see a world (a game, a virtual world) through a different set of eyes somehow liberating and even beneficial?

  6. Operator vision aids for space teleoperation assembly and servicing

    NASA Technical Reports Server (NTRS)

    Brooks, Thurston L.; Ince, Ilhan; Lee, Greg

    1992-01-01

    This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to definitively specify appropriate requirements. This paper presents these visual aids in three general categories of camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions concepts are discussed for: (1) automatic end effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer assisted camera/lighting placement; and (5) voice control. In the technology area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation and location are discussed.

  7. Reflex-free digital fundus photography using a simple and portable camera adaptor system. A viable alternative.

    PubMed

    Pirie, Chris G; Pizzirani, Stefano

    2011-12-01

    To describe a digital single-lens reflex (dSLR) camera adaptor for posterior segment photography. A total of 30 normal canine and feline animals were imaged using a dSLR adaptor which mounts between a dSLR camera body and lens. Posterior segment viewing and imaging were performed with the aid of an indirect lens ranging from 28-90D. Coaxial illumination for viewing was provided by a single white light-emitting diode (LED) within the adaptor, while illumination during exposure was provided by the pop-up flash or an accessory flash. Corneal and/or lens reflections were reduced using a pair of linear polarizers with their azimuths perpendicular to one another. High-quality, high-resolution, reflection-free digital images of the retina were obtained. Subjective image evaluation demonstrated the same amount of detail as a conventional fundus camera. A wide range of magnifications [1.2-4X] and/or fields of view [31-95 degrees, horizontal] was obtained by altering the indirect lens utilized. The described adaptor may provide an alternative to existing fundus camera systems. Quality images were obtained and the adaptor proved to be versatile, portable and low cost.

  8. Use of wildlife webcams - Literature review and annotated bibliography

    USGS Publications Warehouse

    Ratz, Joan M.; Conk, Shannon J.

    2010-01-01

    The U.S. Fish and Wildlife Service National Conservation Training Center requested a literature review product that would serve as a resource for natural resource professionals interested in using webcams to connect people with nature. The literature review focused on the effects on the public of viewing wildlife through webcams and on information regarding the installation and use of webcams. We searched the peer-reviewed, published literature for three topics: wildlife cameras, virtual tourism, and technological nature. Very few publications directly addressed the effect of viewing wildlife webcams. The review of information on installation and use of cameras yielded information about many aspects of the use of remote photography, but not much specifically regarding webcams. Aspects of wildlife camera use covered in the literature review include: camera options, image retrieval, system maintenance and monitoring, time to assemble, power source, light source, camera mount, frequency of image recording, consequences for animals, and equipment security. Webcam technology is relatively new, and more publications regarding its use are needed. Future research should specifically study the effect that viewing wildlife through webcams has on viewers' conservation attitudes, behaviors, and sense of connectedness to nature.

  9. View towards Foundry and Pattern Storage Room, blocked by 3 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View towards Foundry and Pattern Storage Room, blocked by 3 bay hopper coal cars in foreground - East Broad Top Railroad & Coal Company, State Route 994, West of U.S. Route 522, Rockhill Furnace, Huntingdon County, PA

  10. VIEW LOOKING NORTHEAST SHOWING TIPPLE FOR LOADING COKED COAL INTO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW LOOKING NORTHEAST SHOWING TIPPLE FOR LOADING COKED COAL INTO RAILROAD CARS (FRONT), COAL STORAGE BIN AND TIPPLE FOR COAL TO BE CHARGED IN FURNACES (BACK) - Alverton Coke Works, State Route 981, Alverton, Westmoreland County, PA

  11. 1. GENERAL VIEW OF COKE WORKS LOOKING WEST, SHOWING OVENS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. GENERAL VIEW OF COKE WORKS LOOKING WEST, SHOWING OVENS IN FOREGROUND, LARRY CAR TIPPLE TO THE RIGHT, AND COAL TIPPLE IN CENTERGROUND - Lucernemines Coke Works, 0.2 mile East of Lucerne, Lucerne Mines, Indiana County, PA

  12. 90. VIEW NORTHEAST OF BUILDING 98 OIL HOUSE AND STORAGE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    90. VIEW NORTHEAST OF BUILDING 98 OIL HOUSE AND STORAGE TANK; OIL FOR POWER GENERATION WAS UNLOADED FROM TANK CARS AND STORED AT THIS FACILITY - Scovill Brass Works, 59 Mill Street, Waterbury, New Haven County, CT

  13. North view general left to right; Mechanical Building, roof ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    North view - general left to right; Mechanical Building, roof of Station Building, covered ramp, and Street Car Waiting House - North Philadelphia Station, 2900 North Broad Street, on northwest corner of Broad Street & Glenwood Avenue, Philadelphia, Philadelphia County, PA

  14. North view; Station Building south (front) elevation, portions of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    North view; Station Building - south (front) elevation, portions of covered ramp and Street Car Waiting House at near left - North Philadelphia Station, 2900 North Broad Street, on northwest corner of Broad Street & Glenwood Avenue, Philadelphia, Philadelphia County, PA

  15. Morning view, brick post detail; view also shows dimensional wall-construction ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Morning view, brick post detail; view also shows dimensional wall-construction detail. North wall, with the camera facing northwest. - Beaufort National Cemetery, Wall, 1601 Boundary Street, Beaufort, Beaufort County, SC

  16. STS-37 Breakfast / Ingress / Launch & ISO Camera Views

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The primary objective of the STS-37 mission was to deploy the Gamma Ray Observatory. The mission was launched at 9:22:44 am on April 5, 1991, onboard the space shuttle Atlantis. The mission was led by Commander Steven Nagel; the crew included Pilot Kenneth Cameron and Mission Specialists Jerry Ross, Jay Apt, and Linda Godwin. This videotape shows the crew having breakfast on launch day, with the narrator introducing them. It then shows the crew's final preparations and entry into the shuttle, while the narrator gives information about each of the crew members. The countdown and launch are shown, including the shuttle's separation from the solid rocket boosters. The launch is then reshown from 17 different camera views, some of them in black and white.

  17. Evaluation of camera-based systems to reduce transit bus side collisions : phase II.

    DOT National Transportation Integrated Search

    2012-12-01

    The sideview camera system has been shown to eliminate blind zones by providing a view to the driver in real time. In order to provide the best integration of these systems, an integrated camera-mirror system (hybrid system) was developed and tes...

  18. Detection of Humans and Light Vehicles Using Acoustic-to-Seismic Coupling

    DTIC Science & Technology

    2009-08-31

    …microphones, video cameras (regular and infrared), magnetic sensors, and active Doppler radar and sonar systems. These sensors could be located at… …sonar systems due to dramatic absorption/reflection of electromagnetic/ultrasonic waves [8,9]. …engine was turned off, and the car continued moving. This eliminated the engine sound. A PCB microphone, 377B41, with preamplifier, 426A30, and with…

  19. System of technical vision for autonomous unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Bondarchuk, A. S.

    2018-05-01

    This paper is devoted to the implementation of an image recognition algorithm in the LabVIEW software. The created virtual instrument is designed to detect objects in frames from a camera mounted on a UAV. The trained classifier is invariant to changes in rotation, as well as to small changes in the camera's viewing angle. Finding objects in the image using particle analysis allows regions of different sizes to be classified. This method allows the technical vision system to determine more accurately the location of the objects of interest and their movement relative to the camera.
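
    The paper uses LabVIEW's particle analysis; the equivalent idea in OpenCV/Python is thresholding followed by connected-component statistics, as sketched below with the threshold strategy and minimum area as assumptions:

      import cv2

      def find_particles(gray, min_area=50):
          # threshold (Otsu) then measure connected regions, so that regions
          # of different sizes can be classified (the particle-analysis idea)
          _, binary = cv2.threshold(gray, 0, 255,
                                    cv2.THRESH_BINARY | cv2.THRESH_OTSU)
          n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
          objects = []
          for i in range(1, n):                        # label 0 is the background
              area = int(stats[i, cv2.CC_STAT_AREA])
              if area >= min_area:
                  objects.append({"centroid": tuple(centroids[i]), "area": area})
          return objects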

  20. ETR, TRA642, CAMERA IS BELOW, BUT NEAR THE CEILING OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    ETR, TRA-642, CAMERA IS BELOW, BUT NEAR THE CEILING OF THE GROUND FLOOR, AND LOOKS DOWN TOWARD THE CONSOLE FLOOR. CAMERA FACES WESTERLY. THE REACTOR PIT IS IN THE CENTER OF THE VIEW. BEYOND IT TO THE LEFT IS THE SOUTH SIDE OF THE WORKING CANAL. IN THE FOREGROUND ON THE RIGHT IS THE SHIELDING FOR THE PROCESS WATER TUNNEL AND PIPING. SPIRAL STAIRCASE AT LEFT OF VIEW. INL NEGATIVE NO. 56-2237. Jack L. Anderson, Photographer, 7/6/1956 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  1. Gate simulation of Compton Ar-Xe gamma-camera for radionuclide imaging in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Dubov, L. Yu; Belyaev, V. N.; Berdnikova, A. K.; Bolozdynia, A. I.; Akmalova, Yu A.; Shtotsky, Yu V.

    2017-01-01

    Computer simulations of a cylindrical Compton Ar-Xe gamma camera are described in the current report. The detection efficiency of a cylindrical Ar-Xe Compton camera with an internal diameter of 40 cm is estimated at 1-3%, which is 10-100 times higher than that of a collimated Anger camera. It is shown that the cylindrical Compton camera can image a Tc-99m radiotracer distribution with a uniform spatial resolution of 20 mm throughout the whole field of view.
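
    For context, a Compton camera dispenses with the collimator (hence the quoted efficiency gain) and instead constrains each event to a cone whose half-angle follows from Compton kinematics: cos(theta) = 1 - m_e c^2 (1/E' - 1/E), where E is the incident photon energy and E' the scattered photon energy. A small Python illustration for the 140.5 keV Tc-99m line:

      import math

      ME_C2 = 511.0                                    # electron rest energy, keV

      def cone_half_angle(e1, e_total=140.5):          # Tc-99m gamma line, keV
          # e1 = energy deposited in the first (scattering) detector; it must
          # lie below the Compton edge (~50 keV here) for a physical angle
          e_prime = e_total - e1                       # scattered photon energy
          cos_t = 1.0 - ME_C2 * (1.0 / e_prime - 1.0 / e_total)
          return math.degrees(math.acos(cos_t))

      for e1 in (10.0, 30.0, 45.0):
          print(f"E1 = {e1:4.0f} keV -> cone half-angle {cone_half_angle(e1):5.1f} deg")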

  2. 2. PERSPECTIVE VIEW OF WEST AND SOUTH SIDES, LOOKING EAST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. PERSPECTIVE VIEW OF WEST AND SOUTH SIDES, LOOKING EAST DOWN 12TH STREET (820 Twelfth Street in background across Ninth Avenue) - Twelfth Street Car Shops, Fire Engine House No. 7, 1128 Ninth Avenue, Altoona, Blair County, PA

  3. CONTEXT VIEW ALONG EXISTING PERIMETER TRACKS LOOKING OVER IRON ORE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    CONTEXT VIEW ALONG EXISTING PERIMETER TRACKS LOOKING OVER IRON ORE CARS TOWARDS CLEVELAND BULK TERMINAL BUILDINGS. LOOKING SOUTH. - Pennsylvania Railway Ore Dock, Lake Erie at Whiskey Island, approximately 1.5 miles west of Public Square, Cleveland, Cuyahoga County, OH

  4. PERSPECTIVE VIEW LOOKING SOUTHWEST AT THE EAST SIDE OF THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    PERSPECTIVE VIEW LOOKING SOUTHWEST AT THE EAST SIDE OF THE CYANAMIDE (L-N) OVEN BUILDING. PIECES OF A DISASSEMBLED RAIL CAR IN FOREGROUND. - United States Nitrate Plant No. 2, Reservation Road, Muscle Shoals, Muscle Shoals, Colbert County, AL

  5. 18. HISTORIC VIEW OF MAX VALIER, FOUNDING MEMBER OF THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. HISTORIC VIEW OF MAX VALIER, FOUNDING MEMBER OF THE VEREIN FUER RAUMSCHIFFAHRT (GERMAN SOCIETY FOR SPACE TRAVEL), DRIVES HIS ROCKET CAR IN 1931. - Marshall Space Flight Center, Redstone Rocket (Missile) Test Stand, Dodd Road, Huntsville, Madison County, AL

  6. Left Panorama of Spirit's Landing Site

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a version of the first 3-D stereo image from the rover's navigation camera, showing only the view from the left stereo camera onboard the Mars Exploration Rover Spirit. The left and right camera images are combined to produce a 3-D image.

  7. 1. Contextual view to west of the Southern Pacific Railroad ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. Contextual view to west of the Southern Pacific Railroad Carlin Shops buildings at Carlin, Nevada. Visible beneath the pedestrian bridge are the Engine Stores Building (HAER NV-26-A) left, Oil House (HAER NV-26-B) left center, and Roundhouse Machine Shop Extension (HAER NV-26-C) center background. The work train cars at right consist of the Boom Tender normally coupled to the wrecking crane and over which the crane's boom hung during travel, and a former Harriman standard-design Railway Post Office car (90mm lens). - Southern Pacific Railroad, Carlin Shops, Foot of Sixth Street, Carlin, Elko County, NV

  8. A stroboscopic technique for using CCD cameras in flow visualization systems for continuous viewing and stop action photography

    NASA Technical Reports Server (NTRS)

    Franke, John M.; Rhodes, David B.; Jones, Stephen B.; Dismond, Harriet R.

    1992-01-01

    A technique for synchronizing a pulse light source to charge coupled device cameras is presented. The technique permits the use of pulse light sources for continuous as well as stop action flow visualization. The technique has eliminated the need to provide separate lighting systems at facilities requiring continuous and stop action viewing or photography.

  9. LOFT. Interior view of entry to reactor building, TAN650. Camera ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LOFT. Interior view of entry to reactor building, TAN-650. Camera is inside entry (TAN-624) and facing north. At far end of domed chamber are penetrations in wall for electrical and other connections. Reactor and other equipment has been removed. Date: March 2004. INEEL negative no. HD-39-5-1 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  10. MTR BUILDING INTERIOR, TRA603. BASEMENT. CAMERA IN WEST CORRIDOR FACING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    MTR BUILDING INTERIOR, TRA-603. BASEMENT. CAMERA IN WEST CORRIDOR FACING SOUTH. FREIGHT ELEVATOR IS AT RIGHT OF VIEW. AT CENTER VIEW IS MTR VAULT NO. 1, USED TO STORE SPECIAL OR FISSIONABLE MATERIALS. INL NEGATIVE NO. HD46-6-3. Mike Crane, Photographer, 2/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  11. LOFT complex in 1975 awaits renewed mission. Aerial view. Camera ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LOFT complex in 1975 awaits renewed mission. Aerial view. Camera facing southwesterly. Left to right: stack, entry building (TAN-624), door shroud, duct shroud and filter hatches, dome (painted white), pre-amp building, equipment and piping building, shielded control room (TAN-630), airplane hangar (TAN-629). Date: 1975. INEEL negative no. 75-3690 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  12. 77 FR 38391 - Mercedes-Benz USA, LLC, and Daimler AG (DAG), Receipt of Petition for Decision of Inconsequential...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-27

    ... platform) passenger cars do not fully comply with paragraph S4.4 TPMS Malfunction of Federal Motor Vehicle... the issue is ("Wheel Sensor(s) Missing"), the vehicle depicts an aerial view of a car with the...). [1] Mercedes-Benz USA, LLC, and Daimler AG are motor vehicle manufacturers and importers. Mercedes...

  13. Real-time augmented reality overlay for an energy-efficient car study

    NASA Astrophysics Data System (ADS)

    Wozniak, Peter; Javahiraly, Nicolas; Curticapean, Dan

    2017-06-01

    Our university carries out various research projects. Among them, the Schluckspecht project is an interdisciplinary effort on different ultra-efficient car concepts for international contests. Besides the engineering work, one part of the project deals with real-time data visualization. In order to increase the efficiency of the vehicle, online monitoring of the runtime parameters is necessary. The driving parameters of the vehicle are transmitted to a processing station via a wireless network connection. We plan to use an augmented reality (AR) application to visualize different data on top of the view of the real car. By using a mobile Android or iOS device, a user can interactively view various real-time and statistical data. The car and its components are meant to be augmented with various additional information, and that information should appear at the correct position on the components. An engine, for example, could show the current rpm and consumption values; a battery could show the current charge level. The goal of this paper is to evaluate different possible approaches and their suitability, and to expand our application to other projects at our university.

  14. Sitting in the Pilot's Seat; Optimizing Human-Systems Interfaces for Unmanned Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Queen, Steven M.; Sanner, Kurt Gregory

    2011-01-01

    One of the pilot-machine interfaces (the forward viewing camera display) for an Unmanned Aerial Vehicle called the DROID (Dryden Remotely Operated Integrated Drone) will be analyzed for optimization. The goal is to create a visual display for the pilot that resembles an out-the-window view as closely as possible. There are currently no standard guidelines for designing pilot-machine interfaces for UAVs. Typically, UAV camera views have a narrow field, which limits the situational awareness (SA) of the pilot. Also, at this time, pilot-UAV interfaces often use displays that have a diagonal length of around 20". Using a small display may result in a distorted and disproportional view for UAV pilots. Making use of a larger display and a camera lens with a wider field of view may minimize the occurrences of pilot error associated with the inability to see "out the window" as in a manned airplane. It is predicted that the pilot will have a less distorted view of the DROID's surroundings, quicker response times and more stable vehicle control. If the experimental results validate this concept, other UAV pilot-machine interfaces will be improved with this design methodology.

  15. Adjustable-Viewing-Angle Endoscopic Tool for Skull Base and Brain Surgery

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; Liao, Anna; Manohara, Harish; Shahinian, Hrayr

    2008-01-01

    The term Multi-Angle and Rear Viewing Endoscopic tooL (MARVEL) denotes an auxiliary endoscope, now undergoing development, that a surgeon would use in conjunction with a conventional endoscope to obtain additional perspective. The role of the MARVEL in endoscopic brain surgery would be similar to the role of a mouth mirror in dentistry. Such a tool is potentially useful for in-situ planetary geology applications for the close-up imaging of unexposed rock surfaces in cracks or those not in the direct line of sight. A conventional endoscope provides mostly a frontal view, that is, a view along its longitudinal axis and, hence, along a straight line extending from the opening through which it is inserted. The MARVEL could be inserted through the same opening as the conventional endoscope, but could be adjusted to provide a view from almost any desired angle. The MARVEL camera image would be displayed on the same monitor as the conventional endoscopic image, as an inset within it. For example, while viewing a tumor from the front in the conventional endoscopic image, the surgeon could simultaneously view the tumor from the side or the rear in the MARVEL image, and could thereby gain additional visual cues to aid in the precise three-dimensional positioning of surgical tools to excise the tumor. Indeed, a side or rear view through the MARVEL could be essential in a case in which the object of surgical interest is not visible from the front. The conceptual design of the MARVEL exploits the surgeon's familiarity with endoscopic surgical tools. The MARVEL would include a miniature electronic camera and miniature radio transmitter mounted on the tip of a surgical tool derived from an endo-scissor (see figure). The inclusion of the radio transmitter would eliminate the need for wires, which could interfere with the manipulation of this and other surgical tools. The handgrip of the tool would be connected to a linkage similar to that of an endo-scissor, but the linkage would be configured to enable adjustment of the camera angle instead of actuation of a scissor blade. It is envisioned that the thicknesses of the tool shaft and the camera would be less than 4 mm, so that the camera-tipped tool could be swiftly inserted and withdrawn through a dime-size opening. Electronic cameras having dimensions of the order of millimeters are already commercially available, but their designs are not optimized for use in endoscopic brain surgery. The variety of potential endoscopic, thoracoscopic, and laparoscopic applications can be expected to increase as further development of electronic cameras yields further miniaturization and improvements in imaging performance.

  16. Video Mosaicking for Inspection of Gas Pipelines

    NASA Technical Reports Server (NTRS)

    Magruder, Darby; Chien, Chiun-Hong

    2005-01-01

    A vision system that includes a specially designed video camera and an image-data-processing computer is under development as a prototype of robotic systems for visual inspection of the interior surfaces of pipes and especially of gas pipelines. The system is capable of providing both forward views and mosaicked radial views that can be displayed in real time or after inspection. To avoid the complexities associated with moving parts and to provide simultaneous forward and radial views, the video camera is equipped with a wide-angle (>165°) fish-eye lens aimed along the axis of a pipe to be inspected. Nine white-light-emitting diodes (LEDs) placed just outside the field of view of the lens (see Figure 1) provide ample diffuse illumination for a high-contrast image of the interior pipe wall. The video camera contains a 2/3-in. (1.7-cm) charge-coupled-device (CCD) photodetector array and functions according to the National Television Standards Committee (NTSC) standard. The video output of the camera is sent to an off-the-shelf video capture board (frame grabber) by use of a peripheral component interconnect (PCI) interface in the computer, which is of the 400-MHz, Pentium II (or equivalent) class. Prior video-mosaicking techniques are applicable to narrow-field-of-view (low-distortion) images of evenly illuminated, relatively flat surfaces viewed along approximately perpendicular lines by cameras that do not rotate and that move approximately parallel to the viewed surfaces. One such technique for real-time creation of mosaic images of the ocean floor involves the use of visual correspondences based on area correlation, during both the acquisition of separate images of adjacent areas and the consolidation (equivalently, integration) of the separate images into a mosaic image, in order to ensure that there are no gaps in the mosaic image. The data-processing technique used for mosaicking in the present system also involves area correlation, but with several notable differences: Because the wide-angle lens introduces considerable distortion, the image data must be processed to effectively unwarp the images (see Figure 2). The computer executes special software that includes an unwarping algorithm that takes explicit account of the cylindrical pipe geometry. To reduce the processing time needed for unwarping, parameters of the geometric mapping between the circular view of a fish-eye lens and the pipe wall are determined in advance from calibration images and compiled into an electronic lookup table. The software incorporates the assumption that the optical axis of the camera is parallel (rather than perpendicular) to the direction of motion of the camera. The software also compensates for the decrease in illumination with distance from the ring of LEDs.
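
    Since the unwarping lookup table is the computational heart of the system, a toy version may help. The sketch below assumes an equidistant fish-eye model and a pipe of known radius; the article's calibration-derived mapping is not public, so every parameter and name here is illustrative.

    ```python
    # Minimal sketch of a precomputed unwarping lookup table for a camera
    # looking down the axis of a cylindrical pipe (illustrative geometry).
    import numpy as np
    import cv2

    def build_unwarp_maps(img_size, f_pix, pipe_radius, z_near, z_far,
                          out_w, out_h):
        """Map (angle around pipe, axial distance) to fish-eye pixel coords."""
        cx, cy = img_size[0] / 2.0, img_size[1] / 2.0
        theta = np.linspace(0, 2 * np.pi, out_w)      # around the pipe wall
        z = np.linspace(z_near, z_far, out_h)         # along the pipe axis
        TH, Z = np.meshgrid(theta, z)
        alpha = np.arctan2(pipe_radius, Z)            # off-axis angle of wall point
        r = f_pix * alpha                             # equidistant fish-eye model
        map_x = (cx + r * np.cos(TH)).astype(np.float32)
        map_y = (cy + r * np.sin(TH)).astype(np.float32)
        return map_x, map_y

    # The maps are computed once (the "electronic lookup table") and reused:
    #   unwarped = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
    ```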

  17. Development of infrared scene projectors for testing fire-fighter cameras

    NASA Astrophysics Data System (ADS)

    Neira, Jorge E.; Rice, Joseph P.; Amon, Francine K.

    2008-04-01

    We have developed two types of infrared scene projectors for hardware-in-the-loop testing of thermal imaging cameras such as those used by fire-fighters. In one, direct projection, images are projected directly into the camera. In the other, indirect projection, images are projected onto a diffuse screen, which is then viewed by the camera. Both projectors use a digital micromirror array as the spatial light modulator, in the form of a Micromirror Array Projection System (MAPS) engine having a resolution of 800 x 600, aluminum-coated mirrors on a 17-micrometer pitch, and a ZnSe protective window. Fire-fighter cameras are often based upon uncooled microbolometer arrays and typically have resolutions of 320 x 240 or lower. For direct projection, we use an argon-arc source, which provides spectral radiance equivalent to a 10,000 Kelvin blackbody over the 7 micrometer to 14 micrometer wavelength range, to illuminate the micromirror array. For indirect projection, an expanded 4-watt CO2 laser beam at a wavelength of 10.6 micrometers illuminates the micromirror array, and the scene formed by the first-order diffracted light from the array is projected onto a diffuse aluminum screen. In both projectors, a well-calibrated reference camera is used to provide non-uniformity correction and brightness calibration of the projected scenes, and the fire-fighter cameras alternately view the same scenes. In this paper, we compare the two methods for this application and report on our quantitative results. Indirect projection has the advantage of being able to more easily fill the wide field of view of the fire-fighter cameras, which is typically about 50 degrees. Direct projection more efficiently utilizes the available light, which will become important in emerging multispectral and hyperspectral applications.

  18. Morning view, contextual view showing unpaved corridor down the westernmost ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Morning view, contextual view showing unpaved corridor down the westernmost lane where the wall section (E) will be removed; camera facing north-northwest. - Beaufort National Cemetery, Wall, 1601 Boundary Street, Beaufort, Beaufort County, SC

  19. Context-based handover of persons in crowd and riot scenarios

    NASA Astrophysics Data System (ADS)

    Metzler, Jürgen

    2015-02-01

    In order to control riots in crowds, it is helpful to get ringleaders under control and pull them out of the crowd once one has become an offender. A great support in achieving these tasks is the capability of observing the crowd and ringleaders automatically using cameras. It also allows better conservation of evidence in riot control. A ringleader who has become an offender should be tracked across, and recognized by, several cameras, regardless of whether the cameras' fields of view overlap or not. We propose a context-based approach for the handover of persons between different camera fields of view. This approach can be applied to overlapping as well as non-overlapping fields of view, so that fast and accurate identification of individual persons in camera networks is feasible. Within the scope of this paper, the approach is applied to a handover of persons between single images without any temporal information. It is particularly developed for semiautomatic video editing and a handover of persons between cameras in order to improve conservation of evidence. The approach has been developed on a dataset collected during a Crowd and Riot Control (CRC) training of the German armed forces. It consists of three different levels of escalation: first, the crowd started with a peaceful demonstration; second, there were violent protests; and third, the riot escalated and offenders bumped into the chain of guards. One result of the work is a reliable context-based method for person re-identification between single images of different camera fields of view in crowd and riot scenarios. Furthermore, a qualitative assessment shows that the use of contextual information can additionally support this task. It can decrease the time needed for handover and the number of confusions, which supports the conservation of evidence in crowd and riot scenarios.

  20. 3D digital image correlation using single color camera pseudo-stereo system

    NASA Astrophysics Data System (ADS)

    Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang

    2017-10-01

    Three-dimensional digital image correlation (3D-DIC) has been widely used by industry to measure 3D contours and whole-field displacement/strain. In this paper, a novel single color camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip and without reducing the spatial resolution. In addition, as in a conventional 3D-DIC system, the center of the two views lies at the center of the CCD chip, which minimizes image distortion relative to the conventional pseudo-stereo system. The two overlapped views on the CCD are separated in the color domain, and the standard 3D-DIC algorithm can be used directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
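
    As a minimal sketch of the colour-domain separation (assuming, as the abstract implies, that the two overlapped views are carried by separate colour channels such as blue and red), the views can be recovered with a simple channel split before running a standard 3D-DIC algorithm; which channel corresponds to which view depends on the mirror layout.

    ```python
    # Hedged sketch: recover the two pseudo-stereo views from one colour frame.
    import cv2

    def split_pseudo_stereo(frame_bgr):
        """Return the two stereo views encoded in the blue and red channels."""
        blue, _green, red = cv2.split(frame_bgr)
        # Left/right assignment depends on the mirror arrangement; both views
        # keep the full sensor resolution, unlike a split-sensor setup.
        return blue, red
    ```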

  1. 31. DETAIL VIEW OF ELECTRIC WINCH AND CABLE SYSTEM, SOUTH ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    31. DETAIL VIEW OF ELECTRIC WINCH AND CABLE SYSTEM, SOUTH SIDE OF MILL HOUSE, USED TO WINCH RAIL CARS INTO LOADING POSITION, LOOKING NORTHEAST - Sperry Corn Elevator Complex, Weber Avenue (North side), West of Edison Street, Stockton, San Joaquin County, CA

  2. 1. GENERAL VIEW OF RAILROAD YARD LOOKING NORTH. Office and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. GENERAL VIEW OF RAILROAD YARD LOOKING NORTH. Office and Car and Wheel Shops to left, Engine House No. 1 to right. Ebensburg Processing Plant and Powerhouse (Colver Mine) in far left background. - Cambria & Indiana Railroad, Colver, Cambria County, PA

  3. A filter spectrometer concept for facsimile cameras

    NASA Technical Reports Server (NTRS)

    Jobson, D. J.; Kelly, W. L., IV; Wall, S. D.

    1974-01-01

    A concept which utilizes interference filters and photodetector arrays to integrate spectrometry with the basic imagery function of a facsimile camera is described and analyzed. The analysis considers spectral resolution, instantaneous field of view, spectral range, and signal-to-noise ratio. Specific performance predictions for the Martian environment, the Viking facsimile camera design parameters, and a signal-to-noise ratio for each spectral band equal to or greater than 256 indicate the feasibility of obtaining a spectral resolution of 0.01 micrometers with an instantaneous field of view of about 0.1 deg in the 0.425 micrometers to 1.025 micrometers range using silicon photodetectors. A spectral resolution of 0.05 micrometers with an instantaneous field of view of about 0.6 deg in the 1.0 to 2.7 micrometers range using lead sulfide photodetectors is also feasible.

  4. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    NASA Astrophysics Data System (ADS)

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

    Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or narrow overlapping FOVs in many applications, which poses a huge challenge for global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras' coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
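
    The optimization step can be pictured as a standard refinement: project the target feature points through a candidate inter-camera transform and minimize the reprojection residuals with Levenberg–Marquardt. The sketch below uses SciPy and OpenCV as stand-ins; the 6-parameter pose vector and all function names are assumptions for illustration, not the paper's code.

    ```python
    # Sketch of reprojection-error minimization over an inter-camera transform.
    import numpy as np
    import cv2
    from scipy.optimize import least_squares

    def reproj_residuals(pose6, pts3d_cam1, pts2d_cam2, K2):
        """pose6 = (rvec, tvec) mapping camera-1 coordinates into camera 2."""
        rvec, tvec = pose6[:3], pose6[3:]
        proj, _ = cv2.projectPoints(pts3d_cam1, rvec, tvec, K2, None)
        return (proj.reshape(-1, 2) - pts2d_cam2).ravel()

    def refine_extrinsics(pose0, pts3d_cam1, pts2d_cam2, K2):
        """Levenberg-Marquardt refinement of the camera-to-camera transform."""
        res = least_squares(reproj_residuals, pose0, method="lm",
                            args=(pts3d_cam1, pts2d_cam2, K2))
        return res.x   # optimized rotation/translation between the cameras
    ```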

  5. Heterogeneity of carotenoid content and composition in LH2 of the purple sulphur bacterium Allochromatium minutissimum grown under carotenoid-biosynthesis inhibition.

    PubMed

    Makhneva, Zoya; Bolshakov, Maksim; Moskalenko, Andrey

    2008-01-01

    The effects brought about by growing Allochromatium (Alc.) minutissimum in the presence of different concentrations of the carotenoid (Car) biosynthetic inhibitor diphenylamine (DPA) have been investigated. A decrease of Car content (from approximately 70% to <5%) in the membranes was accompanied by an increase in the percentage of (immature) Cars with reduced numbers of conjugated C=C bonds (from neurosporene to phytoene). Based on the obtained results and the analysis of literature data, the conclusion is reached that accumulation of phytoene during inhibition did not occur. Surprisingly, DPA inhibited phytoene synthase instead of phytoene desaturase, as generally assumed. The distribution of Cars in peripheral antenna (LH2) complexes and their effect on the stability of LH2 have been investigated using absorption spectroscopy and HPLC analysis. Heterogeneity of Car composition and content in the LH2 pool is revealed. The Car content in LH2 varied widely, from control levels to complete absence. According to the common view, the assembly of LH2 occurs only in the presence of Cars. Here, we show that LH2 can be assembled without any Cars. The presence of Cars, however, is important for the structural stability of LH2 complexes.

  6. Space telescope phase B definition study. Volume 2A: Science instruments, f24 field camera

    NASA Technical Reports Server (NTRS)

    Grosso, R. P.; Mccarthy, D. J.

    1976-01-01

    The analysis and design of the F/24 field camera for the space telescope are discussed. The camera was designed for application to the radial bay of the optical telescope assembly and has an on-axis field of view of 3 arc-minutes by 3 arc-minutes.

  7. Energy-efficient lighting system for television

    DOEpatents

    Cawthorne, Duane C.

    1987-07-21

    A light control system for a television camera comprises an artificial light control system that cooperates with an iris control system. The artificial light control system adjusts the power to lamps illuminating the camera viewing area, providing only the artificial illumination necessary to yield an adequate video signal when the camera iris is substantially open.

  8. Semi-autonomous wheelchair system using stereoscopic cameras.

    PubMed

    Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T

    2009-01-01

    This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras has the purpose of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment displayed the effectiveness of this assistive technology.
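
    For concreteness, a brute-force version of the Sum of Absolute Differences (SAD) correspondence search for one row of a rectified stereo pair might look like the sketch below; the window size and disparity range are illustrative assumptions, and practical implementations vectorize this heavily.

    ```python
    # Minimal SAD disparity search over one image row (rectified pair).
    import numpy as np

    def sad_disparity_row(left, right, y, window=5, max_disp=64):
        """Return per-pixel disparities for row y of grayscale images."""
        half = window // 2
        h, w = left.shape
        disps = np.zeros(w, dtype=np.int32)
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1,
                         x - half:x + half + 1].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int32)
                cost = np.abs(patch - cand).sum()     # SAD matching cost
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disps[x] = best_d     # larger disparity = object closer to camera
        return disps
    ```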

  9. Continuous monitoring of Hawaiian volcanoes with thermal cameras

    USGS Publications Warehouse

    Patrick, Matthew R.; Orr, Tim R.; Antolik, Loren; Lee, Robert Lopaka; Kamibayashi, Kevan P.

    2014-01-01

    Continuously operating thermal cameras are becoming more common around the world for volcano monitoring, and offer distinct advantages over conventional visual webcams for observing volcanic activity. Thermal cameras can sometimes “see” through volcanic fume that obscures views to visual webcams and the naked eye, and often provide a much clearer view of the extent of high temperature areas and activity levels. We describe a thermal camera network recently installed by the Hawaiian Volcano Observatory to monitor Kīlauea’s summit and east rift zone eruptions (at Halema‘uma‘u and Pu‘u ‘Ō‘ō craters, respectively) and to keep watch on Mauna Loa’s summit caldera. The cameras are long-wave, temperature-calibrated models protected in custom enclosures, and often positioned on crater rims close to active vents. Images are transmitted back to the observatory in real-time, and numerous Matlab scripts manage the data and provide automated analyses and alarms. The cameras have greatly improved HVO’s observations of surface eruptive activity, which includes highly dynamic lava lake activity at Halema‘uma‘u, major disruptions to Pu‘u ‘Ō‘ō crater and several fissure eruptions.

  10. Brief Report: Using a Point-of-View Camera to Measure Eye Gaze in Young Children with Autism Spectrum Disorder during Naturalistic Social Interactions--A Pilot Study

    ERIC Educational Resources Information Center

    Edmunds, Sarah R.; Rozga, Agata; Li, Yin; Karp, Elizabeth A.; Ibanez, Lisa V.; Rehg, James M.; Stone, Wendy L.

    2017-01-01

    Children with autism spectrum disorder (ASD) show reduced gaze to social partners. Eye contact during live interactions is often measured using stationary cameras that capture various views of the child, but determining a child's precise gaze target within another's face is nearly impossible. This study compared eye gaze coding derived from…

  11. ENGINEERING TEST REACTOR (ETR) BUILDING, TRA642. CONTEXTUAL VIEW, CAMERA FACING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    ENGINEERING TEST REACTOR (ETR) BUILDING, TRA-642. CONTEXTUAL VIEW, CAMERA FACING EAST. VERTICAL METAL SIDING. ROOF IS SLIGHTLY ELEVATED AT CENTER LINE FOR DRAINAGE. WEST SIDE OF ETR COMPRESSOR BUILDING, TRA-643, PROJECTS TOWARD LEFT AT FAR END OF ETR BUILDING. INL NEGATIVE NO. HD46-37-1. Mike Crane, Photographer, 4/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  12. IET. Aerial view of snaptran destructive experiment in 1964. Camera ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    IET. Aerial view of snaptran destructive experiment in 1964. Camera facing north. Test cell building (TAN-624) is positioned away from coupling station. Weather tower in right foreground. Divided duct just beyond coupling station. Air intake structure on south side of shielded control room. Experiment is on dolly at coupling station. Date: 1964. INEEL negative no. 64-1736 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  13. F-15A Remotely Piloted Research Vehicle (RPRV)/Spin Research Vehicle(SRV) launch and flight

    NASA Technical Reports Server (NTRS)

    1981-01-01

    This 33-second film clip begins with the release of the F-15 RPRV from the wing pylon of the NASA Dryden NB-52B carrier aircraft. It then shows a downward camera view just after release from the pylon and a forward camera view from the F-15 RPRV nose, followed by air-to-air footage of the F-15 vehicle executing spin maneuvers.

  14. Noise Reduction in Brainwaves by Using Both EEG Signals and Frontal Viewing Camera Images

    PubMed Central

    Bang, Jae Won; Choi, Jong-Suk; Park, Kang Ryoung

    2013-01-01

    Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been used in various applications, including human-computer interfaces, diagnosis of brain diseases, and measurement of cognitive status. However, EEG signals can be contaminated with noise caused by a user's head movements. Therefore, we propose a new method that combines an EEG acquisition device and a frontal viewing camera to isolate and exclude the sections of EEG data containing this noise. This method is novel in the following three ways. First, we compare the accuracies of detecting head movements based on the features of EEG signals in the frequency and time domains and on the motion features of images captured by the frontal viewing camera. Second, the features of EEG signals in the frequency domain and the motion features captured by the frontal viewing camera are selected as the optimal ones. Dimension reduction and feature selection are performed using linear discriminant analysis. Third, the combined features are used as inputs to a support vector machine (SVM), which improves the accuracy in detecting head movements. The experimental results show that the proposed method can detect head movements with an average error rate of approximately 3.22%, which is smaller than that of other methods. PMID:23669713
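
    A minimal sketch of the classification stage, mirroring the pipeline the abstract describes (linear discriminant analysis for dimension reduction of the combined EEG and camera-motion features, then an SVM). The feature matrices and all names are assumed inputs for illustration, not the authors' code.

    ```python
    # Hedged sketch: LDA dimension reduction + SVM head-movement detector.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def train_head_movement_detector(eeg_feats, cam_feats, labels):
        """labels: 1 where an EEG segment is contaminated by head movement."""
        X = np.hstack([eeg_feats, cam_feats])     # combined feature vectors
        # Binary problem, so LDA reduces to a single discriminant component.
        clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC())
        clf.fit(X, labels)
        return clf    # clf.predict(X_new) flags segments to exclude
    ```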

  15. A small field of view camera for hybrid gamma and optical imaging

    NASA Astrophysics Data System (ADS)

    Lees, J. E.; Bugby, S. L.; Bhatia, B. S.; Jambi, L. K.; Alqahtani, M. S.; McKnight, W. R.; Ng, A. H.; Perkins, A. C.

    2014-12-01

    The development of compact, low-profile gamma-ray detectors has allowed the production of small field of view, hand-held imaging devices for use at the patient bedside and in operating theatres. The combination of an optical and a gamma camera, in a co-aligned configuration, offers high spatial resolution multi-modal imaging, giving a superimposed scintigraphic and optical image. This innovative introduction of hybrid imaging offers new possibilities for assisting surgeons in localising the site of uptake in procedures such as sentinel node detection. Recent improvements to the camera system, along with results of phantom and clinical imaging, are reported.

  16. Emission computerized axial tomography from multiple gamma-camera views using frequency filtering.

    PubMed

    Pelletier, J L; Milan, C; Touzery, C; Coitoux, P; Gailliard, P; Budinger, T F

    1980-01-01

    Emission computerized axial tomography is achievable in any nuclear medicine department from multiple gamma camera views. Data are collected by rotating the patient in front of the camera. A simple fast algorithm is implemented, known as the convolution technique: first the projection data are Fourier transformed and then an original filter designed for optimizing resolution and noise suppression is applied; finally the inverse transform of the latter operation is back-projected. This program, which can also take into account the attenuation for single photon events, was executed with good results on phantoms and patients. We think that it can be easily implemented for specific diagnostic problems.
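
    The convolution technique described here is what is now usually called filtered back-projection. A compact sketch under simple assumptions (parallel-beam geometry, a generic ramp filter in place of the authors' custom resolution/noise-optimized filter, and no attenuation correction):

    ```python
    # Filtered back-projection from multiple gamma-camera views (sketch).
    import numpy as np

    def fbp_reconstruct(sinogram, angles_deg):
        """sinogram: (n_angles, n_bins) projections from the rotating patient."""
        n_bins = sinogram.shape[1]
        ramp = np.abs(np.fft.fftfreq(n_bins))   # generic ramp filter
        xs = np.arange(n_bins) - n_bins / 2.0
        X, Y = np.meshgrid(xs, xs)
        recon = np.zeros((n_bins, n_bins))
        for proj, ang in zip(sinogram, np.deg2rad(angles_deg)):
            # 1) Fourier transform the projection, 2) apply the filter,
            # 3) inverse transform, 4) back-project across the image.
            filtered = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))
            t = X * np.cos(ang) + Y * np.sin(ang) + n_bins / 2.0
            recon += np.interp(t.ravel(), np.arange(n_bins),
                               filtered).reshape(X.shape)
        return recon * np.pi / len(angles_deg)
    ```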

  17. Voyager spacecraft images of Jupiter and Saturn

    NASA Technical Reports Server (NTRS)

    Birnbaum, M. M.

    1982-01-01

    The Voyager imaging system is described, noting that it is made up of a narrow-angle and a wide-angle TV camera, each in turn consisting of optics, a filter wheel and shutter assembly, a vidicon tube, and an electronics subsystem. The narrow-angle camera has a focal length of 1500 mm; its field of view is 0.42 deg and its focal ratio is f/8.5. For the wide-angle camera, the focal length is 200 mm, the field of view is 3.2 deg, and the focal ratio is f/3.5. Images are exposed by each camera through one of eight filters in the filter wheel onto the photoconductive surface of a magnetically focused and deflected vidicon having a diameter of 25 mm. The vidicon storage surface (target) is a selenium-sulfur film with an active area of 11.14 x 11.14 mm; it holds a frame consisting of 800 lines with 800 picture elements per line. Pictures of Jupiter, Saturn, and their moons are presented, with short descriptions given of the areas being viewed.

  18. 1. General view, east end and north side. View to ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. General view, east end and north side. View to southwest toward San Francisco-Oakland Bay Bridge, along general alignment of former Interurban Electric Railway main line. Note truncated catenary support arms on power poles; these originally carried overhead power supply catenary line for the electrically-powered interurban cars. - Interurban Electric Railway Bridge Yard Shop, Interstate 80 at Alameda County Postmile 2.0, Oakland, Alameda County, CA

  19. 3D digital image correlation using a single 3CCD colour camera and dichroic filter

    NASA Astrophysics Data System (ADS)

    Zhong, F. Q.; Shao, X. X.; Quan, C.

    2018-04-01

    In recent years, three-dimensional digital image correlation methods using a single colour camera have been reported. In this study, we propose a simplified system by employing a dichroic filter (DF) to replace the beam splitter and colour filters. The DF can be used to combine two views from different perspectives reflected by two planar mirrors and eliminate their interference. A 3CCD colour camera is then used to capture two different views simultaneously via its blue and red channels. Moreover, the measurement accuracy of the proposed method is higher since the effect of refraction is reduced. Experiments are carried out to verify the effectiveness of the proposed method. It is shown that the interference between the blue and red views is insignificant. In addition, the measurement accuracy of the proposed method is validated on the rigid body displacement. The experimental results demonstrate that the measurement accuracy of the proposed method is higher compared with the reported methods using a single colour camera. Finally, the proposed method is employed to measure the in- and out-of-plane displacements of a loaded plastic board. The re-projection errors of the proposed method are smaller than those of the reported methods using a single colour camera.

  20. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in High-Frame-Rate CCD Camera Having Subwindow Capability (NPO-30564) NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/transistor-logic (TTL)-level signals from a field programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).
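
    The per-ROI readout model can be caricatured in a few lines: each frame, only the commanded subwindows are read out, and their sizes and locations can change at frame rate to follow targets. This toy sketch (data structures assumed for illustration) shows the idea, not the camera's FPGA logic.

    ```python
    # Toy model of per-ROI readout with frame-rate retargeting.
    import numpy as np

    def read_rois(sensor_frame, rois):
        """rois: list of (x, y, w, h) subwindows; returns only their pixels."""
        return [sensor_frame[y:y + h, x:x + w].copy()
                for (x, y, w, h) in rois]

    def retarget(rois, detections):
        """Re-center each ROI on its tracked target before the next frame."""
        return [(cx - w // 2, cy - h // 2, w, h)
                for (_x, _y, w, h), (cx, cy) in zip(rois, detections)]
    ```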

  1. Differences in glance behavior between drivers using a rearview camera, parking sensor system, both technologies, or no technology during low-speed parking maneuvers.

    PubMed

    Kidd, David G; McCartt, Anne T

    2016-02-01

    This study characterized the use of various fields of view during low-speed parking maneuvers by drivers with a rearview camera, a sensor system, a camera and sensor system combined, or neither technology. Participants performed four different low-speed parking maneuvers five times. Glances to different fields of view the second time through the four maneuvers were coded, along with the glance locations at the onset of the audible warning from the sensor system and immediately after the warning for participants in the sensor and camera-plus-sensor conditions. Overall, the results suggest that information from cameras and/or sensor systems is used in place of mirrors and shoulder glances. Participants with a camera, sensor system, or both technologies looked over their shoulders significantly less than participants without technology. Participants with cameras (camera and camera-plus-sensor conditions) used their mirrors significantly less compared with participants without cameras (no-technology and sensor conditions). Participants in the camera-plus-sensor condition looked at the center console/camera display for a smaller percentage of the time during the low-speed maneuvers than participants in the camera condition and glanced more frequently at the center console/camera display immediately after the warning from the sensor system compared with the frequency of glances to this location at warning onset. Although this increase was not statistically significant, the pattern suggests that participants in the camera-plus-sensor condition may have used the warning as a cue to look at the camera display. The observed differences in glance behavior between study groups were illustrated by relating them to the visibility of a 12-15-month-old child-size object. These findings provide evidence that drivers adapt their glance behavior during low-speed parking maneuvers following extended use of rearview cameras and parking sensors, and suggest that other technologies which augment the driving task may do the same. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Homography-based multiple-camera person-tracking

    NASA Astrophysics Data System (ADS)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of live targets for training. No calibration is required. Testing shows that the algorithm performs very well in real-world sequences. The consistent labelling problem is solved, even for targets that appear via in-scene entrances. Full occlusions are handled. Although implemented in Matlab, the multiple-camera tracking system runs at eight frames per second. A faster implementation would be suitable for real-world use at typical video frame rates.
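
    The core of the consistent-labelling step is the plane-induced homography estimated from the corresponding feet points that targets "drop" while walking through two overlapping views. As a hedged sketch (OpenCV standing in for the paper's Matlab implementation, with names and thresholds assumed):

    ```python
    # Sketch: ground-plane homography from corresponding feet locations.
    import numpy as np
    import cv2

    def ground_homography(feet_cam1, feet_cam2):
        """feet_camX: (N, 2) corresponding feet image points, N >= 4."""
        H, mask = cv2.findHomography(np.float32(feet_cam1),
                                     np.float32(feet_cam2),
                                     cv2.RANSAC, 3.0)
        return H   # None if the point configuration is degenerate

    def transfer_point(H, pt):
        """Map a feet location from camera 1 into camera 2's image plane."""
        v = H @ np.array([pt[0], pt[1], 1.0])
        return v[:2] / v[2]   # same label is assigned to both projections
    ```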

  3. Cameras on the moon with Apollos 15 and 16.

    NASA Technical Reports Server (NTRS)

    Page, T.

    1972-01-01

    Description of the cameras used for photography and television by Apollo 15 and 16 missions, covering a hand-held Hasselblad camera for black and white panoramic views at locations visited by the astronauts, a special stereoscopic camera designed by astronomer Tom Gold, a 16-mm movie camera used on the Apollo 15 and 16 Rovers, and several TV cameras. Details are given on the far-UV camera/spectrograph of the Apollo 16 mission. An electronographic camera converts UV light to electrons which are ejected by a KBr layer at the focus of an f/1 Schmidt camera and darken photographic films much more efficiently than far-UV. The astronomical activity of the Apollo 16 astronauts on the moon, using this equipment, is discussed.

  4. 4. VIEW OF SOUTH SIDE OF TIPPLE AND CLEANING PLANT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. VIEW OF SOUTH SIDE OF TIPPLE AND CLEANING PLANT LOOKING NORTH. TO LEFT IS CAR REPAIR SHOP AND POWER PLANT. - Eureka No. 40, Tipple & Cleaning Plant, East of State Route 56, north of Little Paint Creek, Scalp Level, Cambria County, PA

  5. Single-Camera-Based Method for Step Length Symmetry Measurement in Unconstrained Elderly Home Monitoring.

    PubMed

    Cai, Xi; Han, Guang; Song, Xin; Wang, Jinkuan

    2017-11-01

    Single-camera-based gait monitoring is unobtrusive, inexpensive, and easy to use for monitoring the daily gait of seniors in their homes. However, most studies require subjects to walk perpendicularly to the camera's optical axis or along specified routes, which limits its application in elderly home monitoring. To build unconstrained monitoring environments, we propose a method to measure the step length symmetry ratio (a useful gait parameter representing gait symmetry without a significant relationship with age) from unconstrained straight walking using a single camera, without strict restrictions on walking directions or routes. According to projective geometry theory, we first develop a calculation formula of the step length ratio for the case of unconstrained straight-line walking. Then, to adapt to general cases, we propose to modify noncollinear footprints, and accordingly provide a general procedure for step length ratio extraction from unconstrained straight walking. Our method achieves a mean absolute percentage error (MAPE) of 1.9547% for 15 subjects' normal and abnormal side-view gaits, and also obtains satisfactory MAPEs for non-side-view gaits (2.4026% for 45°-view gaits and 3.9721% for 30°-view gaits). The performance is much better than that of a well-established monocular gait measurement system suitable only for side-view gaits, which has a MAPE of 3.5538%. Independently of walking directions, our method can accurately estimate step length ratios from unconstrained straight walking. This demonstrates that our method is applicable for elders' daily gait monitoring to provide valuable information for elderly health care, such as abnormal gait recognition, fall risk assessment, etc.

  6. Pedestrian injury mitigation by autonomous braking.

    PubMed

    Rosén, Erik; Källhammer, Jan-Erik; Eriksson, Dick; Nentwich, Matthias; Fredriksson, Rikard; Smith, Kip

    2010-11-01

    The objective of this study was to calculate the potential effectiveness of a pedestrian injury mitigation system that autonomously brakes the car prior to impact. The effectiveness was measured by the reduction of fatally and severely injured pedestrians. The database from the German In-Depth Accident Study (GIDAS) was queried for pedestrians hit by the front of cars from 1999 to 2007. Case-by-case information on vehicle and pedestrian velocities and trajectories was analysed to estimate the field of view needed for a vehicle-based sensor to detect the pedestrians one second prior to the crash. The pre-impact braking system was assumed to activate the brakes one second prior to the crash and to provide a braking deceleration up to the limit of the road surface conditions, but never to exceed 0.6 g. New impact speeds were then calculated for pedestrians that would have been detected by the sensor. These calculations assumed that all pedestrians who were within a given field of view but not obstructed by surrounding objects would be detected. The changes in fatality and severe injury risks were quantified using risk curves derived by logistic regression of the accident data. Summing the risks for all pedestrians, relationships between mitigation effectiveness, sensor field of view, braking initiation time, and deceleration were established. The study documents that the effectiveness at reducing fatally (severely) injured pedestrians in frontal collisions with cars reached 40% (27%) at a field of view of 40 degrees. Increasing the field of view further led to only marginal improvements in effectiveness. 2010 Elsevier Ltd. All rights reserved.
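
    The braking model in the abstract lends itself to a small worked example: braking starts one second before the otherwise-unavoided impact, with deceleration capped at 0.6 g (or the road-surface limit, if lower). The numbers below are illustrative, not cases from the GIDAS data.

    ```python
    # Worked example of the pre-impact braking model (illustrative values).
    G = 9.81   # m/s^2

    def new_impact_speed(v0_ms, t_before=1.0, mu_road=1.0):
        """Impact speed after braking starts t_before seconds pre-crash."""
        a = min(0.6, mu_road) * G        # deceleration never exceeds 0.6 g
        return max(0.0, v0_ms - a * t_before)

    v0 = 50 / 3.6                         # 50 km/h in m/s
    print(new_impact_speed(v0) * 3.6)     # ~28.8 km/h remaining at impact
    ```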

  7. Harbour surveillance with cameras calibrated with AIS data

    NASA Astrophysics Data System (ADS)

    Palmieri, F. A. N.; Castaldo, F.; Marino, G.

    The inexpensive availability of surveillance cameras, easily connected in network configurations, suggests the deployment of this additional sensor modality in port surveillance. Vessels appearing within the cameras' fields of view can be recognized and localized, providing to fusion centers information that can be added to data coming from Radar, Lidar, AIS, etc. Camera systems that are used as localizers, however, must be properly calibrated in changing scenarios where there is often limited choice of the positions in which they are deployed. Automatic Identification System (AIS) data, which include position, course, and vessel identity, are freely available through inexpensive receivers for some of the vessels appearing within the field of view, and provide the opportunity to achieve proper camera calibration to be used for the localization of vessels not equipped with AIS transponders. In this paper we assume a pinhole model for the camera geometry and propose computing perspective matrices using AIS positional data. Images obtained from calibrated cameras are then matched, and pixel association is utilized for the localization of other vessels. We report preliminary experimental results of calibration and localization using two cameras deployed on the Gulf of Naples coastline. The two cameras overlook a section of the harbour and record short video sequences that are synchronized offline with AIS positional information of easily identified passenger ships. Other small vessels, not equipped with AIS transponders, are localized using the camera matrices and pixel matching. Localization accuracy is experimentally evaluated as a function of target distance from the sensors.
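
    Under the pinhole assumption, AIS reports supply 3D world positions for recognizable ships, and their pixel locations supply matching 2D points, so a perspective (projection) matrix can be estimated with a Direct Linear Transform. A minimal sketch of that standard estimator follows; the paper's exact computation is not reproduced here.

    ```python
    # DLT estimate of a 3x4 projection matrix from 3D-2D correspondences.
    import numpy as np

    def dlt_projection_matrix(world_pts, image_pts):
        """world_pts: (N, 3) AIS positions; image_pts: (N, 2) pixels; N >= 6."""
        rows = []
        for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
            rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
            rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
        A = np.asarray(rows, dtype=float)
        _, _, Vt = np.linalg.svd(A)
        return Vt[-1].reshape(3, 4)   # null vector = flattened camera matrix
    ```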

  8. Pre-flight and On-orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Speyerer, E. J.; Wagner, R. V.; Robinson, M. S.; Licht, A.; Thomas, P. C.; Becker, K.; Anderson, J.; Brylow, S. M.; Humm, D. C.; Tschimmel, M.

    2016-04-01

    The Lunar Reconnaissance Orbiter Camera (LROC) consists of two imaging systems that provide multispectral and high resolution imaging of the lunar surface. The Wide Angle Camera (WAC) is a seven color push-frame imager with a 90° field of view in monochrome mode and 60° field of view in color mode. From the nominal 50 km polar orbit, the WAC acquires images with a nadir ground sampling distance of 75 m for each of the five visible bands and 384 m for the two ultraviolet bands. The Narrow Angle Camera (NAC) consists of two identical cameras capable of acquiring images with a ground sampling distance of 0.5 m from an altitude of 50 km. The LROC team geometrically calibrated each camera before launch at Malin Space Science Systems in San Diego, California and the resulting measurements enabled the generation of a detailed camera model for all three cameras. The cameras were mounted and subsequently launched on the Lunar Reconnaissance Orbiter (LRO) on 18 June 2009. Using a subset of the over 793000 NAC and 207000 WAC images of illuminated terrain collected between 30 June 2009 and 15 December 2013, we improved the interior and exterior orientation parameters for each camera, including the addition of a wavelength dependent radial distortion model for the multispectral WAC. These geometric refinements, along with refined ephemeris, enable seamless projections of NAC image pairs with a geodetic accuracy better than 20 meters and sub-pixel precision and accuracy when orthorectifying WAC images.

  9. Nonlinear CARS measurement of nitrogen vibrational and rotational temperatures behind hypervelocity strong shock wave

    NASA Astrophysics Data System (ADS)

    Osada, Takashi; Endo, Youichi; Kanazawa, Chikara; Ota, Masanori; Maeno, Kazuo

    2009-02-01

    Hypervelocity strong shock waves are generated when space vehicles reenter the atmosphere from space. Behind the shock wave, a radiative and non-equilibrium flow is generated in front of the surface of the space vehicle. Many studies have been reported that investigate these phenomena for aerospace exploration and reentry. The resulting information and data on high-temperature flows have supported the rational heat-protection design of space vehicles. Recent developments in measurement techniques with laser systems and photo-electronics now enable us to investigate hypervelocity phenomena with greatly improved accuracy. In this research, strong shock waves are generated in low-density gas to simulate reentry-range gas flow with a free-piston double-diaphragm shock tube, and the CARS (Coherent Anti-Stokes Raman Spectroscopy) measurement method is applied to the hypervelocity flows behind the shock waves, where spectral signals of high space/time resolution are acquired. The CARS system consists of YAG and dye lasers, a spectroscope, and a CCD camera system. We obtain CARS signal spectrum data in this time-resolved experiment, and the vibrational and rotational temperatures of N2 are determined by fitting the experimental spectroscopic profile data to theoretically estimated spectroscopic data.

  10. Game theoretic approach for cooperative feature extraction in camera networks

    NASA Astrophysics Data System (ADS)

    Redondi, Alessandro E. C.; Baroffio, Luca; Cesana, Matteo; Tagliasacchi, Marco

    2016-07-01

    Visual sensor networks (VSNs) consist of several camera nodes with wireless communication capabilities that can perform visual analysis tasks such as object identification, recognition, and tracking. Often, VSN deployments result in many camera nodes with overlapping fields of view. In the past, such redundancy has been exploited in two different ways: (1) to improve the accuracy/quality of the visual analysis task by exploiting multiview information or (2) to reduce the energy consumed for performing the visual task, by applying temporal scheduling techniques among the cameras. We propose a game theoretic framework based on the Nash bargaining solution to bridge the gap between the two aforementioned approaches. The key tenet of the proposed framework is for cameras to reduce the consumed energy in the analysis process by exploiting the redundancy in the reciprocal fields of view. Experimental results in both simulated and real-life scenarios confirm that the proposed scheme is able to increase the network lifetime, with a negligible loss in terms of visual analysis accuracy.
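
    As a toy illustration of the Nash bargaining idea (not the paper's actual utility model), consider two cameras with a fully shared field of view that split the fraction of the overlap each one processes. With energy savings as utilities and "both process everything" as the disagreement point, the bargaining solution maximizes the product of the gains:

        import numpy as np

        def nash_bargaining_split(cost1, cost2, steps=1001):
            """cost1, cost2: energy each camera would spend processing
            the whole overlap alone. Camera 1 processes fraction a,
            camera 2 the remaining (1 - a); savings are the utilities."""
            best_a, best_prod = 0.0, -1.0
            for a in np.linspace(0.0, 1.0, steps):
                gain1 = cost1 * (1.0 - a)   # energy camera 1 saves
                gain2 = cost2 * a           # energy camera 2 saves
                prod = gain1 * gain2        # Nash product (disagreement = 0)
                if prod > best_prod:
                    best_a, best_prod = a, prod
            return best_a

    In this symmetric toy model the optimum is an even split (a = 0.5) regardless of the costs; the paper's framework additionally has to weigh the loss in visual analysis accuracy, which is what makes the bargaining non-trivial.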

  11. Evaluation of a stereoscopic camera-based three-dimensional viewing workstation for ophthalmic surgery.

    PubMed

    Bhadri, Prashant R; Rowley, Adrian P; Khurana, Rahul N; Deboer, Charles M; Kerns, Ralph M; Chong, Lawrence P; Humayun, Mark S

    2007-05-01

    To evaluate the effectiveness of a prototype stereoscopic camera-based viewing system (Digital Microsurgical Workstation, three-dimensional (3D) Vision Systems, Irvine, California, USA) for anterior and posterior segment ophthalmic surgery. Institution-based prospective study. Anterior and posterior segment surgeons performed designated standardized tasks on porcine eyes after training on prosthetic plastic eyes. Both anterior and posterior segment surgeons were able to complete tasks requiring minimal or moderate stereoscopic viewing. The results indicate that the system provides improved ergonomics. Improvements in key viewing performance areas would further enhance its value over a conventional operating microscope. The performance of the prototype system is not on par with the planned commercial system. With continued development of this technology, the three-dimensional system may become a novel viewing system in ophthalmic surgery with improved ergonomics relative to traditional microscopic viewing.

  12. HST Solar Arrays photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-04

    S61-E-002 (4 Dec 1993) --- This view, backdropped against the blackness of space, shows one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST). The scene was photographed from inside Endeavour's cabin with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view features the minus V-2 panel. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  13. HST Solar Arrays photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-04

    S61-E-003 (4 Dec 1993) --- This medium close-up view of one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view shows the cell side of the minus V-2 panel. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  14. Enhanced Early View of Ceres from Dawn

    NASA Image and Video Library

    2014-12-05

    As the Dawn spacecraft flies through space toward the dwarf planet Ceres, the unexplored world appears to its camera as a bright light in the distance, full of possibility for scientific discovery. This view was acquired as part of a final calibration of the science camera before Dawn's arrival at Ceres. To accomplish this, the camera needed to take pictures of a target that appears just a few pixels across. On Dec. 1, 2014, Ceres was about nine pixels in diameter, nearly perfect for this calibration. The images provide data on very subtle optical properties of the camera that scientists will use when they analyze and interpret the details of some of the pictures returned from orbit. Ceres is the bright spot in the center of the image. Because the dwarf planet is much brighter than the stars in the background, the camera team selected a long exposure time to make the stars visible. The long exposure made Ceres appear overexposed, and exaggerated its size; this was corrected by superimposing a shorter exposure of the dwarf planet in the center of the image. A cropped, magnified view of Ceres appears in the inset image at lower left. The image was taken on Dec. 1, 2014 with the Dawn spacecraft's framing camera, using a clear spectral filter. Dawn was about 740,000 miles (1.2 million kilometers) from Ceres at the time. Ceres is 590 miles (950 kilometers) across and was discovered in 1801. http://photojournal.jpl.nasa.gov/catalog/PIA19050

  15. IET. Aerial view of project, 95 percent complete. Camera facing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    IET. Aerial view of project, 95 percent complete. Camera facing east. Left to right: stack, duct, mobile test cell building (TAN-624), four-rail track, dolly. Retaining wall between mobile test building and shielded control building (TAN-620) just beyond. North of control building are tank building (TAN-627) and fuel-transfer pump building (TAN-625). Guard house at upper right along exclusion fence. Construction vehicles and temporary warehouse in view near guard house. Date: June 6, 1955. INEEL negative no. 55-1462 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  16. Teleoperated Marsupial Mobile Sensor Platform Pair for Telepresence Insertion Into Challenging Structures

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J.; Prokop, Norman F.; Greer, Lawrence C.

    2011-01-01

    A platform has been developed for two or more vehicles with one or more residing within the other (a marsupial pair). This configuration consists of a large, versatile robot carrying a smaller, more specialized autonomously operating robot(s) and/or mobile repeaters for extended transmission. The larger vehicle, which is equipped with a ramp and/or a robotic arm, is used to operate over more challenging topography than the smaller one(s), which may have a more limited inspection area to traverse. The intended use of this concept is to facilitate the insertion of a small video camera and sensor platform into a difficult entry area. In a terrestrial application, this may be a bus or a subway car with narrow aisles or steep stairs. The first field-tested configuration is a tracked vehicle bearing a rigid ramp of fixed length and width. A smaller six-wheeled vehicle approximately 10 in. (25 cm) wide by 12 in. (30 cm) long resides at the end of the ramp within the larger vehicle. The ramp extends from the larger vehicle and is tipped up into the air. Using video feedback from a camera atop the larger robot, the operator at a remote location can steer the larger vehicle to the bus door. Once positioned at the door, the operator can switch video feedback to a camera at the end of the ramp to facilitate the mating of the end of the ramp to the top landing at the upper terminus of the steps. The ramp can be lowered by remote control until its end is in contact with the top landing. At the same time, the end of the ramp bearing the smaller vehicle is raised to minimize the angle of the slope the smaller vehicle has to climb, and further gives the operator a better view of the entry to the bus from the smaller vehicle. Control is passed over to the smaller vehicle and, using video feedback from its camera, it is driven up the ramp, turned obliquely into the bus, and then sent down the aisle for surveillance. The demonstrated vehicle was used to scale the steps leading to the interior of a bus whose landing is 44 in. (~1.1 m) from the road surface. This vehicle can position the end of its ramp to a surface over 50 in. (~1.3 m) above ground level and can drive over rail heights exceeding 6 in. (~15 cm). Thus configured, this vehicle can conceivably deliver the smaller robot to the end platform of New York City subway cars from between the rails. This innovation is scalable to other formulations for size, mobility, and surveillance functions. Conceivably the larger vehicle can be configured to traverse unstable rubble and debris to transport a smaller search-and-rescue vehicle as close as possible to the scene of a disaster such as a collapsed building. The smaller vehicle, tethered or otherwise, and capable of penetrating and traversing the confined spaces in the collapsed structure, can transport imaging and other sensors to look for victims or other targets.

  17. Jig For Stereoscopic Photography

    NASA Technical Reports Server (NTRS)

    Nielsen, David J.

    1990-01-01

    Separations between views adjusted precisely for best results. Simple jig adjusted to set precisely the distance between right and left positions of camera used to make stereoscopic photographs. Camera slides in slot between extreme positions, where it takes stereoscopic pictures. Distance between extreme positions set reproducibly with micrometer. In view of trend toward very-large-scale integration of electronic circuits, training method and jig used to make training photographs useful to many companies to reduce cost of training manufacturing personnel.

  18. LOFT complex, camera facing west. Mobile entry (TAN624) is position ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LOFT complex, camera facing west. Mobile entry (TAN-624) is positioned next to containment building (TAN-650). Shielded roadway entrance in view just below and to right of stack. Borated water tank has been covered with weather shelter and is no longer visible. ANP hangar (TAN-629) in view beyond LOFT. Date: 1974. INEEL negative no. 74-4191 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  19. Various views of STS-95 Senator John Glenn during training

    NASA Image and Video Library

    1998-06-18

    S98-08733 (9 April 1998) --- Looking through the viewfinder on a camera, U.S. Sen. John H. Glenn Jr. (D.-Ohio) gets a refresher course in photography from a JSC crew trainer (out of frame, right). The STS-95 payload specialist carried a 35mm camera on his historic MA-6 flight over 36 years ago. The photo was taken by Joe McNally, National Geographic, for NASA.

  20. ETR HEAT EXCHANGER BUILDING, TRA644. SOUTH SIDE. CAMERA FACING NORTH. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    ETR HEAT EXCHANGER BUILDING, TRA-644. SOUTH SIDE. CAMERA FACING NORTH. NOTE POURED CONCRETE WALLS. ETR IS AT LEFT OF VIEW. NOTE DRIVEWAY INSET AT RIGHT FORMED BY DEMINERALIZER WING AT RIGHT. SOUTHEAST CORNER OF ETR, TRA-642, IN VIEW AT UPPER LEFT. INL NEGATIVE NO. HD46-36-1. Mike Crane, Photographer, 4/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  1. At Base of 'Burns Cliff'

    NASA Image and Video Library

    2004-11-11

    NASA's Mars Exploration Rover Opportunity captured this view from the base of "Burns Cliff" during the rover's 280th martian day (Nov. 6, 2004). This cliff in the inner wall of "Endurance Crater" displays multiple layers of bedrock for the rover to examine with its panoramic camera and miniature thermal emission spectrometer. The rover team has decided that the farthest Opportunity can safely advance along the base of the cliff is close to the squarish white rock near the center of this image. After examining the site for a few days from that position, the rover will turn around and head out of the crater. The view is a mosaic of frames taken by Opportunity's navigation camera. The rover was on ground with a slope of about 30 degrees when the pictures were taken, and the view is presented here in a way that corrects for that tilt of the camera. http://photojournal.jpl.nasa.gov/catalog/PIA07039

  2. The Last Meter: Blind Visual Guidance to a Target.

    PubMed

    Manduchi, Roberto; Coughlan, James M

    2014-01-01

    Smartphone apps can use object recognition software to provide information to blind or low vision users about objects in the visual environment. A crucial challenge for these users is aiming the camera properly to take a well-framed picture of the desired target object. We investigate the effects of two fundamental constraints of object recognition - frame rate and camera field of view - on a blind person's ability to use an object recognition smartphone app. The app was used by 18 blind participants to find visual targets beyond arm's reach and approach them to within 30 cm. While we expected that a faster frame rate or wider camera field of view should always improve search performance, our experimental results show that in many cases increasing the field of view does not help, and may even hurt, performance. These results have important implications for the design of object recognition systems for blind users.

  3. 16. VIEW OF ROBERT VOGEL, CURATOR, DIVISION OF MECHANICAL & ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    16. VIEW OF ROBERT VOGEL, CURATOR, DIVISION OF MECHANICAL & CIVIL ENGINEER, NATIONAL MUSEUM OF AMERICAN HISTORY, SMITHSONIAN INSTITUTION, SITTING IN ELEVATOR CAR. MR. VOGEL IS RESPONSIBLE FOR THE RELOCATION OF THE ELEVATOR TO THE SMITHSONIAN INSTITUTION - 72 Marlborough Street, Residential Hydraulic Elevator, Boston, Suffolk County, MA

  4. 12. Credit PED. View of tail race and dam showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. Credit PED. View of tail race and dam showing dumping of construction rubble into river bed by rail car; and preparations for pouring a concrete cap onto tail race wall. Photo c. 1909. - Dam No. 4 Hydroelectric Plant, Potomac River, Martinsburg, Berkeley County, WV

  5. Lights, Camera, AG-Tion: Promoting Agricultural and Environmental Education on Camera

    ERIC Educational Resources Information Center

    Fuhrman, Nicholas E.

    2016-01-01

    Viewing of online videos and television segments has become a popular and efficient way for Extension audiences to acquire information. This article describes a unique approach to teaching on camera that may help Extension educators communicate their messages with comfort and personality. The S.A.L.A.D. approach emphasizes using relevant teaching…

  6. HST Solar Arrays photographed by Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This close-up view of one of two Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  7. Bird's-Eye View of Opportunity at 'Erebus' (Polar)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This view combines frames taken by the panoramic camera (Pancam) on NASA's Mars Exploration Rover Opportunity on the rover's 652nd through 663rd Martian days, or sols (Nov. 23 to Dec. 5, 2005), at the edge of 'Erebus Crater.' The mosaic is presented as a polar projection. This type of projection provides a kind of overhead view of all of the surrounding terrain. Opportunity examined targets on the outcrop called 'Rimrock' in front of the rover, testing the mobility and operation of Opportunity's robotic arm. The view shows examples of the dunes and ripples that Opportunity has been crossing as the rover drives on the Meridiani plains.

    This view is an approximate true color rendering composed of images taken through the camera's 750-nanometer, 530-nanometer and 430-nanometer filters.

  8. Detection of pointing errors with CMOS-based camera in intersatellite optical communications

    NASA Astrophysics Data System (ADS)

    Yu, Si-yuan; Ma, Jing; Tan, Li-ying

    2005-01-01

    For very high data rates, intersatellite optical communications hold a potential performance edge over microwave communications. The acquisition and tracking problem is critical because of the narrow transmit beam. In some systems a single array detector performs both spatial acquisition and tracking functions to detect pointing errors, so both a wide field of view and a high update rate are required. Past systems tended to employ CCD-based cameras with complex readout arrangements, but the additional complexity reduces the applicability of the array-based tracking concept. With the development of CMOS arrays, CMOS-based cameras can realize the single-array-detector concept. The area-of-interest feature of a CMOS-based camera allows a PAT system to read out only a portion of the array, and the maximum allowed frame rate increases as the size of the area of interest decreases, under certain conditions. A commercially available CMOS camera with 105 fps @ 640×480 is employed in our PAT simulation system, in which only part of the pixels are in fact used. Beam angles varying within the field of view can be detected after passing through a Cassegrain telescope and an optical focusing system. Spot pixel values (8 bits per pixel) read out from the CMOS sensor are transmitted to a DSP subsystem via an IEEE 1394 bus, and pointing errors can be computed with the centroid equation. Tests showed that: (1) 500 fps @ 100×100 is available in acquisition when the field of view is 1 mrad; (2) 3k fps @ 10×10 is available in tracking when the field of view is 0.1 mrad.
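
    The centroid computation referred to above is simple to state; a sketch over an area-of-interest readout follows, with function names and the plate-scale parameter being illustrative assumptions:

        import numpy as np

        def spot_centroid(roi):
            """Intensity-weighted centroid of the beam spot within a
            small area-of-interest readout (2-D array of pixel values)."""
            roi = roi.astype(float)
            total = roi.sum()
            ys, xs = np.indices(roi.shape)
            return (xs * roi).sum() / total, (ys * roi).sum() / total

        def pointing_error(roi, boresight_xy, rad_per_pixel):
            """Angular pointing error: centroid offset from the boresight
            pixel times the per-pixel angular scale of the optics."""
            cx, cy = spot_centroid(roi)
            return ((cx - boresight_xy[0]) * rad_per_pixel,
                    (cy - boresight_xy[1]) * rad_per_pixel)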

  9. Immersive viewing engine

    NASA Astrophysics Data System (ADS)

    Schonlau, William J.

    2006-05-01

    An immersive viewing engine providing basic telepresence functionality for a variety of application types is presented. Augmented reality, teleoperation and virtual reality applications all benefit from the use of head-mounted display devices that present imagery appropriate to the user's head orientation at full frame rates. Our primary application is the viewing of remote environments, as with a camera-equipped teleoperated vehicle. The conventional approach, where imagery from a narrow-field camera onboard the vehicle is presented to the user on a small rectangular screen, is contrasted with an immersive viewing system in which a cylindrical or spherical format image is received from a panoramic camera on the vehicle, resampled in response to the sensed user head orientation, and presented via a wide-field eyewear display approaching 180 degrees of horizontal field. Of primary interest is the user's enhanced ability to perceive and understand image content, even when image resolution parameters are poor, owing to the innate visual integration and 3-D model generation capabilities of the human visual system. A mathematical model for tracking user head position and resampling the panoramic image to attain distortion-free viewing of the region appropriate to the user's current head pose is presented, and consideration is given to providing the user with stereo viewing generated from depth-map information derived using stereo-from-motion algorithms.
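
    A minimal sketch of the core resampling step, assuming an equirectangular panorama and a yaw/pitch head pose; the paper's full model also treats display distortion and stereo synthesis, which are omitted here, and nearest-neighbour lookup keeps the example short.

        import numpy as np

        def view_from_panorama(pano, yaw, pitch, fov, out_w=640, out_h=480):
            """Extract a perspective view from an equirectangular panorama
            for the sensed head orientation (yaw, pitch, fov in radians)."""
            H, W = pano.shape[:2]
            f = 0.5 * out_w / np.tan(0.5 * fov)    # focal length in pixels
            xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2,
                                 np.arange(out_h) - out_h / 2)
            # Ray direction per output pixel, rotated by the head pose.
            d = np.stack([xs, ys, np.full(xs.shape, f)], axis=-1)
            d = d / np.linalg.norm(d, axis=-1, keepdims=True)
            cp, sp = np.cos(pitch), np.sin(pitch)
            cy, sy = np.cos(yaw), np.sin(yaw)
            Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
            Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
            d = d @ (Ry @ Rx).T
            lon = np.arctan2(d[..., 0], d[..., 2])      # -pi .. pi
            lat = np.arcsin(np.clip(d[..., 1], -1, 1))  # -pi/2 .. pi/2
            u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
            v = ((lat / np.pi + 0.5) * (H - 1)).astype(int)
            return pano[v, u]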

  10. Morning view, contextual view of the exterior west side of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Morning view, contextual view of the exterior west side of the north wall along the unpaved road; camera facing west, positioned in road approximately 8 posts west of the gate. - Beaufort National Cemetery, Wall, 1601 Boundary Street, Beaufort, Beaufort County, SC

  11. Mesoscale Lightning Experiment (MLE): A view of lightning as seen from space during the STS-26 mission

    NASA Technical Reports Server (NTRS)

    Vaughan, O. H., Jr.

    1990-01-01

    Information on the data obtained from the Mesoscale Lightning Experiment flown on STS-26 is provided. The experiment used onboard TV cameras and a 35 mm film camera to obtain data. Data from the 35 mm camera are presented. During the mission, the crew had difficulty locating the various targets of opportunity with the TV cameras. To obtain as much data as possible in the short observational timeline allowed due to other commitments, the crew opted to use the hand-held 35 mm camera.

  12. 3D Surface Reconstruction and Automatic Camera Calibration

    NASA Technical Reports Server (NTRS)

    Jalobeanu, Andre

    2004-01-01

    This viewgraph presentation illustrates a Bayesian approach to 3D surface reconstruction and camera calibration. Existing methods, surface analysis and modeling, preliminary surface reconstruction results, and potential applications are addressed.

  13. Spheres of Earth: An Introduction to Making Observations of Earth Using an Earth System's Science Approach. Student Guide

    NASA Technical Reports Server (NTRS)

    Graff, Paige Valderrama; Baker, Marshalyn (Editor); Graff, Trevor (Editor); Lindgren, Charlie (Editor); Mailhot, Michele (Editor); McCollum, Tim (Editor); Runco, Susan (Editor); Stefanov, William (Editor); Willis, Kim (Editor)

    2010-01-01

    Scientists from the Image Science and Analysis Laboratory (ISAL) at NASA's Johnson Space Center (JSC) work with astronauts onboard the International Space Station (ISS) who take images of Earth. Astronaut photographs, sometimes referred to as Crew Earth Observations, are taken using hand-held digital cameras onboard the ISS. These digital images allow scientists to study our Earth from the unique perspective of space. Astronauts have taken images of Earth since the 1960s. There is a database of over 900,000 astronaut photographs available at http://eol.jsc.nasa.gov . Images are requested by ISAL scientists at JSC and astronauts in space personally frame and acquire them from the Destiny Laboratory or other windows in the ISS. By having astronauts take images, they can specifically frame them according to a given request and need. For example, they can choose to use different lenses to vary the amount of area (field of view) an image will cover. Images can be taken at different times of the day, which allows different lighting conditions to bring out or highlight certain features. The viewing angle at which an image is acquired can also be varied to show the same area from different perspectives. Pointing the camera straight down gives you a nadir shot. Pointing the camera at an angle to get a view across an area would be considered an oblique shot. Being able to change these variables makes astronaut photographs a unique and useful data set. Astronaut photographs are taken from the ISS from altitudes of 300 - 400 km (185 to 250 miles). One of the current cameras being used, the Nikon D3X digital camera, can take images using a 50, 100, 250, 400 or 800mm lens. These different lenses allow for a wider or narrower field of view. The higher the focal length (800mm for example) the narrower the field of view (less area will be covered). Higher focal lengths also show greater detail of the area on the surface being imaged. There are four major systems or spheres of Earth. They are: Atmosphere, Biosphere, Hydrosphere, and Litho/Geosphere.
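
    The focal-length/field-of-view trade-off described above follows from simple lens geometry; a small sketch for a full-frame body such as the Nikon D3X, with the sensor width assumed to be about 36 mm:

        import math

        def horizontal_fov_deg(focal_length_mm, sensor_width_mm=36.0):
            """Angular field of view: FOV = 2 * atan(w / (2 * f))."""
            return math.degrees(2 * math.atan(sensor_width_mm /
                                              (2 * focal_length_mm)))

        for f in (50, 100, 250, 400, 800):
            print(f"{f:4d} mm lens -> {horizontal_fov_deg(f):4.1f} deg")

    The 800 mm lens covers roughly 2.6 degrees horizontally versus about 40 degrees for the 50 mm lens, which is why it shows far more surface detail over far less area.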

  14. Comparison of the effectiveness of three retinal camera technologies for malarial retinopathy detection in Malawi

    NASA Astrophysics Data System (ADS)

    Soliz, Peter; Nemeth, Sheila C.; Barriga, E. Simon; Harding, Simon P.; Lewallen, Susan; Taylor, Terrie E.; MacCormick, Ian J.; Joshi, Vinayak S.

    2016-03-01

    The purpose of this study was to test the suitability of three available camera technologies (desktop, portable, and iPhone-based) for imaging comatose children who presented with clinical symptoms of malaria. Ultimately, the results of the project would form the basis for the design of a future camera to screen for malarial retinopathy (MR) in a resource-challenged environment. The desktop, portable, and iPhone-based cameras were represented by the Topcon, Pictor Plus, and Peek cameras, respectively. These cameras were tested on N=23 children presenting with symptoms of cerebral malaria (CM) at a malaria clinic, Queen Elizabeth Teaching Hospital in Malawi, Africa. Each patient was dilated for a binocular indirect ophthalmoscopy (BIO) exam by an ophthalmologist, followed by imaging with all three cameras. Each of the cases was graded according to an internationally established protocol and compared to the BIO as the clinical ground truth. The reader used three principal retinal lesions as markers for MR: hemorrhages, retinal whitening, and vessel discoloration. The study found that the mid-priced Pictor Plus hand-held camera performed considerably better than the lower-priced mobile phone-based camera, and slightly better than the higher-priced desktop camera. When comparing the readings of digital images against the clinical reference standard (BIO), the Pictor Plus camera had a sensitivity and specificity for MR of 100% and 87%, respectively. This compares to a sensitivity and specificity of 87% and 75% for the iPhone-based camera and 100% and 75% for the desktop camera. The drawback of all the cameras was their limited field of view, which did not allow a complete view of the periphery, where vessel discoloration occurs most frequently. The consequence was that vessel discoloration was not addressed in this study. None of the cameras offered real-time image quality assessment to ensure high-quality images and thus the best possible opportunity for reading by a remotely located specialist.
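
    For reference, the sensitivity and specificity figures quoted above follow from the usual confusion-matrix definitions; the counts below are hypothetical, chosen only to produce numbers of the same order, and are not the study's actual data:

        def sensitivity_specificity(tp, fn, tn, fp):
            """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
            with the BIO exam serving as the clinical reference standard."""
            return tp / (tp + fn), tn / (tn + fp)

        # Hypothetical reading of 23 cases (illustrative only):
        sens, spec = sensitivity_specificity(tp=15, fn=0, tn=7, fp=1)
        print(f"sensitivity = {sens:.0%}, specificity = {spec:.1%}")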

  15. Is this car looking at you? How anthropomorphism predicts fusiform face area activation when seeing cars.

    PubMed

    Kühn, Simone; Brick, Timothy R; Müller, Barbara C N; Gallinat, Jürgen

    2014-01-01

    Anthropomorphism encompasses the attribution of human characteristics to non-living objects. In particular, the human tendency to see faces in cars has long been noticed, yet its neural correlates are unknown. We set out to investigate whether the fusiform face area (FFA) is associated with seeing human features in car fronts, or whether the higher-level theory-of-mind (ToM) network, namely the temporoparietal junction (TPJ) and medial prefrontal cortex (MPFC), shows a link to anthropomorphism. Twenty participants underwent fMRI scanning during a passive car-front viewing task. We extracted brain activity from the FFA, TPJ and MPFC. After the fMRI session, participants were asked to spontaneously list adjectives that characterize each car front. Five raters judged the degree to which each adjective can be applied as a characteristic of human beings. By means of linear mixed models we found that the implicit tendency to anthropomorphize individual car fronts predicts FFA, but not TPJ or MPFC, activity. The results point to an important role of the FFA in the phenomenon of ascribing human attributes to non-living objects. Interestingly, brain regions that have been associated with thinking about the beliefs and mental states of others (TPJ, MPFC) do not seem to be related to the anthropomorphism of car fronts.

  16. Is This Car Looking at You? How Anthropomorphism Predicts Fusiform Face Area Activation when Seeing Cars

    PubMed Central

    Kühn, Simone; Brick, Timothy R.; Müller, Barbara C. N.; Gallinat, Jürgen

    2014-01-01

    Anthropomorphism encompasses the attribution of human characteristics to non-living objects. In particular, the human tendency to see faces in cars has long been noticed, yet its neural correlates are unknown. We set out to investigate whether the fusiform face area (FFA) is associated with seeing human features in car fronts, or whether the higher-level theory-of-mind (ToM) network, namely the temporoparietal junction (TPJ) and medial prefrontal cortex (MPFC), shows a link to anthropomorphism. Twenty participants underwent fMRI scanning during a passive car-front viewing task. We extracted brain activity from the FFA, TPJ and MPFC. After the fMRI session, participants were asked to spontaneously list adjectives that characterize each car front. Five raters judged the degree to which each adjective can be applied as a characteristic of human beings. By means of linear mixed models we found that the implicit tendency to anthropomorphize individual car fronts predicts FFA, but not TPJ or MPFC, activity. The results point to an important role of the FFA in the phenomenon of ascribing human attributes to non-living objects. Interestingly, brain regions that have been associated with thinking about the beliefs and mental states of others (TPJ, MPFC) do not seem to be related to the anthropomorphism of car fronts. PMID:25517511

  17. GRACE star camera noise

    NASA Astrophysics Data System (ADS)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to the star camera data that will eliminate a several-arcsec, twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens of arcsec, and that several-to-tens-of-arcsec per-rev star camera errors depend largely on the field of view.

  18. Sensor fusion of range and reflectance data for outdoor scene analysis

    NASA Technical Reports Server (NTRS)

    Kweon, In SO; Hebvert, Martial; Kanade, Takeo

    1988-01-01

    In recognizing objects in an outdoor scene, range and reflectance (or color) data provide complementary information. Results of experiments in recognizing outdoor scenes containing roads, trees, and cars are presented. The recognition program uses range and reflectance data obtained by a scanning laser range finder, as well as color data from a color TV camera. After segmentation of each image into primitive regions, models of objects are matched using various properties.

  19. NDT of railway components using induction thermography

    NASA Astrophysics Data System (ADS)

    Netzelmann, U.; Walle, G.; Ehlen, A.; Lugin, S.; Finckbohner, M.; Bessert, S.

    2016-02-01

    Induction or eddy current thermography is used to detect surface cracks in ferritic steel. The technique is applied to detect surface cracks in rails from a moving test car. Cracks were detected at a train speed between 2 and 15 km/h. An automated demonstrator system for testing railway wheels after production is described. While the wheel is rotated, a robot guides the detection unit consisting of inductor and infrared camera over the surface.

  20. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-08-31

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground-truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground-truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of a user's head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest.
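
    The depth-of-field side of that lens trade-off can be reasoned about with the standard thin-lens formulas; the sketch below uses assumed parameter values (focal length, f-number, circle of confusion), not figures from the paper:

        def depth_of_field_mm(f_mm, f_number, focus_dist_mm, coc_mm=0.015):
            """Near/far limits of acceptable focus for a thin-lens model.
            coc: circle of confusion, an assumed value for a small-sensor
            NIR gaze camera."""
            H = f_mm ** 2 / (f_number * coc_mm) + f_mm   # hyperfocal distance
            s = focus_dist_mm
            near = s * (H - f_mm) / (H + s - 2 * f_mm)
            far = float("inf") if s >= H else s * (H - f_mm) / (H - s)
            return near, far

        # Example: a 12 mm lens at f/2 focused at 700 mm.
        print(depth_of_field_mm(12, 2.0, 700))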

  1. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement

    PubMed Central

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground-truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground-truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of a user's head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest. PMID:27589768

  2. 3. VIEW OF THE PROCESSING PLANT TO THE WEST. THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. VIEW OF THE PROCESSING PLANT TO THE WEST. THE PROCESSING PLANT IS LEFT CENTER. THE BLACKSMITH/MINE CAR REPAIR SHOP AND PARTS/SUPPLIES BUILDINGS ARE RIGHT CENTER. - Smith Mine, Bear Creek 1.5 miles West of Town of Bear Creek, Red Lodge, Carbon County, MT

  3. 50. View looking east down rail rightofway traveled by ladle ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    50. View looking east down rail right-of-way traveled by ladle car taking molten iron from furnaces to pig machine; gas main at left, Babcock & Wilcox boilers at right. - Sloss-Sheffield Steel & Iron, First Avenue North Viaduct at Thirty-second Street, Birmingham, Jefferson County, AL

  4. Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.

  5. Coherent anti-Stokes Raman scattering rigid endoscope toward robot-assisted surgery.

    PubMed

    Hirose, K; Aoki, T; Furukawa, T; Fukushima, S; Niioka, H; Deguchi, S; Hashimoto, M

    2018-02-01

    Label-free visualization of nerves and nervous plexuses will improve the preservation of neurological functions in nerve-sparing robot-assisted surgery. We have developed a coherent anti-Stokes Raman scattering (CARS) rigid endoscope to distinguish nerves from other tissues during surgery. The developed endoscope, which has a tube with a diameter of 12 mm and a length of 270 mm, achieved 0.91% image distortion and 8.6% non-uniformity of CARS intensity in the whole field of view (650 μm diameter). We demonstrated CARS imaging of a rat sciatic nerve and visualization of the fine structure of nerve fibers.

  6. Fast imaging diagnostics on the C-2U advanced beam-driven field-reversed configuration device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granstedt, E. M., E-mail: egranstedt@trialphaenergy.com; Petrov, P.; Knapp, K.

    2016-11-15

    The C-2U device employed neutral beam injection, end-biasing, and various particle fueling techniques to sustain a Field-Reversed Configuration (FRC) plasma. As part of the diagnostic suite, two fast imaging instruments with radial and nearly axial plasma views were developed using a common camera platform. To achieve the necessary viewing geometry, imaging lenses were mounted behind re-entrant viewports attached to welded bellows. During gettering, the vacuum optics were retracted and isolated behind a gate valve permitting their removal if cleaning was necessary. The axial view incorporated a stainless-steel mirror in a protective cap assembly attached to the vacuum-side of the viewport. For each system, a custom lens-based, high-throughput optical periscope was designed to relay the plasma image about half a meter to a high-speed camera. Each instrument also contained a remote-controlled filter wheel, set between shots to isolate a particular hydrogen or impurity emission line. The design of the camera platform, imaging performance, and sample data for each view is presented.

  7. Fast imaging diagnostics on the C-2U advanced beam-driven field-reversed configuration device

    NASA Astrophysics Data System (ADS)

    Granstedt, E. M.; Petrov, P.; Knapp, K.; Cordero, M.; Patel, V.

    2016-11-01

    The C-2U device employed neutral beam injection, end-biasing, and various particle fueling techniques to sustain a Field-Reversed Configuration (FRC) plasma. As part of the diagnostic suite, two fast imaging instruments with radial and nearly axial plasma views were developed using a common camera platform. To achieve the necessary viewing geometry, imaging lenses were mounted behind re-entrant viewports attached to welded bellows. During gettering, the vacuum optics were retracted and isolated behind a gate valve permitting their removal if cleaning was necessary. The axial view incorporated a stainless-steel mirror in a protective cap assembly attached to the vacuum-side of the viewport. For each system, a custom lens-based, high-throughput optical periscope was designed to relay the plasma image about half a meter to a high-speed camera. Each instrument also contained a remote-controlled filter wheel, set between shots to isolate a particular hydrogen or impurity emission line. The design of the camera platform, imaging performance, and sample data for each view is presented.

  8. Characterization and optimization for detector systems of IGRINS

    NASA Astrophysics Data System (ADS)

    Jeong, Ueejeong; Chun, Moo-Young; Oh, Jae Sok; Park, Chan; Yuk, In-Soo; Oh, Heeyoung; Kim, Kang-Min; Ko, Kyeong Yeon; Pavel, Michael D.; Yu, Young Sam; Jaffe, Daniel T.

    2014-07-01

    IGRINS (Immersion GRating INfrared Spectrometer) is a high-resolution wide-band infrared spectrograph developed by the Korea Astronomy and Space Science Institute (KASI) and the University of Texas at Austin (UT). The spectrograph has H-band and K-band science cameras and a slit-viewing camera, all three of which use Teledyne's λc~2.5 μm 2k×2k HgCdTe HAWAII-2RG CMOS detectors. The two spectrograph cameras employ science-grade detectors, while the slit-viewing camera includes an engineering-grade detector. Teledyne's cryogenic SIDECAR ASIC boards and JADE2 USB interface cards were installed to control these detectors. We performed experiments to characterize and optimize the detector systems in the IGRINS cryostat. We present measurements and optimization of noise, dark current, and reference-level stability obtained under dark conditions. We also discuss well depth, linearity and conversion gain measurements obtained using an external light source.
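
    Conversion gain is commonly estimated with the photon-transfer (mean-variance) method; the sketch below shows that standard technique, since the IGRINS team's exact procedure is not described in this abstract:

        def conversion_gain_e_per_dn(mean_dark, var_dark, mean_flat, var_flat):
            """Photon-transfer estimate of conversion gain: for a
            shot-noise-limited flat illumination,
            gain [e-/DN] ~ (mean_flat - mean_dark) / (var_flat - var_dark)."""
            return (mean_flat - mean_dark) / (var_flat - var_dark)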

  9. Topview stereo: combining vehicle-mounted wide-angle cameras to a distance sensor array

    NASA Astrophysics Data System (ADS)

    Houben, Sebastian

    2015-03-01

    The variety of vehicle-mounted sensors needed to fulfill a growing number of driver assistance tasks has become a substantial factor in automobile manufacturing cost. We present a stereo distance method exploiting the overlapping fields of view of a multi-camera fisheye surround-view system, as used for near-range vehicle surveillance tasks, e.g. in parking maneuvers. Hence, we aim at creating a new input signal from sensors that are already installed. Particular properties of wide-angle cameras (e.g., strongly varying resolution across the image) demand an adaptation of the image processing pipeline to several problems that do not arise in classical stereo vision performed with cameras carefully designed for that purpose. We introduce the algorithms for rectification, correspondence analysis, and regularization of the disparity image, discuss the causes of and remedies for the caveats shown, and present first results on a prototype topview setup.
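
    Once the fisheye views are rectified to a common image plane, metric distance follows from the classical stereo relation; a minimal sketch with assumed parameter names:

        def stereo_depth_m(disparity_px, focal_px, baseline_m):
            """Pinhole stereo: Z = f * B / d, where f is the rectified
            focal length in pixels, B the camera baseline in meters, and
            d the disparity in pixels."""
            if disparity_px <= 0:
                raise ValueError("disparity must be positive")
            return focal_px * baseline_m / disparity_px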

  10. A quasi-dense matching approach and its calibration application with Internet photos.

    PubMed

    Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei

    2015-03-01

    This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level in multiviews with our neighboring view selection strategy. A two-layer iteration algorithm is proposed to optimize 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process of the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.

  11. New Optics See More With Less

    NASA Technical Reports Server (NTRS)

    Nabors, Sammy

    2015-01-01

    NASA offers companies an optical system that provides a unique panoramic perspective with a single camera. NASA's Marshall Space Flight Center has developed a technology that combines a panoramic refracting optic (PRO) lens with a unique detection system to acquire a true 360-degree field of view. Although current imaging systems can acquire panoramic images, they must use up to five cameras to obtain the full field of view. MSFC's technology obtains its panoramic images from one vantage point.

  12. ATTICA family of thermal cameras in submarine applications

    NASA Astrophysics Data System (ADS)

    Kuerbitz, Gunther; Fritze, Joerg; Hoefft, Jens-Rainer; Ruf, Berthold

    2001-10-01

    Optronics Mast Systems (US: Photonics Mast Systems) are electro-optical devices that enable a submarine crew to observe the scenery above water while submerged. Unlike classical submarine periscopes they are non-hull-penetrating and therefore have no direct viewing capability. Typically they have electro-optical cameras for both the visual and an IR spectral band, with panoramic view and a stabilized line of sight. They can optionally be equipped with laser rangefinders, antennas, etc. The brand name ATTICA (Advanced Two-dimensional Thermal Imager with CMOS-Array) characterizes a family of thermal cameras using focal-plane-array (FPA) detectors which can be tailored to a variety of requirements. The modular design of the ATTICA components allows the use of various detectors (InSb, CMT 3...5 μm, CMT 7...11 μm) for specific applications. By means of a microscanner, ATTICA cameras achieve full standard TV resolution using detectors with only 288 × 384 (US: 240 × 320) detector elements. A typical requirement for Optronics-Mast Systems is a Quick-Look-Around capability. For FPA cameras this implies the need for a 'descan' module, which can be incorporated in the ATTICA cameras without complications.

  13. Radiometric stability of the Multi-angle Imaging SpectroRadiometer (MISR) following 15 years on-orbit

    NASA Astrophysics Data System (ADS)

    Bruegge, Carol J.; Val, Sebastian; Diner, David J.; Jovanovic, Veljko; Gray, Ellyn; Di Girolamo, Larry; Zhao, Guangyu

    2014-09-01

    The Multi-angle Imaging SpectroRadiometer (MISR) has operated successfully on the EOS/Terra spacecraft since 1999. It consists of nine cameras pointing from nadir to 70.5° view angle, with four spectral channels per camera. Specifications call for a radiometric uncertainty of 3% absolute and 1% relative to the other cameras. To accomplish this, MISR utilizes an on-board calibrator (OBC) to measure camera response changes. Once every two months the two Spectralon panels are deployed to direct solar light into the cameras. Six photodiode sets measure the illumination level; these measurements are compared to MISR raw digital numbers, thus determining the radiometric gain coefficients used in Level 1 data processing. Although panel stability is not required, there has been little detectable change in panel reflectance, attributed to careful preflight handling techniques. The cameras themselves have degraded in radiometric response by 10% since launch, but calibration updates using the detector-based scheme have compensated for these drifts and allowed the radiance products to meet accuracy requirements. Validation using Sahara desert observations shows that there has been a drift of ~1% in the reported nadir-view radiance over a decade, common to all spectral bands.
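
    The detector-based calibration amounts to regressing raw camera output against the radiances reported by the photodiodes during a panel deployment; a hedged sketch of that step (a plain linear fit, with all names assumed rather than taken from the MISR pipeline):

        import numpy as np

        def fit_radiometric_gain(radiance, dn):
            """Least-squares fit of DN = gain * L + offset, where L is
            the photodiode-measured radiance and DN the raw camera counts."""
            A = np.column_stack([radiance, np.ones_like(radiance)])
            (gain, offset), *_ = np.linalg.lstsq(A, dn, rcond=None)
            return gain, offset

    Refreshing the gain every two months is what absorbs the ~10% response degradation noted above.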

  14. Mobile in vivo camera robots provide sole visual feedback for abdominal exploration and cholecystectomy.

    PubMed

    Rentschler, M E; Dumpert, J; Platt, S R; Ahmed, S I; Farritor, S M; Oleynikov, D

    2006-01-01

    The use of small incisions in laparoscopy reduces patient trauma, but also limits the surgeon's ability to view and touch the surgical environment directly. These limitations generally restrict the application of laparoscopy to procedures less complex than those performed during open surgery. Although current robot-assisted laparoscopy improves the surgeon's ability to manipulate and visualize the target organs, the instruments and cameras remain fundamentally constrained by the entry incisions. This limits tool tip orientation and optimal camera placement. The current work focuses on developing a new miniature mobile in vivo adjustable-focus camera robot to provide sole visual feedback to surgeons during laparoscopic surgery. A miniature mobile camera robot was inserted through a trocar into the insufflated abdominal cavity of an anesthetized pig. The mobile robot allowed the surgeon to explore the abdominal cavity remotely and view trocar and tool insertion and placement without entry incision constraints. The surgeon then performed a cholecystectomy using the robot camera alone for visual feedback. This successful trial has demonstrated that miniature in vivo mobile robots can provide surgeons with sufficient visual feedback to perform common procedures while reducing patient trauma.

  15. Determination of vehicle density from traffic images at day and nighttime

    NASA Astrophysics Data System (ADS)

    Mehrübeoğlu, Mehrübe; McLauchlan, Lifford

    2007-02-01

    In this paper we extend our previous work [1] to address vehicle differentiation in traffic density computations. The main goal of this work is to create a vehicle density history for given roads under different weather or light conditions and at different times of the day. Vehicle differentiation is important to account for connected or otherwise long vehicles, such as trucks or tankers, which lead to over-counting with the original algorithm. Average vehicle size in pixels, given the magnification within the field of view for a particular camera, is used to separate regular cars from long vehicles. A separate algorithm and procedure have been developed to determine traffic density after dark, when vehicle headlights are turned on. Nighttime vehicle recognition utilizes blob analysis based on head/taillight images. The high-intensity vehicle lights are identified in binary images for nighttime vehicle detection. The stationary traffic image frames are downloaded from the internet as they are updated. The procedures are implemented in MATLAB. The results of both the nighttime traffic density and daytime long-vehicle identification algorithms are described in this paper. The determination of nighttime traffic density and the identification of long vehicles in daytime are improvements over the original work [1].
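
    A toy version of the daytime long-vehicle rule described above: connected blobs noticeably larger than the expected car size (in pixels, for a given camera's magnification) are counted once as a long vehicle rather than over-counted as several cars. The thresholds are illustrative assumptions, not the paper's values.

        def count_vehicles(blob_areas_px, avg_car_px, long_factor=2.0):
            """blob_areas_px: areas of connected components in the
            segmented traffic frame; avg_car_px: expected car area
            in pixels for this camera's field of view."""
            cars = longs = 0
            for area in blob_areas_px:
                if area >= long_factor * avg_car_px:
                    longs += 1                 # truck, tanker, etc.
                elif area >= 0.3 * avg_car_px:
                    cars += 1                  # ignore smaller noise blobs
            return cars, longs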

  16. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    PubMed

    Stone, Scott A; Tata, Matthew S

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and augmenting them into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller, making this system much more feasible.

  17. Cognitive Mapping Based on Conjunctive Representations of Space and Movement

    PubMed Central

    Zeng, Taiping; Si, Bailu

    2017-01-01

    It is a challenge to build robust simultaneous localization and mapping (SLAM) system in dynamical large-scale environments. Inspired by recent findings in the entorhinal–hippocampal neuronal circuits, we propose a cognitive mapping model that includes continuous attractor networks of head-direction cells and conjunctive grid cells to integrate velocity information by conjunctive encodings of space and movement. Visual inputs from the local view cells in the model provide feedback cues to correct drifting errors of the attractors caused by the noisy velocity inputs. We demonstrate the mapping performance of the proposed cognitive mapping model on an open-source dataset of 66 km car journey in a 3 km × 1.6 km urban area. Experimental results show that the proposed model is robust in building a coherent semi-metric topological map of the entire urban area using a monocular camera, even though the image inputs contain various changes caused by different light conditions and terrains. The results in this study could inspire both neuroscience and robotic research to better understand the neural computational mechanisms of spatial cognition and to build robust robotic navigation systems in large-scale environments. PMID:29213234

  18. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality

    PubMed Central

    Stone, Scott A.; Tata, Matthew S.

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and augmenting them into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller, making this system much more feasible. PMID:28792518

  19. Improved iris localization by using wide and narrow field of view cameras for iris recognition

    NASA Astrophysics Data System (ADS)

    Kim, Yeong Gon; Shin, Kwang Yong; Park, Kang Ryoung

    2013-10-01

    Biometrics is a method of identifying individuals by their physiological or behavioral characteristics. Among biometric identifiers, iris recognition has been widely used for various applications that require a high level of security. When a conventional iris recognition camera is used, the size and position of the iris region in a captured image vary according to the X, Y position of the user's eye and the Z distance between the user and the camera. Therefore, the search area of the iris detection algorithm is increased, which inevitably decreases both detection speed and accuracy. To solve these problems, we propose a new method of iris localization that uses wide field of view (WFOV) and narrow field of view (NFOV) cameras. Our study is novel compared to previous work in the following four ways. First, the device used in our research acquires three images, one of the face and one of each iris, using one WFOV and two NFOV cameras simultaneously. The relation between the WFOV and NFOV cameras is determined by a simple geometric transformation without complex calibration. Second, the Z distance (between the user's eye and the iris camera) is estimated based on the iris size in the WFOV image and anthropometric data on the size of the human iris. Third, the accuracy of the geometric transformation between the WFOV and NFOV cameras is enhanced by using multiple transformation matrices according to the Z distance. Fourth, the search region for iris localization in the NFOV image is significantly reduced based on the detected iris region in the WFOV image and the geometric transformation matrix corresponding to the estimated Z distance. Experimental results showed that the performance of the proposed iris localization method is better than that of conventional methods in terms of accuracy and processing time.
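
    The second and third contributions lend themselves to a short worked sketch: under a pinhole model, the Z distance follows from the iris diameter in pixels, and that distance then selects the nearest calibrated WFOV-to-NFOV transform. The constants and the dictionary-of-matrices interface below are assumptions for illustration, not the paper's code.

    ```python
    # Minimal sketch of the distance cue described above, assuming a simple
    # pinhole model; the iris diameter value is an anthropometric average.
    HUMAN_IRIS_DIAMETER_MM = 11.7   # roughly 11-13 mm across adults

    def estimate_z_distance(iris_diameter_px, focal_length_px):
        """Estimate camera-to-eye distance from the iris size in the WFOV image."""
        return focal_length_px * HUMAN_IRIS_DIAMETER_MM / iris_diameter_px

    def select_transform(z_mm, transforms):
        """Pick the WFOV->NFOV geometric transform calibrated nearest to z_mm.

        `transforms` maps a calibration distance (mm) to a 3x3 matrix; using
        several matrices indexed by Z is the paper's accuracy refinement.
        """
        nearest_z = min(transforms, key=lambda z: abs(z - z_mm))
        return transforms[nearest_z]
    ```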

  20. Morning view, contextual view showing the role of the brick ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Morning view, contextual view showing the role of the brick walls along the boundary of the cemetery; interior view taken from midway down the paved west road with the camera facing west to capture the morning light on the west wall. - Beaufort National Cemetery, Wall, 1601 Boundary Street, Beaufort, Beaufort County, SC

  1. Clinical trials of the prototype Rutherford Appleton Laboratory MWPC positron camera at the Royal Marsden Hospital

    NASA Astrophysics Data System (ADS)

    Flower, M. A.; Ott, R. J.; Webb, S.; Leach, M. O.; Marsden, P. K.; Clack, R.; Khan, O.; Batty, V.; McCready, V. R.; Bateman, J. E.

    1988-06-01

    Two clinical trials of the prototype RAL multiwire proportional chamber (MWPC) positron camera were carried out prior to the development of a clinical system with large-area detectors. During the first clinical trial, the patient studies included skeletal imaging using ¹⁸F, imaging of brain glucose metabolism using ¹⁸F-FDG, bone marrow imaging using ⁵²Fe citrate and thyroid imaging with Na¹²⁴I. Longitudinal tomograms were produced from the limited-angle data acquisition from the static detectors. During the second clinical trial, transaxial, coronal and sagittal images were produced from the multiview data acquisition. A more detailed thyroid study was performed in which the volume of functioning thyroid tissue was obtained from the 3D PET image; this volume was used in estimating the radiation dose achieved during radioiodine therapy of patients with thyrotoxicosis. Despite the small field of view of the prototype camera and the administration of smaller-than-usual amounts of activity, the PET images were in most cases comparable with, and in a few cases visually better than, the equivalent planar views obtained using a state-of-the-art gamma camera with a large field of view and routine radiopharmaceuticals.

  2. Phoenix Lander on Mars with Surrounding Terrain, Vertical Projection

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This view is a vertical projection that combines more than 500 exposures taken by the Surface Stereo Imager camera on NASA's Mars Phoenix Lander and projects them as if looking down from above.

    The black circle on the spacecraft is where the camera itself is mounted on the lander, out of view in images taken by the camera. North is toward the top of the image. The height of the lander's meteorology mast, extending toward the southwest, appears exaggerated because that mast is taller than the camera mast.

    This view in approximately true color covers an area about 30 meters by 30 meters (about 100 feet by 100 feet). The landing site is at 68.22 degrees north latitude, 234.25 degrees east longitude on Mars.

    The ground surface around the lander has polygonal patterning similar to patterns in permafrost areas on Earth.

    This view comprises more than 100 different Surface Stereo Imager pointings, with images taken through three different filters at each pointing. The images were taken throughout the period from the 13th Martian day, or sol, after landing to the 47th sol (June 5 through July 12, 2008). The lander's Robotic Arm is cut off in this mosaic view because component images were taken when the arm was out of the frame.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  3. In this medium close-up view, captured by an Electronic Still Camera (ESC), the Spartan 207

    NASA Technical Reports Server (NTRS)

    1996-01-01

    STS-77 ESC VIEW --- In this medium close-up view, captured by an Electronic Still Camera (ESC), the Spartan 207 free-flyer is held in the grasp of the Space Shuttle Endeavour's Remote Manipulator System (RMS) following its re-capture on May 21, 1996. The six-member crew has spent a portion of the early stages of the mission in various activities involving the Spartan 207 and the related Inflatable Antenna Experiment (IAE). The Spartan project is managed by NASA's Goddard Space Flight Center (GSFC) for NASA's Office of Space Science, Washington, D.C. GMT: 09:38:05.

  4. False-Color Image of an Impact Crater on Vesta

    NASA Image and Video Library

    2011-08-24

    NASA's Dawn spacecraft obtained this false-color image (right) of an impact crater in asteroid Vesta's equatorial region with its framing camera on July 25, 2011. The view on the left is from the camera's clear filter.

  5. Illustrating MastCam Capabilities with a Terrestrial Scene

    NASA Image and Video Library

    2011-11-28

    This set of views illustrates the capabilities of the Mast Camera (MastCam) instrument on NASA's Mars Science Laboratory rover, Curiosity, using a scene on Earth as an example of what MastCam's two cameras can see from different distances.

  6. Robust human detection, tracking, and recognition in crowded urban areas

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2014-06-01

    In this paper, we present algorithms we recently developed to support an automated security surveillance system for very crowded urban areas. In our approach to human detection, color features are obtained by taking differences of the R, G, B channels and converting R, G, B to HSV (hue, saturation, value) space. Morphological patch filtering and regional minimum and maximum segmentation on the extracted features are applied for target detection. The human tracking approach includes: 1) track candidate selection by color and intensity feature matching; 2) three separate parallel trackers for color, bright (above mean intensity), and dim (below mean intensity) detections, respectively; 3) adaptive track gate size selection to reduce the probability of false tracking; and 4) forward position prediction, based on previous moving speed and direction, to continue tracking even when detections are missed from frame to frame. Human target recognition is improved with a Super-Resolution Image Enhancement (SRIE) process. This process can improve target resolution by 3-5 times and can simultaneously process many tracked targets. Our approach can project tracks from one camera to another camera with a different perspective viewing angle to obtain additional biometric features from different perspective angles, and to continue tracking the same person from the second camera even after the person has moved out of the field of view (FOV) of the first camera ('tracking relay'). Finally, the multiple cameras at different view poses have been geo-rectified to the nadir view plane and geo-registered with Google Earth (or another GIS) to obtain accurate positions (latitude, longitude, and altitude) of the tracked humans for pin-point targeting and for a top view of total human motion activity over a large area. Preliminary tests of our algorithms indicate that a high probability of detection can be achieved for both moving and stationary humans. Our algorithms can simultaneously track more than 100 human targets with an average tracking period (time length) longer than the current state of the art.
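
    A minimal sketch of such a color-feature detection front end, assuming OpenCV and illustrative thresholds (the authors' tuned parameters are not published in the abstract):

    ```python
    import cv2
    import numpy as np

    def color_feature_detections(bgr_frame, thresh=40):
        """Rough sketch of the detection front end described above: RGB channel
        differences plus an HSV conversion, followed by morphological filtering.
        Threshold and kernel values are illustrative, not the authors' settings.
        """
        b, g, r = [bgr_frame[:, :, i].astype(np.int16) for i in range(3)]
        # Channel differences respond to strongly colored (non-gray) regions.
        color_contrast = np.maximum(np.abs(r - g),
                                    np.maximum(np.abs(g - b), np.abs(b - r)))
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
        saturation = hsv[:, :, 1].astype(np.int16)
        mask = ((color_contrast > thresh) & (saturation > thresh)).astype(np.uint8) * 255
        # Morphological opening removes isolated pixels; closing fills small holes.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
        return [tuple(c) for c in centroids[1:]]   # skip background component 0
    ```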

  7. Design and modeling of a prototype fiber scanning CARS endoscope

    NASA Astrophysics Data System (ADS)

    Veilleux, Israël; Doucet, Michel; Coté, Patrice; Verreault, Sonia; Fortin, Michel; Paradis, Patrick; Leclair, Sébastien; Da Costa, Ralph S.; Wilson, Brian C.; Seibel, Eric; Mermut, Ozzy; Cormier, Jean-François

    2010-02-01

    An endoscope capable of Coherent Anti-Stokes Raman scattering (CARS) imaging would be of significant clinical value for improving early detection of endoluminal cancers. However, developing this technology is challenging for many reasons. First, nonlinear imaging techniques such as CARS are single-point measurements, thus requiring fast scanning in a small footprint if video rate is to be achieved. Moreover, the intrinsic nonlinearity of this modality imposes several technical constraints and limitations, mainly related to pulse and beam distortions that occur within the optical fiber and the focusing objective. Here, we describe the design and report modeling results of a new CARS endoscope. The miniature microscope objective design and its anticipated performance are presented, along with its compatibility with a new spiral scanning-fiber imaging technology developed at the University of Washington. This technology has ideal attributes for clinical use, with its small footprint, adjustable field of view and high spatial resolution. This compact hybrid fiber-based endoscopic CARS imaging design is anticipated to have wide clinical applicability.

  8. Nitrogen dioxide in exhaust emissions from motor vehicles

    NASA Astrophysics Data System (ADS)

    Lenner, Magnus

    NO₂/NOₓ (v/v) fractions and NO₂ exhaust emission rates were determined for diesel- and gasoline-powered passenger cars and a diesel truck, at several conditions of constant engine load and speed. Vehicles with various kinds of emission control equipment were investigated. Also, integrations of NO₂/NOₓ percentages during Federal Test Procedure driving cycles were made for six types of passenger car. High (>30%) NO₂ fractions were measured for gasoline cars with air injection, and for diesel vehicles. A gasoline car with a 3-way catalyst had low NOₓ totals with small (<1%) NO₂ fractions. A passenger diesel with a particle trap yielded surprisingly small (0-2%) NO₂ fractions at moderate speeds. The results have implications for NO₂ concentration in the atmosphere of northern cities during wintertime inversions, in view of the increasing use of air injection systems for passenger cars to meet legal restrictions on vehicle emissions of hydrocarbons and CO.

  9. A Cloud-Based Car Parking Middleware for IoT-Based Smart Cities: Design and Implementation

    PubMed Central

    Ji, Zhanlin; Ganchev, Ivan; O'Droma, Máirtín; Zhao, Li; Zhang, Xueji

    2014-01-01

    This paper presents the generic concept of using cloud-based intelligent car parking services in smart cities as an important application of the Internet of Things (IoT) paradigm. This type of service will become an integral part of a generic IoT operational platform for smart cities due to its purely business-oriented features. A high-level view of the proposed middleware is outlined and the corresponding operational platform is illustrated. To demonstrate the provision of car parking services based on the proposed middleware, a cloud-based intelligent car parking system for use within a university campus is described along with details of its design, implementation, and operation. A number of software solutions, including Kafka/Storm/HBase clusters, OSGi web applications with distributed NoSQL, a rule engine, and mobile applications, are proposed to provide the ‘best’ car parking service experience to mobile users, following the Always Best Connected and best Served (ABC&S) paradigm. PMID:25429416
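
    To make the event-driven flavor of such a middleware concrete, here is a minimal hedged sketch of a parking-availability consumer. The topic name, message schema and the use of the kafka-python client are assumptions; the paper names Kafka/Storm/HBase but does not publish its interfaces.

    ```python
    # Illustrative sketch only: topic, schema and in-memory store are stand-ins
    # for the paper's Kafka/Storm/HBase pipeline and rule engine.
    import json
    from kafka import KafkaConsumer

    available = {}   # parking_lot_id -> free spot count (stand-in for HBase)

    consumer = KafkaConsumer(
        "parking-sensor-events",                 # hypothetical topic name
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )

    for message in consumer:
        event = message.value                    # e.g. {"lot": "campus-A", "delta": -1}
        lot, delta = event["lot"], event["delta"]
        available[lot] = available.get(lot, 0) + delta
        # A rule engine would fire here, e.g. pushing "lot full" alerts to
        # the mobile application when availability reaches zero.
        if available[lot] <= 0:
            print(f"lot {lot} is full")
    ```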

  10. A cloud-based car parking middleware for IoT-based smart cities: design and implementation.

    PubMed

    Ji, Zhanlin; Ganchev, Ivan; O'Droma, Máirtín; Zhao, Li; Zhang, Xueji

    2014-11-25

    This paper presents the generic concept of using cloud-based intelligent car parking services in smart cities as an important application of the Internet of Things (IoT) paradigm. This type of service will become an integral part of a generic IoT operational platform for smart cities due to its purely business-oriented features. A high-level view of the proposed middleware is outlined and the corresponding operational platform is illustrated. To demonstrate the provision of car parking services based on the proposed middleware, a cloud-based intelligent car parking system for use within a university campus is described along with details of its design, implementation, and operation. A number of software solutions, including Kafka/Storm/HBase clusters, OSGi web applications with distributed NoSQL, a rule engine, and mobile applications, are proposed to provide the 'best' car parking service experience to mobile users, following the Always Best Connected and best Served (ABC&S) paradigm.

  11. Autonomous Cars: In Favor of a Mandatory Ethics Setting.

    PubMed

    Gogoll, Jan; Müller, Julian F

    2017-06-01

    The recent progress in the development of autonomous cars has seen ethical questions come to the forefront. In particular, life-and-death decisions regarding the behavior of self-driving cars in trolley-dilemma situations are attracting widespread interest in the recent debate. In this essay we ask whether we should implement a mandatory ethics setting (MES) for the whole of society, or whether every driver should have the choice to select his or her own personal ethics setting (PES). While the consensus view seems to be that people would not be willing to use an automated car that might sacrifice them in a dilemma situation, we defend the somewhat counter-intuitive claim that this would nevertheless be in their best interest. The reason, simply put, is that a PES regime would most likely result in a prisoner's dilemma.

  12. Hidden Cameras: Everything You Need to Know About Covert Recording, Undercover Cameras and Secret Filming. Joe Plomin. 224pp. £12.99. Jessica Kingsley. ISBN 9781849056434.

    PubMed

    2016-05-27

    LAST YEAR, the Care Quality Commission issued guidance to families on using hidden cameras if they are concerned that their relatives are being abused or receiving poor care. Filming in care settings has also resulted in high-profile prosecutions and numerous TV documentaries. Joe Plomin, the author, was the undercover producer who exposed the abuse at Winterbourne View, near Bristol, in 2011.

  13. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for evaluating the location accuracy of all lightning location systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This can be done with a network of multiple high-speed cameras (the RAMMER network) installed in the Paraíba Valley region, São Paulo, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth, color, camera was mobile (installed in a car) but operated from a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time-stamped, allowing comparisons of events between the cameras and the LLS. Each RAMMER sensor is basically composed of a computer, a Phantom version 9.1 high-speed camera and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network, during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the surveyed GPS position of the triangulated object and the result of the visual triangulation method. Lightning return stroke positions estimated with the visual triangulation method were compared with LLS locations; differences between the solutions were not greater than 1.8 km.
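
    The core geometric step, intersecting two horizontal bearings to locate the lightning channel, can be sketched in a few lines. This is a 2D toy version of the idea under stated assumptions (local metric frame, azimuths clockwise from north), not the RAMMER processing chain itself.

    ```python
    import numpy as np

    def triangulate_from_azimuths(p1, az1, p2, az2):
        """Intersect two horizontal bearing lines to locate a lightning channel.

        p1, p2 are camera positions (x, y) in a local metric frame; az1, az2
        are azimuths in radians measured clockwise from north.
        """
        d1 = np.array([np.sin(az1), np.cos(az1)])     # unit direction of ray 1
        d2 = np.array([np.sin(az2), np.cos(az2)])     # unit direction of ray 2
        # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
        A = np.column_stack([d1, -d2])
        t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
        return np.asarray(p1, float) + t[0] * d1

    # Two cameras 13 km apart, both sighting the same return stroke:
    print(triangulate_from_azimuths((0.0, 0.0), np.radians(45.0),
                                    (13000.0, 0.0), np.radians(315.0)))
    # -> [6500. 6500.] (metres in the local frame)
    ```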

  14. Single lens 3D-camera with extended depth-of-field

    NASA Astrophysics Data System (ADS)

    Perwaß, Christian; Wietzke, Lennart

    2012-03-01

    Placing a micro lens array in front of an image sensor transforms a normal camera into a single-lens 3D camera, which also allows the user to change the focus and the point of view after a picture has been taken. While the concept of such plenoptic cameras has been known since 1908, only recently have the increased computing power of low-cost hardware and the advances in micro lens array production made the application of plenoptic cameras feasible. This text presents a detailed analysis of plenoptic cameras and introduces a new type of plenoptic camera with an extended depth of field and a maximal effective resolution of up to a quarter of the sensor resolution.

  15. Accuracy Analysis for Automatic Orientation of a Tumbling Oblique Viewing Sensor System

    NASA Astrophysics Data System (ADS)

    Stebner, K.; Wieden, A.

    2014-03-01

    Dynamic camera systems with moving parts are difficult to handle in the photogrammetric workflow because it is not ensured that the dynamics are constant over the recording period. Minimal changes in the camera's orientation greatly influence the projection of oblique images. In this publication these effects, originating from the kinematic chain of a dynamic camera system, are analysed and validated. A member of the Modular Airborne Camera System family, MACS-TumbleCam, consisting of a vertical-viewing and a tumbling oblique camera, was used for this investigation. The focus is on dynamic geometric modeling and the stability of the kinematic chain. To validate the experimental findings, the determined parameters are applied to the exterior orientation of an actual aerial image acquisition campaign using MACS-TumbleCam. The quality of the parameters is sufficient for direct georeferencing of oblique image data from the orientation information of a synchronously captured vertical image dataset. Relative accuracy for the oblique dataset ranges from 1.5 pixels when using all images of the image block to 0.3 pixels when using only adjacent images.

  16. View of Jack Lousma's hands using silverware to gather food at food station

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A close-up view of Skylab 3 pilot Jack Lousma's hands using a silverware utensil to gather food at the food station, in this photographic reproduction taken from a television transmission made by a color TV camera aboard the Skylab space station in Earth orbit. Astronaut Alan L. Bean, commander, had just zoomed the TV camera in for this closeup of the food tray following a series of wide shots of Lousma at the food station.

  17. ETR COMPLEX. CAMERA FACING SOUTH. FROM BOTTOM OF VIEW TO ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    ETR COMPLEX. CAMERA FACING SOUTH. FROM BOTTOM OF VIEW TO TOP: MTR, MTR SERVICE BUILDING, ETR CRITICAL FACILITY, ETR CONTROL BUILDING (ATTACHED TO ETR), ETR BUILDING (HIGH-BAY), COMPRESSOR BUILDING (ATTACHED AT LEFT OF ETR), HEAT EXCHANGER BUILDING (JUST BEYOND COMPRESSOR BUILDING), COOLING TOWER PUMP HOUSE, COOLING TOWER. OTHER BUILDINGS ARE CONTRACTORS' CONSTRUCTION BUILDINGS. INL NEGATIVE NO. 56-4105. Unknown Photographer, ca. 1956 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  18. Indirect Vision Driving with Fixed Flat Panel Displays for Near Unity, Wide, and Extended Fields of Camera View

    DTIC Science & Technology

    2001-06-01

    The choice of camera FOV may depend on the task being performed. The driver may prefer a unity view for driving along a known route to...increase his or her perception of potential road hazards. On the other hand, the driver may prefer a compressed image at road turns for route selection...with a supervisory evaluation of the road ahead and the impact on the driving schema. Included in this

  19. Precise 3D Lug Pose Detection Sensor for Automatic Robot Welding Using a Structured-Light Vision System

    PubMed Central

    Park, Jae Byung; Lee, Seung Hun; Lee, Il Jae

    2009-01-01

    In this study, we propose a precise 3D lug pose detection sensor for automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug is a handle for carrying the steel plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. For lug pose acquisition, four laser lines are projected on both the lug and the plate, and the projected lines are detected by the camera. For robust detection of the projected lines against illumination changes, the vertical threshold, thinning, Hough transform and separated Hough transform algorithms are successively applied to the camera image. The lug pose acquisition is carried out in two stages: top view alignment and side view alignment. The top view alignment detects the coarse lug pose relatively far from the lug, and the side view alignment detects the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. In this way, the precise 3D lug pose can be obtained. Finally, experiments with the sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor. PMID:22400007
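
    A hedged sketch of the line-extraction front end named above (vertical threshold, thinning, Hough transform), using OpenCV with illustrative parameters; the thinning step is approximated here by morphological erosion:

    ```python
    import cv2
    import numpy as np

    def detect_laser_lines(gray_image):
        """Sketch of the projected-line extraction described in the abstract.
        Threshold and Hough parameters are illustrative, not the paper's values.
        """
        # Vertical threshold: keep only bright laser-stripe pixels.
        _, binary = cv2.threshold(gray_image, 200, 255, cv2.THRESH_BINARY)
        # Approximate thinning so each stripe is close to one pixel wide.
        binary = cv2.erode(binary, np.ones((3, 3), np.uint8), iterations=1)
        # Probabilistic Hough transform recovers the four projected laser lines.
        lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180,
                                threshold=80, minLineLength=60, maxLineGap=10)
        return [] if lines is None else [tuple(l[0]) for l in lines]
    ```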

  20. Twelve Months in Two Minutes: Curiosity's First Year on Mars

    NASA Image and Video Library

    2013-08-01

    A series of 548 images shows the view from a fisheye camera on the front of NASA's Mars rover Curiosity from the day the rover landed in August 2012 through July 2013. The camera is the rover's front Hazard-Avoidance Camera. The scenes include Curiosity collecting its first scoops of Martian soil and collecting a drilled sample from inside a Martian rock.

  1. HST Solar Arrays photographed by Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This view, backdropped against the blackness of space, shows one of the two original Solar Arrays (SA) on the Hubble Space Telescope (HST). The scene was photographed with an Electronic Still Camera (ESC) and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  2. 103. VIEW OF BEACH STRUCTURES ON NORTHWEST SIDE OF PIER, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    103. VIEW OF BEACH STRUCTURES ON NORTHWEST SIDE OF PIER, LOOKING SOUTHEAST; PACIFIC ELECTRIC RAILWAY CAR (UPPER LEFT), CONCESSION STANDS (LOWER LEFT), BANDSHELL (RIGHT), AND PIER IN BACKGROUND Photograph #5352-HB. Photographer unknown, c. 1914 - Huntington Beach Municipal Pier, Pacific Coast Highway at Main Street, Huntington Beach, Orange County, CA

  3. 63. VIEW LOOKING DOWN VAL LAUNCHING SLAB SHOWING DRIVE GEARS, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    63. VIEW LOOKING DOWN VAL LAUNCHING SLAB SHOWING DRIVE GEARS, CABLES, LAUNCHER RAILS, PROJECTILE CAR AND SUPPORT CARRIAGE, April 8, 1948. (Original photograph in possession of Dave Willis, San Diego, California.) - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  4. CONTEXT VIEW ALONG EXISTING PERIMETER TRACKS LOOKING OVER IRON ORE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    CONTEXT VIEW ALONG EXISTING PERIMETER TRACKS LOOKING OVER IRON ORE CARS TOWARDS WESTERN SIDE OF CLEVELAND BULK TERMINAL BUILDINGS AND A SELF-UNLOADING IRON ORE SHIP AT DOCK. LOOKING SOUTHWEST. - Pennsylvania Railway Ore Dock, Lake Erie at Whiskey Island, approximately 1.5 miles west of Public Square, Cleveland, Cuyahoga County, OH

  5. 2. INTERIOR VIEW, LOOKING EAST, OF TRACK TWO WITH SUBASSEMBLY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. INTERIOR VIEW, LOOKING EAST, OF TRACK TWO WITH SUBASSEMBLY OF END SECTION SHEAR PLATE FOR ROUND-SIDE HOPPER CAR (DESIGNED FOR TRANSPORT OF PLASTIC PELLETS FOR PETROCHEMICAL INDUSTRY) AND DEXTER WALTON, CLASS A WELDER. - Pullman Standard Company Plant, Fabrication Assembly Shop, 401 North Twenty-fourth Street, Bessemer, Jefferson County, AL

  6. Clementine Images of Earth and Moon

    NASA Technical Reports Server (NTRS)

    1997-01-01

    During its flight and lunar orbit, the Clementine spacecraft returned images of the planet Earth and the Moon. This collection of Clementine UVVIS camera images shows the Earth from the Moon and three images of the Earth.

    The image on the left shows the Earth as seen across the lunar north pole; the large crater in the foreground is Plaskett. The Earth actually appeared about twice as far above the lunar horizon as shown. The top right image shows the Earth as viewed by the UVVIS camera while Clementine was in transit to the Moon; swirling white cloud patterns indicate storms. The two views of southeastern Africa were acquired by the UVVIS camera while Clementine was in low Earth orbit early in the mission.

  7. Driving into the future: how imaging technology is shaping the future of cars

    NASA Astrophysics Data System (ADS)

    Zhang, Buyue

    2015-03-01

    Fueled by the development of advanced driver assistance system (ADAS), autonomous vehicles, and the proliferation of cameras and sensors, automotive is becoming a rich new domain for innovations in imaging technology. This paper presents an overview of ADAS, the important imaging and computer vision problems to solve for automotive, and examples of how some of these problems are solved, through which we highlight the challenges and opportunities in the automotive imaging space.

  8. Tenth Anniversary Image from Camera on NASA Mars Orbiter

    NASA Image and Video Library

    2012-02-29

    NASA's Mars Odyssey spacecraft captured this image on Feb. 19, 2012, 10 years to the day after the camera recorded its first view of Mars. This image covers an area in the Nepenthes Mensae region north of the Martian equator.

  9. Impact Site: Cassini's Final Image

    NASA Image and Video Library

    2017-09-15

    This monochrome view is the last image taken by the imaging cameras on NASA's Cassini spacecraft. It looks toward the planet's night side, lit by reflected light from the rings, and shows the location at which the spacecraft would enter the planet's atmosphere hours later. A natural color view, created using images taken with red, green and blue spectral filters, is also provided (Figure 1). The imaging cameras obtained this view at approximately the same time that Cassini's visual and infrared mapping spectrometer made its own observations of the impact area in the thermal infrared. This location -- the site of Cassini's atmospheric entry -- was at this time on the night side of the planet, but would rotate into daylight by the time Cassini made its final dive into Saturn's upper atmosphere, ending its remarkable 13-year exploration of Saturn. The view was acquired on Sept. 14, 2017 at 19:59 UTC (spacecraft event time). The view was taken in visible light using the Cassini spacecraft wide-angle camera at a distance of 394,000 miles (634,000 kilometers) from Saturn. Image scale is about 11 miles (17 kilometers). The original image has a size of 512x512 pixels. A movie is available at https://photojournal.jpl.nasa.gov/catalog/PIA21895

  10. IMAX camera (12-IML-1)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single-lens, reflex-viewing design with a 15-perforation-per-frame horizontal pull-across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.

  11. 3-dimensional telepresence system for a robotic environment

    DOEpatents

    Anderson, Matthew O.; McKay, Mark D.

    2000-01-01

    A telepresence system includes a camera pair remotely controlled by a control module affixed to an operator. The camera pair provides for three-dimensional viewing, and the control module, affixed to the operator, affords hands-free operation of the camera pair. In one embodiment, the control module is affixed to the head of the operator and an initial position is established. A triangulating device is provided to track the head movement of the operator relative to the initial position. A processor module receives input from the triangulating device to determine where the operator has moved relative to the initial position and moves the camera pair in response thereto. The movement of the camera pair is predetermined by a software map having a plurality of operation zones. Each zone therein corresponds to unique camera movement parameters, such as speed of movement. Speed parameters include constant, increasing, or decreasing speed. Other parameters include pan, tilt, slide, and raising or lowering of the cameras. Other user interface devices are provided to improve the three-dimensional control capabilities of an operator in a local operating environment. Such other devices include a pair of visual display glasses, a microphone and a remote actuator. The pair of visual display glasses is provided to facilitate three-dimensional viewing, hence depth perception. The microphone affords hands-free camera movement by utilizing voice commands. The actuator allows the operator to remotely control various robotic mechanisms in the remote operating environment.
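
    The zone-map idea reads naturally as a lookup from head displacement to camera speed. A minimal sketch with hypothetical zone boundaries and speeds follows; the patent describes the mechanism but does not publish concrete values.

    ```python
    # Hypothetical zone map in the spirit of the patent's description: head
    # displacement from the initial position selects an operation zone, and
    # each zone carries its own pan speed behaviour.
    ZONES = [
        # (max |displacement| in degrees, pan speed in deg/s)
        (5.0,  0.0),    # dead band: small head motion moves nothing
        (15.0, 5.0),    # inner zone: slow constant-speed pan
        (30.0, 15.0),   # outer zone: fast constant-speed pan
    ]

    def pan_speed(head_offset_deg):
        """Return a signed pan speed for the camera pair from the tracked
        head offset. Zone boundaries and speeds are illustrative values."""
        magnitude = abs(head_offset_deg)
        for limit, speed in ZONES:
            if magnitude <= limit:
                return speed if head_offset_deg >= 0 else -speed
        return 25.0 if head_offset_deg >= 0 else -25.0   # beyond all zones

    print(pan_speed(12.0))   # -> 5.0 deg/s (inner zone)
    ```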

  12. "He's the Number One Thing in My World": Application of the PRECEDE-PROCEED Model to Explore Child Car Seat Use in a Regional Community in New South Wales.

    PubMed

    Hunter, Kate; Keay, Lisa; Clapham, Kathleen; Brown, Julie; Bilston, Lynne E; Lyford, Marilyn; Gilbert, Celeste; Ivers, Rebecca Q

    2017-10-10

    We explored the factors influencing the use of age-appropriate car seats in a community with a high proportion of Aboriginal families in regional New South Wales. We conducted a survey and three focus groups with parents of children aged 3-5 years enrolled at three early learning centres on the Australian south-east coast. Survey data were triangulated with qualitative data from the focus groups and analysed using the PRECEDE-PROCEED conceptual framework. Of the 133 eligible families, 97 (73%) parents completed the survey, including 31% who reported that their children were Aboriginal. Use of age-appropriate car seats was reported by 80 (83%) of the participants, and awareness of child car seat legislation was high (91/97, 94%). Children aged 2-3 years were less likely than older children aged 4-5 years to be reported as restrained in an age-appropriate car seat (60% versus 95%; χ² = 19.14, p < 0.001). Focus group participants highlighted how important their child's safety was to them, spoke of the influence grandparents had on their use of child car seats and voiced mixed views on the value of authorised child car seat fitters. Future programs should include access to affordable car seats and target community members as well as parents with clear, consistent messages highlighting the safety benefits of using age-appropriate car seats.

  13. “He’s the Number One Thing in My World”: Application of the PRECEDE-PROCEED Model to Explore Child Car Seat Use in a Regional Community in New South Wales

    PubMed Central

    Hunter, Kate; Keay, Lisa; Clapham, Kathleen; Brown, Julie; Lyford, Marilyn; Gilbert, Celeste; Ivers, Rebecca Q.

    2017-01-01

    We explored the factors influencing the use of age-appropriate car seats in a community with a high proportion of Aboriginal families in regional New South Wales. We conducted a survey and three focus groups with parents of children aged 3–5 years enrolled at three early learning centres on the Australian south-east coast. Survey data were triangulated with qualitative data from the focus groups and analysed using the PRECEDE-PROCEED conceptual framework. Of the 133 eligible families, 97 (73%) parents completed the survey, including 31% who reported that their children were Aboriginal. Use of age-appropriate car seats was reported by 80 (83%) of the participants, and awareness of child car seat legislation was high (91/97, 94%). Children aged 2–3 years were less likely than older children aged 4–5 years to be reported as restrained in an age-appropriate car seat (60% versus 95%; χ² = 19.14, p < 0.001). Focus group participants highlighted how important their child’s safety was to them, spoke of the influence grandparents had on their use of child car seats and voiced mixed views on the value of authorised child car seat fitters. Future programs should include access to affordable car seats and target community members as well as parents with clear, consistent messages highlighting the safety benefits of using age-appropriate car seats. PMID:28994725

  14. Development of the SEASIS instrument for SEDSAT

    NASA Technical Reports Server (NTRS)

    Maier, Mark W.

    1996-01-01

    Two SEASIS experiment objectives are key: taking images that allow three-axis attitude determination and taking multi-spectral images of the Earth. During the tether mission it is also desirable to capture images of the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all imagery taken during the tether mission until the Earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs Earth observation with a telephoto lens camera. Camera video is digitized, compressed, and stored in solid-state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360 deg. azimuthal field of view by a +45 degree vertical field measured from a plane normal to the lens boresight axis. It has been shown in Mark Steadham's UAH M.S. thesis that this camera can determine three-axis attitude anytime the Earth and one other recognizable celestial object (for example, the sun) is in the field of view. This will be essentially all the time during tether deployment. (2) A second camera system using a telephoto lens and filter wheel. The camera is a standard black-and-white video camera. The filters are chosen to cover the visible spectral bands of remote sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating-point Digital Signal Processors. This processor module was supplied under ARPA contract by the Space Computer Corporation to demonstrate its use in space.
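
    Determining three-axis attitude from the Earth direction plus one other celestial direction is the classic two-vector problem; the standard TRIAD algorithm solves it in closed form. The sketch below is that textbook algorithm, offered as an illustration of the principle rather than the SEASIS flight code.

    ```python
    import numpy as np

    def triad(v1_body, v2_body, v1_ref, v2_ref):
        """Classic TRIAD attitude solution from two observed directions.

        v1/v2 are direction vectors to the Earth and Sun as seen in the
        camera (body) frame and as known in the inertial (reference) frame.
        Returns the rotation matrix taking reference-frame vectors into the
        body frame.
        """
        def frame(a, b):
            t1 = a / np.linalg.norm(a)
            t2 = np.cross(a, b)
            t2 /= np.linalg.norm(t2)
            return np.column_stack([t1, t2, np.cross(t1, t2)])

        body = frame(np.asarray(v1_body, float), np.asarray(v2_body, float))
        ref = frame(np.asarray(v1_ref, float), np.asarray(v2_ref, float))
        return body @ ref.T

    R = triad([1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 1, 0])
    print(np.round(R, 3))   # full three-axis attitude from two directions
    ```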

  15. Miranda

    NASA Image and Video Library

    1999-08-24

    One wide-angle and eight narrow-angle camera images of Miranda, taken by NASA's Voyager 2, were combined in this view. The controlled mosaic was transformed to an orthographic view centered on the south pole.

  16. Using computer graphics to design Space Station Freedom viewing

    NASA Technical Reports Server (NTRS)

    Goldsberry, B. S.; Lippert, B. O.; Mckee, S. D.; Lewis, J. L., Jr.; Mount, F. E.

    1989-01-01

    An important aspect of planning for Space Station Freedom at the United States National Aeronautics and Space Administration (NASA) is the placement of the viewing windows and cameras for optimum crewmember use. Researchers and analysts are evaluating the placement options using a three-dimensional graphics program called PLAID. This program, developed at the NASA Johnson Space Center (JSC), is being used to determine the extent to which the viewing requirements for assembly and operations are being met. A variety of window placement options in specific modules are assessed for accessibility. In addition, window and camera placements are analyzed to ensure that viewing areas are not obstructed by the truss assemblies, externally-mounted payloads, or any other station element. Other factors being examined include anthropometric design considerations, workstation interfaces, structural issues, and mechanical elements.

  17. Mounted Video Camera Captures Launch of STS-112, Shuttle Orbiter Atlantis

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A color video camera mounted to the top of the External Tank (ET) provided this spectacular never-before-seen view of the STS-112 mission as the Space Shuttle Orbiter Atlantis lifted off in the afternoon of October 7, 2002. The camera provided views as the orbiter began its ascent until it reached near-orbital speed, about 56 miles above the Earth, including a view of the front and belly of the orbiter, a portion of the Solid Rocket Booster, and ET. The video was downlinked during flight to several NASA data-receiving sites, offering the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. Atlantis carried the S1 Integrated Truss Structure and the Crew and Equipment Translation Aid (CETA) Cart. The CETA is the first of two human-powered carts that will ride along the International Space Station's railway providing a mobile work platform for future extravehicular activities by astronauts. Landing on October 18, 2002, the Orbiter Atlantis ended its 11-day mission.

  18. Mounted Video Camera Captures Launch of STS-112, Shuttle Orbiter Atlantis

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A color video camera mounted to the top of the External Tank (ET) provided this spectacular never-before-seen view of the STS-112 mission as the Space Shuttle Orbiter Atlantis lifted off in the afternoon of October 7, 2002. The camera provided views as the orbiter began its ascent until it reached near-orbital speed, about 56 miles above the Earth, including a view of the front and belly of the orbiter, a portion of the Solid Rocket Booster, and ET. The video was downlinked during flight to several NASA data-receiving sites, offering the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. Atlantis carried the S1 Integrated Truss Structure and the Crew and Equipment Translation Aid (CETA) Cart. The CETA is the first of two human-powered carts that will ride along the International Space Station's railway providing a mobile work platform for future extravehicular activities by astronauts. Landing on October 18, 2002, the Orbiter Atlantis ended its 11-day mission.

  19. Multi-scale auroral observations in Apatity: winter 2010-2011

    NASA Astrophysics Data System (ADS)

    Kozelov, B. V.; Pilgaev, S. V.; Borovkov, L. P.; Yurov, V. E.

    2012-03-01

    Routine observations of the aurora are conducted in Apatity by a set of five cameras: (i) an all-sky TV camera Watec WAT-902K (1/2" CCD) with a Fujinon YV2.2 × 1.4A-SA2 lens; (ii) two monochromatic cameras Guppy F-044B NIR (1/2" CCD) with Fujinon HF25HA-1B (1:1.4/25 mm) lenses for an 18° field of view and a 558 nm glass filter; (iii) two color cameras Guppy F-044C NIR (1/2" CCD) with Fujinon DF6HA-1B (1:1.2/6 mm) lenses for a 67° field of view. The observational complex is aimed at investigating the spatial structure of the aurora, its scaling properties, and the vertical distribution in rayed forms. The cameras were installed on the main building of the Apatity division of the Polar Geophysical Institute and at the Apatity stratospheric range. The distance between these sites is nearly 4 km, so the identical monochromatic cameras can be used as a stereoscopic system. All cameras are accessible and operated remotely via the Internet. For the 2010-2011 winter season the equipment was upgraded with special blocks for GPS time triggering, temperature control and motorized pan-tilt rotation mounts. This paper presents the equipment, samples of observed events and the website providing access to available data previews.

  20. Multi-scale auroral observations in Apatity: winter 2010-2011

    NASA Astrophysics Data System (ADS)

    Kozelov, B. V.; Pilgaev, S. V.; Borovkov, L. P.; Yurov, V. E.

    2011-12-01

    Routine observations of the aurora are conducted in Apatity by a set of five cameras: (i) an all-sky TV camera Watec WAT-902K (1/2" CCD) with a Fujinon YV2.2 × 1.4A-SA2 lens; (ii) two monochromatic cameras Guppy F-044B NIR (1/2" CCD) with Fujinon HF25HA-1B (1:1.4/25 mm) lenses for an 18° field of view and a 558 nm glass filter; (iii) two color cameras Guppy F-044C NIR (1/2" CCD) with Fujinon DF6HA-1B (1:1.2/6 mm) lenses for a 67° field of view. The observational complex is aimed at investigating the spatial structure of the aurora, its scaling properties, and the vertical distribution in rayed forms. The cameras were installed on the main building of the Apatity division of the Polar Geophysical Institute and at the Apatity stratospheric range. The distance between these sites is nearly 4 km, so the identical monochromatic cameras can be used as a stereoscopic system. All cameras are accessible and operated remotely via the Internet. For the 2010-2011 winter season the equipment was upgraded with special blocks for GPS time triggering, temperature control and motorized pan-tilt rotation mounts. This paper presents the equipment, samples of observed events and the website providing access to available data previews.

  1. User-assisted visual search and tracking across distributed multi-camera networks

    NASA Astrophysics Data System (ADS)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes, and one for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and, in the process, recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.
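
    Component (1) can be illustrated with a toy ranking evaluation: instead of thresholding an absolute match score, each probe is scored by the rank of the correct gallery identity under a feature distance. Feature extraction and the paper's actual ranking model are out of scope; the 128-D vectors below are synthetic.

    ```python
    import numpy as np

    def cmc_rank(probe, gallery, true_index):
        """Rank of the correct gallery identity for one probe feature vector.

        Relative ranking (who matches best) rather than an absolute score
        threshold, in the spirit of the system's first component.
        """
        distances = np.linalg.norm(gallery - probe, axis=1)
        order = np.argsort(distances)                 # best match first
        return int(np.where(order == true_index)[0][0]) + 1   # 1-based rank

    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(50, 128))              # 50 known identities
    probe = gallery[7] + 0.1 * rng.normal(size=128)   # noisy view of identity 7
    print(cmc_rank(probe, gallery, true_index=7))     # -> 1 if matching works
    ```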

  2. Constrained space camera assembly

    DOEpatents

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  3. View of the extended SSRMS or Canadarm2 with cloudy view in the background

    NASA Image and Video Library

    2003-01-09

    ISS006-E-16947 (9 January 2003) --- The Space Station Remote Manipulator System (SSRMS) or Canadarm2 is pictured over the Bahama Islands in this digital still camera's view taken from the International Space Station (ISS).

  4. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    PubMed

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face hardships with shopping, reading, finding objects, and so on. We therefore developed a portable auditory guidance system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals delivered to the blind user through the earphone. The user is able to recognize the type, motion state and location of objects of interest with the help of SoundView. Compared with other visual assistance techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption and easy customization.
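
    One plausible way to make a detected object localizable in stereo audio is an interaural time difference (ITD). The sketch below uses the Woodworth spherical-head approximation; SoundView's actual coding algorithms are described as customizable and are not specified in the abstract, so the constants here are assumptions.

    ```python
    import numpy as np

    HEAD_RADIUS_M = 0.0875       # average head radius, used by the Woodworth model
    SPEED_OF_SOUND = 343.0       # m/s

    def apply_itd(mono, azimuth_deg, sample_rate=44100):
        """Lateralize a mono cue by an interaural time difference (ITD).

        Positive azimuth = source to the right; the far ear hears the signal
        later. One plausible encoding, not SoundView's published algorithm.
        """
        az = np.radians(azimuth_deg)
        # Woodworth approximation of the ITD for a spherical head.
        itd = HEAD_RADIUS_M * (az + np.sin(az)) / SPEED_OF_SOUND
        delay = int(round(abs(itd) * sample_rate))
        delayed = np.concatenate([np.zeros(delay), mono])
        prompt = np.concatenate([mono, np.zeros(delay)])
        left, right = (delayed, prompt) if itd > 0 else (prompt, delayed)
        return np.stack([left, right], axis=1)
    ```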

  5. Auto-converging stereo cameras for 3D robotic tele-operation

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Aycock, Todd; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for the TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow operator control of the camera convergence angle adjustment. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an automatic convergence algorithm in a field-programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than on adjustment of the vision system. The auto-convergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
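
    The geometry behind convergence adjustment is simple: the two optical axes should intersect at the object, so the convergence angle shrinks with distance. Below is a sketch with an assumed stereo baseline; the product derives the adjustment from scene content in an FPGA rather than from a known distance.

    ```python
    import math

    BASELINE_M = 0.12   # distance between the two cameras (illustrative value)

    def convergence_angle_deg(object_distance_m):
        """Convergence angle that makes both optical axes intersect at the
        object, so it falls on corresponding image points in both cameras."""
        half_angle = math.atan2(BASELINE_M / 2.0, object_distance_m)
        return math.degrees(2.0 * half_angle)

    for d in (0.5, 1.0, 3.0, 10.0):
        print(f"{d:5.1f} m -> {convergence_angle_deg(d):5.2f} deg")
    ```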

  6. Camera aboard 'Friendship 7' photographs John Glenn during spaceflight

    NASA Technical Reports Server (NTRS)

    1962-01-01

    A camera aboard the 'Friendship 7' Mercury spacecraft photographs Astronaut John H. Glenn Jr. during the Mercury-Atlas 6 spaceflight (00302-3); it also photographs Glenn as he uses a photometer to view the sun during sunset on the MA-6 space flight (00304).

  7. Contextual view showing northeastern eucalyptus windbreak and portion of citrus ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Contextual view showing northeastern eucalyptus windbreak and portion of citrus orchard. Camera facing 118° east-southeast. - Goerlitz House, 9893 Highland Avenue, Rancho Cucamonga, San Bernardino County, CA

  8. A multi-camera system for real-time pose estimation

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin

    2007-04-01

    This paper presents a multi-camera system that performs face detection and pose estimation in real time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color, and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates about the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
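
    The spherical-head idea reduces to a toy calculation: a midline facial feature shifts horizontally by roughly r·sin(yaw) under rotation about the vertical axis. The sketch below uses the eye separation as a scale proxy for the head radius; the paper's own equations are derived from the full projected eyes-mouth triangle and are more complete.

    ```python
    import numpy as np

    def estimate_yaw(left_eye, right_eye, mouth):
        """Toy yaw estimate from 2D eye and mouth positions (pixels).

        On a spherical head, a feature on the face's midline shifts
        horizontally by r*sin(yaw); here the inter-eye distance stands in
        for the head radius scale. Illustrative only.
        """
        left_eye, right_eye, mouth = map(np.asarray, (left_eye, right_eye, mouth))
        eye_mid = (left_eye + right_eye) / 2.0
        radius = np.linalg.norm(right_eye - left_eye)   # scale proxy
        dx = mouth[0] - eye_mid[0]                       # midline shift in pixels
        return np.degrees(np.arcsin(np.clip(dx / radius, -1.0, 1.0)))

    # Frontal face: mouth sits directly below the eye midpoint -> yaw ~ 0.
    print(estimate_yaw((100, 100), (140, 100), (120, 140)))
    ```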

  9. Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization.

    PubMed

    Lee, Sing Chun; Fuerst, Bernhard; Fotouhi, Javad; Fischer, Marius; Osgood, Greg; Navab, Nassir

    2016-06-01

    This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the reconstructed patient's surface without the need to move the C-arm. An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm. Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. To the best of our knowledge, this is the first calibration method which uses only tomographic and RGBD reconstructions. This means that the method does not impose a particular shape of the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
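
    The refinement stage, Iterative Closest Point between the two reconstructions, can be written out compactly with NumPy/SciPy. This sketch assumes both point clouds are already roughly aligned (the paper's FPFH-based coarse registration is omitted) and uses the closed-form Kabsch solution at each iteration.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iterations=30):
        """Point-to-point ICP between two (N, 3) point clouds.

        Returns the accumulated rotation R and translation t that map the
        source cloud onto the target cloud. A didactic sketch, not the
        paper's calibration pipeline.
        """
        R, t = np.eye(3), np.zeros(3)
        tree = cKDTree(target)
        src = source.copy()
        for _ in range(iterations):
            _, idx = tree.query(src)               # closest-point correspondences
            matched = target[idx]
            # Kabsch: best rigid transform for the current correspondences.
            mu_s, mu_m = src.mean(0), matched.mean(0)
            U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t_step = mu_m - R_step @ mu_s
            src = src @ R_step.T + t_step
            R, t = R_step @ R, R_step @ t + t_step
        return R, t
    ```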

  10. Contextual view of Warner's Ranch. Third of three sequential views ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Contextual view of Warner's Ranch. Third of three sequential views (from west to east) of the buildings in relation to the surrounding geography. Note approximate location of Overland Trail crossing left to right. Camera facing northeast - Warner Ranch, Ranch House, San Felipe Road (State Highway S2), Warner Springs, San Diego County, CA

  11. Photogrammetric Modeling and Image-Based Rendering for Rapid Virtual Environment Creation

    DTIC Science & Technology

    2004-12-01

    area and different methods have been proposed. Pertinent methods include: Camera Calibration, Structure from Motion, Stereo Correspondence, and Image-Based Rendering. 1.1.1 Camera Calibration. Determining the 3D structure of a model from multiple views becomes simpler if the intrinsic (or internal) ... can introduce significant nonlinearities into the image. We have found that camera calibration is a straightforward process which can simplify the

  12. KSC-02pd1374

    NASA Image and Video Library

    2002-09-26

    KENNEDY SPACE CENTER, FLA. - A view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  13. KSC-02pd1376

    NASA Image and Video Library

    2002-09-26

    KENNEDY SPACE CENTER, FLA. - A closeup view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  14. KSC-02pd1375

    NASA Image and Video Library

    2002-09-26

    KENNEDY SPACE CENTER, FLA. - A closeup view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  15. Climate effects of non-compliant Volkswagen diesel cars

    NASA Astrophysics Data System (ADS)

    Tanaka, Katsumasa; Lund, Marianne T.; Aamaas, Borgar; Berntsen, Terje

    2018-04-01

    On-road operation of Volkswagen light-duty diesel vehicles equipped with defeat devices causes emissions of NOx up to 40 times above emission standards. Higher on-road NOx emissions are a widespread problem not limited to Volkswagen vehicles, but the Volkswagen violations brought the issue into the spotlight. While several studies have investigated the health impacts of these high NOx emissions, the climatic impacts have not been quantified. Here we show that, during actual on-road operation, such diesel cars generate a larger warming on the time scale of several years, but a smaller warming on the decadal time scale, than in vehicle certification tests. The difference in longer-term warming levels, however, depends on the underlying driving conditions. Furthermore, in the presence of defeat devices, the supposed climatic advantage of 'clean diesel' cars over gasoline cars, in terms of global-mean temperature change, does not, in our view, necessarily hold.

  16. A remote camera at Launch Pad 39B, at the Kennedy Space Center (KSC), recorded this profile view of

    NASA Technical Reports Server (NTRS)

    1996-01-01

    STS-75 LAUNCH VIEW --- A remote camera at Launch Pad 39B, at the Kennedy Space Center (KSC), recorded this profile view of the Space Shuttle Columbia as it cleared the tower to begin the mission. The liftoff occurred on schedule at 3:18:00 p.m. (EST), February 22, 1996. Onboard Columbia for the scheduled two-week mission were astronauts Andrew M. Allen, commander; Scott J. Horowitz, pilot; Franklin R. Chang-Diaz, payload commander; and astronauts Maurizio Cheli, Jeffrey A. Hoffman and Claude Nicollier, along with payload specialist Umberto Guidoni. Cheli and Nicollier represent the European Space Agency (ESA), while Guidoni represents the Italian Space Agency (ASI).

  17. PBF (PER620) interior of Reactor Room. Camera facing south from ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    PBF (PER-620) interior of Reactor Room. Camera facing south from stairway platform in southwest corner (similar to platform in view at left). Reactor was beneath water in circular tank. Fuel was stored in the canal north of it. The platform and apparatus at right are the reactor bridge with control rod mechanisms and actuators; the entire apparatus swung over the reactor and pool during operations. Personnel in view are involved with decontamination and preparation of the facility for demolition. Note rails near ceiling for crane; motor for roll-up door at upper center of view. Date: March 2004. INEEL negative no. HD-41-3-2 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID

  18. MISR Scans the Texas-Oklahoma Border

    NASA Technical Reports Server (NTRS)

    2000-01-01

    These MISR images of Oklahoma and north Texas were acquired on March 12, 2000 during Terra orbit 1243. The three images on the left, from top to bottom, are from the 70-degree forward viewing camera, the vertical-viewing (nadir) camera, and the 70-degree aftward viewing camera. The higher brightness, bluer tinge, and reduced contrast of the oblique views result primarily from scattering of sunlight in the Earth's atmosphere, though some color and brightness variations are also due to differences in surface reflection at the different angles. The longer slant path through the atmosphere at the oblique angles also accentuates the appearance of thin, high-altitude cirrus clouds.
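
    As a rough illustration of the slant-path effect (a plane-parallel approximation, not stated in the caption): the path length through the atmosphere scales as 1/cos(θ) with view angle θ, so the 70-degree cameras look through roughly 1/cos 70° ≈ 2.9 times as much air as the nadir camera, which is why scattered sunlight and thin cirrus are so much more prominent in the oblique views.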

    On the right, two areas from the nadir camera image are shown in more detail, along with notations highlighting major geographic features. The south bank of the Red River marks the boundary between Texas and Oklahoma. Traversing brush-covered and grassy plains, rolling hills, and prairies, the Red River and the Canadian River are important resources for farming, ranching, public drinking water, hydroelectric power, and recreation. Both originate in New Mexico and flow eastward, their waters eventually discharging into the Mississippi River.

    A smoke plume to the north of the Ouachita Mountains and east of Lake Eufaula is visible in the detailed nadir imagery. The plume is also very obvious at the 70-degree forward view angle, to the right of center and about one-fourth of the way down from the top of the image.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  19. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    NASA Astrophysics Data System (ADS)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
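
    The abstract names two hand-off criteria, distance from sensor and occlusion, without publishing the scheduler itself. The sketch below is a hypothetical greedy camera-to-target assignment built only on those two criteria; all names, parameters, and the occlusion test are illustrative assumptions, not ACT-Vision's actual policy, which also weighs image quality and total coverage.

    ```python
    from dataclasses import dataclass
    import math

    @dataclass
    class Camera:
        cam_id: str
        position: tuple   # (x, y) in a shared ground plane
        max_range: float  # beyond this, imagery is too coarse to track

    @dataclass
    class Target:
        target_id: str
        position: tuple

    def occluded(camera: Camera, target: Target) -> bool:
        """Placeholder: a real system would test the line of sight
        against a site model; here every pair is assumed visible."""
        return False

    def assign(cameras: list, targets: list) -> dict:
        """Greedily give each target the nearest free camera that can
        see it, so each camera tracks at most one target at a time."""
        free = {c.cam_id for c in cameras}
        plan = {}
        for t in targets:
            best = None
            for c in cameras:
                if c.cam_id not in free or occluded(c, t):
                    continue
                d = math.dist(c.position, t.position)
                if d <= c.max_range and (best is None or d < best[0]):
                    best = (d, c.cam_id)
            if best:
                plan[t.target_id] = best[1]
                free.discard(best[1])
        return plan
    ```

    Re-running such an assignment as targets move is one plausible way to realize the "uninterrupted tracking with a minimal number of sensors" behavior the abstract describes, since a camera is only reassigned when a target leaves its range or line of sight.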

  20. LSST Camera Optics Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riot, V J; Olivier, S; Bauman, B

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. The optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.
