Sample records for camera control unit

  1. Control system for several rotating mirror camera synchronization operation

    NASA Astrophysics Data System (ADS)

    Liu, Ningwen; Wu, Yunfeng; Tan, Xianxiang; Lai, Guoji

    1997-05-01

    This paper introduces a single-chip microcomputer control system for the synchronized operation of several rotating mirror high-speed cameras. The system consists of four parts: the microcomputer control unit (comprising the synchronization part, the precise measurement part, and the time delay part), the shutter control unit, the motor driving unit, and the high voltage pulse generator unit. The control system has been used to control the synchronized operation of GSI cameras (driven by a motor) and FJZ-250 rotating mirror cameras (driven by a gas-driven turbine). Films of the same object have been obtained from different directions, at different speeds or at the same speed.
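    The timing logic such a synchronization unit performs can be illustrated with a small sketch (hypothetical; the function name, the reference-pulse model, and the latency parameter are my assumptions, not details from the paper): given the measured rotation period of a mirror, the controller computes the delay from the mirror's once-per-revolution reference pulse to the shutter trigger.

```python
def shutter_delay(period_s: float, trigger_fraction: float, latency_s: float = 0.0) -> float:
    """Delay from the mirror's once-per-revolution reference pulse to the
    shutter trigger, so exposure starts at a chosen mirror angle.

    period_s         -- measured rotation period of the mirror (s)
    trigger_fraction -- fraction of a revolution at which exposure should start
    latency_s        -- fixed shutter/electronics latency to subtract (s)
    """
    # Wrap into [0, period) so a large latency rolls over to the next revolution.
    return (period_s * trigger_fraction - latency_s) % period_s
```

    Synchronizing several cameras then reduces to issuing each camera's trigger at its own computed delay from a common reference pulse.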

  2. High-performance dual-speed CCD camera system for scientific imaging

    NASA Astrophysics Data System (ADS)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned into a `camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 × 10^6 or 5 × 10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber optic link.
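    As a quick consistency check on the quoted figures, the raw digital output rate follows from pixel rate × bit depth (the helper name is mine, not from the paper):

```python
def readout_mbit_per_s(pixels_per_s: float, bits_per_pixel: int) -> float:
    """Raw digital output rate in megabits per second."""
    return pixels_per_s * bits_per_pixel / 1e6

# Standard camera: 12-bit pixels at the two readout speeds.
slow = readout_mbit_per_s(1e6, 12)     # 12 Mbit/s
fast = readout_mbit_per_s(5e6, 12)     # 60 Mbit/s
# Oceanographic variant: 16-bit pixels at 1.5e5 pixels/s.
ocean = readout_mbit_per_s(1.5e5, 16)  # 2.4 Mbit/s
```

    All three rates sit comfortably within the 100 Mbyte/second serial link the paper describes.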

  3. Unstructured Facility Navigation by Applying the NIST 4D/RCS Architecture

    DTIC Science & Technology

    2006-07-01

    control, and the planner); wireless data and emergency stop radios; GPS receiver; inertial navigation unit; dual stereo cameras; infrared sensors...current Actuators Wheel motors, camera controls Scale & filter signals status commands commands commands GPS Antenna Dual stereo cameras...used in the sensory processing module include the two pairs of stereo color cameras, the physical bumper and infrared bumper sensors, the motor

  4. Applications of digital image acquisition in anthropometry

    NASA Technical Reports Server (NTRS)

    Woolford, B.; Lewis, J. L.

    1981-01-01

    A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.
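    Under a simplifying orthographic assumption (mine, not the paper's), combining one 'X' camera view with the 'Y' camera view to recover three-dimensional coordinates looks like:

```python
def point_3d(x_cam_img, y_cam_img):
    """Combine image coordinates from one 'X' camera and the 'Y' camera.

    Simplified orthographic model in calibrated units: the 'Y' camera,
    looking along the Y axis, reports (x, z); an 'X' camera, looking
    along the X axis, reports (y, z). The redundant z readings are averaged.
    """
    y, z_from_x = x_cam_img
    x, z_from_y = y_cam_img
    return (x, y, (z_from_x + z_from_y) / 2.0)
```

    The real system works with perspective cameras and per-camera thresholds, so this is only the geometric core of why one 'X' image plus the 'Y' image suffices.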

  5. Image Intensifier Modules For Use With Commercially Available Solid State Cameras

    NASA Astrophysics Data System (ADS)

    Murphy, Howard; Tyler, Al; Lake, Donald W.

    1989-04-01

    A modular approach to design has contributed greatly to the success of the family of machine vision video equipment produced by EG&G Reticon during the past several years. Internal modularity allows high-performance area (matrix) and line scan cameras to be assembled with two or three electronic subassemblies with very low labor costs, and permits camera control and interface circuitry to be realized by assemblages of various modules suiting the needs of specific applications. Product modularity benefits equipment users in several ways. Modular matrix and line scan cameras are available in identical enclosures (Fig. 1), which allows enclosure components to be purchased in volume for economies of scale and allows field replacement or exchange of cameras within a customer-designed system to be easily accomplished. The cameras are optically aligned (boresighted) at final test; modularity permits optical adjustments to be made with the same precise test equipment for all camera varieties. The modular cameras contain two, or sometimes three, hybrid microelectronic packages (Fig. 2). These rugged and reliable "submodules" perform all of the electronic operations internal to the camera except for the job of image acquisition performed by the monolithic image sensor. Heat produced by electrical power dissipation in the electronic modules is conducted through low-resistance paths to the camera case by the metal plates, which results in a thermally efficient and environmentally tolerant camera with low manufacturing costs. A modular approach has also been followed in the design of the camera control, video processor, and computer interface accessory called the Formatter (Fig. 3). This unit can be attached directly onto either a line scan or matrix modular camera to form a self-contained unit, or connected via a cable to retain the advantages inherent to a small, lightweight, and rugged image sensing component.
    Available modules permit the bus-structured Formatter to be configured as required by a specific camera application. Modular line and matrix scan cameras incorporating sensors with fiber optic faceplates (Fig. 4) are also available. These units retain the advantages of interchangeability, simple construction, ruggedness, and optical precision offered by the more common lens-input units. Fiber optic faceplate cameras are used for a wide variety of applications. A common usage involves mating of the Reticon-supplied camera to a customer-supplied intensifier tube for low-light-level and/or short-exposure-time situations.

  6. Auto-converging stereo cameras for 3D robotic tele-operation

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Aycock, Todd; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for the TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow operator control of the camera convergence angle. This adjustment allowed optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an automatic convergence algorithm in a field-programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than on adjustment of the vision system. The autoconvergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
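    The geometry behind the convergence adjustment is simple: for a stereo baseline b and an object at distance d, the two optical axes intersect at the object when the total toe-in angle is 2·atan(b/2d). A sketch (function name and units are my choice, not the paper's):

```python
import math

def convergence_angle_deg(baseline_m: float, distance_m: float) -> float:
    """Total convergence (toe-in) angle, in degrees, that makes both
    optical axes of a stereo pair intersect at an object distance_m away."""
    return 2.0 * math.degrees(math.atan((baseline_m / 2.0) / distance_m))
```

    An autoconvergence algorithm effectively estimates the object distance from scene content and servos the cameras toward this angle; for distant objects the angle tends to zero (parallel axes).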

  7. An Automatic Portable Telecine Camera.

    DTIC Science & Technology

    1978-08-01

    five television frames to achieve synchronous operation, that is about 0.2 second. 6.3 Video recorder noise immunity The synchronisation pulse separator...display is filmed by a modified 16 mm cine camera driven by a control unit in which the camera supply voltage is derived from the field synchronisation ...pulses of the video signal. Automatic synchronisation of the camera mechanism is achieved over a wide range of television field frequencies and the

  8. Wrist Camera Orientation for Effective Telerobotic Orbital Replaceable Unit (ORU) Changeout

    NASA Technical Reports Server (NTRS)

    Jones, Sharon Monica; Aldridge, Hal A.; Vazquez, Sixto L.

    1997-01-01

    The Hydraulic Manipulator Testbed (HMTB) is the kinematic replica of the Flight Telerobotic Servicer (FTS). One use of the HMTB is to evaluate advanced control techniques for accomplishing robotic maintenance tasks on board the Space Station. Most maintenance tasks involve the direct manipulation of the robot by a human operator when high-quality visual feedback is important for precise control. An experiment was conducted in the Systems Integration Branch at the Langley Research Center to compare several configurations of the manipulator wrist camera for providing visual feedback during an Orbital Replaceable Unit changeout task. Several variables were considered such as wrist camera angle, camera focal length, target location, lighting. Each study participant performed the maintenance task by using eight combinations of the variables based on a Latin square design. The results of this experiment and conclusions based on data collected are presented.

  9. KSC-04pd1226

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Rick Wetherington checks out one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  10. KSC-04pd1220

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen works on the recently acquired Contraves-Goerz Kineto Tracking Mount (KTM). Trailer-mounted with a center console/seat and electric drive tracking mount, the KTM includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff. There are 10 KTMs certified for use on the Eastern Range.

  11. KSC-04pd1219

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen works on the recently acquired Contraves-Goerz Kineto Tracking Mount (KTM). Trailer-mounted with a center console/seat and electric drive tracking mount, the KTM includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff. There are 10 KTMs certified for use on the Eastern Range.

  12. KSC-04pd1227

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen checks out one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  13. The suitability of lightfield camera depth maps for coordinate measurement applications

    NASA Astrophysics Data System (ADS)

    Rangappa, Shreedhar; Tailor, Mitul; Petzing, Jon; Kinnell, Peter; Jackson, Michael

    2015-12-01

    Plenoptic cameras can capture 3D information in one exposure without the need for structured illumination, allowing grey-scale depth maps of the captured image to be created. The Lytro, a consumer-grade plenoptic camera, provides a cost-effective method of measuring the depth of multiple objects under controlled lighting conditions. In this research, camera control variables, environmental sensitivity, image distortion characteristics, and the effective working range of two first-generation Lytro cameras were evaluated. In addition, a calibration process was created for the Lytro cameras to deliver three-dimensional output depth maps represented in SI units (metres). The novel results show depth accuracy of +10.0 mm to -20.0 mm and repeatability of 0.5 mm. For the lateral X and Y coordinates, the accuracy was +1.56 μm to -2.59 μm and the repeatability was 0.25 μm.
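    Delivering depth in SI units, as the calibration described above does, amounts to fitting a map from the camera's grey-level depth values to known metric depths. A minimal linear version (the actual calibration is surely more elaborate; all names here are mine):

```python
def fit_linear(grey_levels, depths_m):
    """Least-squares fit of depth = a * grey + b from calibration pairs
    (grey-level depth-map values against measured metric depths)."""
    n = len(grey_levels)
    mx = sum(grey_levels) / n
    my = sum(depths_m) / n
    sxx = sum((x - mx) ** 2 for x in grey_levels)
    sxy = sum((x - mx) * (y - my) for x, y in zip(grey_levels, depths_m))
    a = sxy / sxx
    return a, my - a * mx

def grey_to_metres(grey, a, b):
    """Apply the fitted calibration to a single grey-level value."""
    return a * grey + b
```

    With such a fit in hand, every pixel of a grey-scale depth map can be converted to metres, which is what makes accuracy and repeatability statements in mm meaningful.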

  14. Design and Development of Multi-Purpose CCD Camera System with Thermoelectric Cooling: Hardware

    NASA Astrophysics Data System (ADS)

    Kang, Y.-W.; Byun, Y. I.; Rhee, J. H.; Oh, S. H.; Kim, D. K.

    2007-12-01

    We designed and developed a multi-purpose CCD camera system for three kinds of CCDs made by KODAK Co.: KAF-0401E (768×512), KAF-1602E (1536×1024), and KAF-3200E (2184×1472). The system supports a fast USB port as well as a parallel port for data I/O and control signals. The packaging is based on two-stage circuit boards for size reduction and contains a built-in filter wheel. Basic hardware components include the clock pattern circuit, A/D conversion circuit, CCD data flow control circuit, and CCD temperature control unit. The CCD temperature can be controlled with an accuracy of approximately 0.4° C over a maximum range of 33° C. This CCD camera system has a readout noise of 6 e^{-} and a system gain of 5 e^{-}/ADU. A total of 10 CCD camera systems were produced, and our tests show that all of them show passable performance.
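    The quoted gain and readout noise translate directly into a simple photon-transfer estimate; a small illustration (the quadrature noise model is the standard CCD assumption, not a detail from this paper):

```python
def adu_to_electrons(adu: float, gain_e_per_adu: float = 5.0) -> float:
    """Convert a digitized value to detected electrons using the system gain."""
    return adu * gain_e_per_adu

def total_noise_e(signal_e: float, read_noise_e: float = 6.0) -> float:
    """Shot noise (sqrt of the signal in electrons) and read noise,
    added in quadrature."""
    return (signal_e + read_noise_e ** 2) ** 0.5
```

    For example, a signal of 64 e^{-} already carries more shot noise than the 6 e^{-} readout floor contributes, so the camera becomes shot-noise limited at quite low light levels.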

  15. Hand-Held Self-Maneuvering Unit to be used during EVA on Gemini 4

    NASA Image and Video Library

    1965-06-02

    Hand-Held Self-Maneuvering Unit to be used during extravehicular activity (EVA) on Gemini 4 flight. It is an integral unit that contains its own high pressure metering valves and nozzles required to produce controlled thrust. A camera is mounted on the front of the unit.

  16. A compact high-definition low-cost digital stereoscopic video camera for rapid robotic surgery development.

    PubMed

    Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C

    2012-01-01

    Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.

  17. KSC-04pd1223

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen makes adjustments on one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with a center console/seat and electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  18. KSC-04pd1221

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operators Rick Worthington (left) and Kenny Allen work on one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with a center console/seat and electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  19. KSC-04pd1225

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen stands in the center console area of one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric-drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  20. KSC-04pd1224

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Rick Wetherington sits in the center console seat of one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  1. KSC-04pd1222

    NASA Image and Video Library

    2004-05-19

    KENNEDY SPACE CENTER, FLA. -- Johnson Controls operators Rick Wetherington (left) and Kenny Allen work on two of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with a center console/seat and electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.

  2. Very High-Speed Digital Video Capability for In-Flight Use

    NASA Technical Reports Server (NTRS)

    Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald

    2006-01-01

    A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub(TradeMark) synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data can also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft.
Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and quantify the aerodynamic trajectories of the debris.
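    Record duration in such DRAM-buffered high-speed cameras is set by memory size and frame data rate; a rough sizing helper (the DRAM size, the 1 byte/pixel packing, and all names below are illustrative assumptions, not figures from the article):

```python
def record_seconds(dram_bytes: float, width: int, height: int,
                   pps: float, bytes_per_pixel: float = 1.0) -> float:
    """Seconds of imagery that fit in DRAM at the given frame rate."""
    frame_bytes = width * height * bytes_per_pixel
    return dram_bytes / (frame_bytes * pps)

# e.g. a hypothetical 1 GiB of DRAM at full resolution, 1,000 pps:
t = record_seconds(2**30, 1280, 1024, 1000)
```

    The steep trade-off is visible immediately: raising the rate toward 10,000 pps divides the available record time by ten, which is why such systems buffer to DRAM and download afterwards rather than streaming continuously.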

  3. AO WFS detector developments at ESO to prepare for the E-ELT

    NASA Astrophysics Data System (ADS)

    Downing, Mark; Casali, Mark; Finger, Gert; Lewis, Steffan; Marchetti, Enrico; Mehrgan, Leander; Ramsay, Suzanne; Reyes, Javier

    2016-07-01

    ESO has a very active, ongoing AO WFS detector development program, not only to meet the needs of the current crop of instruments for the VLT, but also to gather requirements, plan, and develop detectors and controllers/cameras for the instruments in design and being proposed for the E-ELT. This paper provides an overall summary of the AO WFS detector requirements of the E-ELT instruments currently in design and of the telescope focal units. This is followed by a description of the many interesting detector, controller, and camera developments underway at ESO to meet these needs: a) the rationale behind, and plan to upgrade, the 240x240-pixel, 2000-fps, "zero noise", L3Vision CCD220 sensor-based AONGC camera; b) the status of the LGSD/NGSD high-QE, 3e- RoN, fast 700-fps, 1760x1680-pixel visible CMOS imager and camera development; c) the status of and development plans for the Selex SAPHIRA NIR eAPD and controller. Most of the instrument and detector/camera developments are described in more detail in other papers at this conference.

  4. Multiple-Agent Air/Ground Autonomous Exploration Systems

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Chao, Tien-Hsin; Tarbell, Mark; Dohm, James M.

    2007-01-01

    Autonomous systems of multiple-agent air/ground robotic units for exploration of the surfaces of remote planets are undergoing development. Modified versions of these systems could be used on Earth to perform tasks in environments dangerous or inaccessible to humans: examples of tasks could include scientific exploration of remote regions of Antarctica, removal of land mines, cleanup of hazardous chemicals, and military reconnaissance. A basic system according to this concept (see figure) would include a unit, suspended by a balloon or a blimp, that would be in radio communication with multiple robotic ground vehicles (rovers) equipped with video cameras and possibly other sensors for scientific exploration. The airborne unit would be free-floating, controlled by thrusters, or tethered either to one of the rovers or to a stationary object in or on the ground. Each rover would contain a semi-autonomous control system for maneuvering and would function under the supervision of a control system in the airborne unit. The rover maneuvering control system would utilize imagery from the onboard camera to navigate around obstacles. Avoidance of obstacles would also be aided by readout from an onboard (e.g., ultrasonic) sensor. Together, the rover and airborne control systems would constitute an overarching closed-loop control system to coordinate scientific exploration by the rovers.

  5. Recent developments with the Mars Observer Camera graphite/epoxy structure

    NASA Astrophysics Data System (ADS)

    Telkamp, Arthur R.

    1992-09-01

    The Mars Observer Camera (MOC) is one of the instruments aboard the Mars Observer spacecraft, to be launched no later than September 1992, whose mission is to geologically and climatologically map the Martian surface and atmosphere over a period of one Martian year. This paper discusses the events in the development of MOC that took place in the past two years, with special attention given to the implementation of thermal blankets, shields, and thermal control paints to limit solar absorption while controlling stray light; vibration testing of Flight Unit No. 1; and thermal expansion testing. Results are presented of thermal-vacuum testing of Flight Unit No. 1. It was found that, although the temperature profiles were as predicted, the thermally induced focus displacements were not.

  6. Robust gaze-steering of an active vision system against errors in the estimated parameters

    NASA Astrophysics Data System (ADS)

    Han, Youngmo

    2015-01-01

    Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.
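    The depth-free PD idea on the unit sphere can be sketched in a toy form (my own simplification: the gains, the tangent-error model, and the re-normalization step below are illustrative, not the paper's LMI-designed controller):

```python
import math

def _normalize(v):
    """Re-project a 3-vector onto the unit sphere."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def pd_gaze_step(gaze, gaze_prev, target, kp=0.5, kd=0.1):
    """One PD update of a gaze direction constrained to the unit sphere.

    gaze, gaze_prev, target: unit 3-vectors. The proportional term pulls
    the gaze toward the target direction; the derivative term damps the
    motion using the previous gaze; the result is re-normalized.
    """
    err = tuple(t - g for t, g in zip(target, gaze))
    vel = tuple(g - gp for g, gp in zip(gaze, gaze_prev))
    return _normalize(tuple(g + kp * e - kd * v
                            for g, e, v in zip(gaze, err, vel)))
```

    Because the update works purely with directions, no depth estimate enters the loop, which is the property the paper exploits to avoid uncertainty in estimated depth and calibration parameters.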

  7. Rugged Video System For Inspecting Animal Burrows

    NASA Technical Reports Server (NTRS)

    Triandafils, Dick; Maples, Art; Breininger, Dave

    1992-01-01

    Video system designed for examining interiors of burrows of gopher tortoises, 5 in. (13 cm) in diameter or greater, to depth of 18 ft. (about 5.5 m), includes video camera, video cassette recorder (VCR), television monitor, control unit, and power supply, all carried in backpack. Polyvinyl chloride (PVC) poles used to maneuver camera into (and out of) burrows, stiff enough to push camera into burrow, but flexible enough to bend around curves. Adult tortoises and other burrow inhabitants observable, young tortoises and such small animals as mice obscured by sand or debris.

  8. United States Homeland Security and National Biometric Identification

    DTIC Science & Technology

    2002-04-09

    security number. Biometrics is the use of unique individual traits such as fingerprints, iris eye patterns, voice recognition, and facial recognition to...technology to control access onto their military bases using a Defense Manpower Management Command developed software application. FACIAL Facial recognition systems...installed facial recognition systems in conjunction with a series of 200 cameras to fight street crime and identify terrorists. The cameras, which are

  9. 78 FR 60248 - Order Denying Export Privileges

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-01

    ... DEPARTMENT OF COMMERCE Bureau of Industry and Security Order Denying Export Privileges In the... commit an offense against the United States, that is, to willfully export from the United States to Belarus export-controlled items, including but not limited to L-3 x200xp Handheld Thermal Imaging Cameras...

  10. System Configuration and Operation Plan of Hayabusa2 DCAM3-D Camera System for Scientific Observation During SCI Impact Experiment

    NASA Astrophysics Data System (ADS)

    Ogawa, Kazunori; Shirai, Kei; Sawada, Hirotaka; Arakawa, Masahiko; Honda, Rie; Wada, Koji; Ishibashi, Ko; Iijima, Yu-ichi; Sakatani, Naoya; Nakazawa, Satoru; Hayakawa, Hajime

    2017-07-01

    An artificial impact experiment is scheduled for 2018-2019 in which an impactor will collide with asteroid 162137 Ryugu (1999 JU3) during the asteroid rendezvous phase of the Hayabusa2 spacecraft. The small carry-on impactor (SCI) will shoot a 2-kg projectile at 2 km/s to create a crater 1-10 m in diameter, with an expected subsequent ejecta curtain on a 100-m scale on an ideal sandy surface. A miniaturized deployable camera (DCAM3) unit will separate from the spacecraft at about 1 km from the impact and simultaneously conduct optical observations of the experiment. We designed and developed a camera system (DCAM3-D) in the DCAM3, specialized for scientific observations of the impact phenomenon, in order to clarify the subsurface structure, construct theories of impact applicable in a microgravity environment, and identify the impact point on the asteroid. The DCAM3-D system consists of a miniaturized camera with wide-angle and high-focusing performance, high-speed radio communication devices, and control units with large data storage on both the DCAM3 unit and the spacecraft. These components were successfully developed under severe constraints of size, mass, and power, and the whole DCAM3-D system has passed all tests verifying its functions, performance, and environmental tolerance. The results indicated sufficient potential to conduct the scientific observations during the SCI impact experiment. An operation plan was carefully considered along with the configuration and time schedule of the impact experiment, and pre-programmed into the control unit before launch. In this paper, we describe details of the system design concept, specifications, and operating plan of the DCAM3-D system, focusing on the feasibility of the scientific observations.

  11. Solar Extreme Ultraviolet Rocket Telescope Spectrograph (SERTS) Detector and Electronics Subsystems

    NASA Astrophysics Data System (ADS)

    Payne, L.; Haas, J. P.; Linard, D.; White, L.

    1997-12-01

    The Laboratory for Astronomy and Solar Physics at Goddard Space Flight Center uses a variety of imaging sensors for its instrumentation programs. This paper describes the detector system for SERTS. The SERTS rocket telescope uses an open-faceplate, single-plate MCP tube as the primary detector for EUV spectra from the Sun. The optical output of this detector is fiber-optically coupled to a cooled, large-format CCD. This CCD is operated using a software-controlled camera controller based upon a design used for the SOHO/CDS mission. The camera is a general-purpose design, with a topology that supports multiple types of imaging devices. Multiport devices (up to 4 ports) and multiphase clocks are supported, as is variable-speed operation. Clock speeds from 100 kHz to 1 MHz have been used, and the topology is currently being extended to support 10 MHz operation. The form factor for the camera system is based on the popular VME bus. Because the tube is an open-faceplate design, the detector system has an assortment of vacuum doors and plumbing to allow operation in vacuum while providing for safe storage at normal atmosphere. Three vac-ion pumps are used to maintain working vacuum at all times. Marshall Space Flight Center provided the SERTS program with HVPS units for both the vac-ion pumps and the MCP tube. The MCP tube HVPS is a direct derivative of the design used for the SXI mission for NOAA. Auxiliary equipment includes a frame buffer that works either as a multi-frame storage unit or as a photon-counting accumulation unit. This unit also performs interface buffering so that the camera may appear as a piece of GPIB instrumentation.

  12. Compact Autonomous Hemispheric Vision System

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.

    2012-01-01

    Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a 92° FOV, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
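
    The azimuth coverage described above implies a generous overlap between adjacent cameras, which simple arithmetic makes explicit (a sketch assuming evenly spaced cameras, which the record does not state explicitly):

```python
# Azimuth-coverage check for the six-camera carousel. Even spacing is an
# assumption for illustration; the record does not give the mounting angles.
N_CAMERAS = 6
FOV_DEG = 92.0

total_deg = N_CAMERAS * FOV_DEG              # 552 degrees of combined coverage
overlap_total_deg = total_deg - 360.0        # 192 degrees shared across seams
overlap_per_seam_deg = overlap_total_deg / N_CAMERAS   # 32 degrees per adjacent pair
```

    The roughly 32° of shared view per seam is what allows image stitching without gaps despite mounting tolerances.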

  13. 15 CFR 740.16 - Additional permissive reexports (APR).

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... North Korea and the commodity being reexported is controlled for national security reasons. (b..., South Korea, Spain, Sweden, Switzerland, Turkey, and the United Kingdom if: (i) Such cameras are fully...

  14. 15 CFR 740.16 - Additional permissive reexports (APR).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... North Korea and the commodity being reexported is controlled for national security reasons. (b..., South Korea, Spain, Sweden, Switzerland, Turkey, and the United Kingdom if: (i) Such cameras are fully...

  15. LCOGT Imaging Lab

    NASA Astrophysics Data System (ADS)

    Tufts, Joseph R.; Lobdill, Rich; Haldeman, Benjamin J.; Haynes, Rachel; Hawkins, Eric; Burleson, Ben; Jahng, David

    2008-07-01

    The Las Cumbres Observatory Global Telescope Network (LCOGT) is an ambitious project to build and operate, within 5 years, a worldwide robotic network of fifty 0.4 m, 1 m, and 2 m telescopes sharing identical instrumentation and optimized for precision photometry of time-varying sources. The telescopes, instrumentation, and software are all developed in house, with two 2 m telescopes already installed. The LCOGT Imaging Lab is responsible for assembly and characterization of the network's cameras and instrumentation. In addition to a fully equipped CNC machine shop, two electronics labs, and a future optics lab, the Imaging Lab is designed from the ground up to be a superb environment for bare detectors, precision filters, and assembled instruments. At the heart of the lab is an ISO class 5 cleanroom with full ionization. Surrounding this, the class 7 main lab houses equipment for detector characterization, including QE and CTE, and equipment for measuring transmission and reflection of optics. Although the first science cameras installed, two TEC-cooled e2v 42-40 deep-depletion-based units and two CryoTiger-cooled Fairchild Imaging CCD486-BI-based units, are from outside manufacturers, their 18-position filter wheels and the remainder of the network's science cameras, controllers, and instrumentation will be built in house. Currently being designed, the first-generation LCOGT cameras for the network's 1 m telescopes use existing CCD486-BI devices and an in-house controller. Additionally, the controller uses digital signal processing to optimize readout noise vs. speed, and all instrumentation uses embedded microprocessors for communication over ethernet.

  16. WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research.

    PubMed

    Nazir, Sajid; Newey, Scott; Irvine, R Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; Wal, René van der

    2017-01-01

    The widespread availability of relatively cheap, reliable and easy to use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named 'WiseEye', designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which the Passive Infrared triggering is confirmed through other modalities (i.e., radar, pixel change) to reduce the occurrence of false positive images. This concept, together with user-definable metadata, aided the identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management.
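
    The confirmatory-sensing idea described above can be sketched in a few lines. The function name and threshold below are hypothetical illustrations, not part of the WiseEye code base:

```python
# Illustrative sketch of "confirmatory sensing": a PIR trigger is accepted
# only if at least one other modality (radar, pixel change) agrees.
# All names and the 2% pixel-change threshold are assumptions.
def confirm_trigger(pir_fired, radar_detects, pixel_change_fraction,
                    pixel_threshold=0.02):
    """Return True only when the PIR event is corroborated by another modality."""
    if not pir_fired:
        return False
    return radar_detects or pixel_change_fraction >= pixel_threshold

# A PIR event with no corroboration is treated as a false positive:
assert confirm_trigger(True, False, 0.001) is False
assert confirm_trigger(True, True, 0.0) is True
assert confirm_trigger(True, False, 0.05) is True
```

    Requiring agreement between independent modalities is what suppresses the false positives (e.g., from vegetation movement) that single-sensor traps record.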

  17. WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research

    PubMed Central

    Nazir, Sajid; Newey, Scott; Irvine, R. Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; van der Wal, René

    2017-01-01

    The widespread availability of relatively cheap, reliable and easy to use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named ‘WiseEye’, designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which the Passive Infrared triggering is confirmed through other modalities (i.e., radar, pixel change) to reduce the occurrence of false positive images. This concept, together with user-definable metadata, aided the identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management. PMID:28076444

  18. 3D vision upgrade kit for TALON robot

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad

    2010-04-01

    In this paper, we report on the development of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat-panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV Robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  19. Help for the Visually Impaired

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Low Vision Enhancement System (LVES) is a video headset that offers people with low vision a view of their surroundings equivalent to the image on a five-foot television screen four feet from the viewer. It will not make the blind see, but for many people with low vision it eases everyday activities such as reading, watching TV and shopping. LVES was developed over almost a decade of cooperation between Stennis Space Center, the Wilmer Eye Institute of the Johns Hopkins Medical Institutions, the Department of Veterans Affairs, and Visionics Corporation. With the aid of Stennis scientists, Wilmer researchers used NASA technology for computer processing of satellite images and head-mounted vision enhancement systems originally intended for the space station. The unit consists of a head-mounted video display, three video cameras, and a control unit for the cameras. The cameras feed images to the video display in the headset.

  20. Optical Indoor Positioning System Based on TFT Technology.

    PubMed

    Gőzse, István

    2015-12-24

    A novel indoor positioning system is presented in the paper. Similarly to camera-based solutions, it is based on visual detection, but it conceptually differs from the classical approaches. First, the objects are marked by LEDs, and second, a special sensing unit is applied, instead of a camera, to track the motion of the markers. This sensing unit realizes a modified pinhole camera model, in which the light-sensing area is fixed and consists of a small number of sensing elements (photodiodes), and it is the hole that can be moved. The markers are tracked by controlling the motion of the hole, such that the light of the LEDs always hits the photodiodes. The proposed concept has several advantages: apart from its low computational demands, it is insensitive to disturbing ambient light. Moreover, as every component of the system can be realized by simple and inexpensive elements, the overall cost of the system can be kept low.

  1. The guidance methodology of a new automatic guided laser theodolite system

    NASA Astrophysics Data System (ADS)

    Zhang, Zili; Zhu, Jigui; Zhou, Hu; Ye, Shenghua

    2008-12-01

    Spatial coordinate measurement systems such as theodolites, laser trackers and total stations have wide application in manufacturing and certification processes. The traditional operation of theodolites is manual and time-consuming, which does not meet the needs of online industrial measurement; laser trackers and total stations, meanwhile, require reflective targets and so cannot realize noncontact, automatic measurement. A new automatic guided laser theodolite system is presented to achieve automatic, noncontact measurement with high precision and efficiency. It is comprised of two sub-systems: the basic measurement system and the control and guidance system. The former is formed by two laser motorized theodolites that accomplish the fundamental measurement tasks, while the latter consists of a camera and vision system unit mounted on a mechanical displacement unit to provide azimuth information for the measured points. The mechanical displacement unit can rotate horizontally and vertically to direct the camera to the desired orientation, so that the camera can scan every measured point in the measuring field; the azimuth of the corresponding point is then calculated so that the laser motorized theodolites can move accordingly to aim at it. In this paper the whole system composition and measuring principle are analyzed, and the emphasis is then laid on the guidance methodology by which the laser points from the theodolites move toward the measured points. The guidance process is implemented based on the coordinate transformation between the basic measurement system and the control and guidance system. With the view field angle of the vision system unit and the world coordinates of the control and guidance system obtained through coordinate transformation, the azimuth information of the measurement area that the camera points at can be attained.
The momentary horizontal and vertical changes of the mechanical displacement movement are also considered and calculated to provide real-time azimuth information of the pointed measurement area, by which the motorized theodolite moves accordingly. This methodology realizes the predetermined location of the laser points within the camera-pointed scope, so that it accelerates the measuring process and replaces manual operations with approximate guidance. The simulation results show that the proposed method of automatic guidance is effective and feasible, providing good tracking performance for the predetermined location of laser points.
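
    The azimuth computation at the heart of this guidance step can be illustrated as follows. The axis convention (x east, y north, z up) is an assumption made here for illustration; the paper's coordinate frames are not reproduced in this record:

```python
import math

# Hypothetical sketch: azimuth/elevation of a measured point expressed in the
# control-and-guidance system's frame. Axis convention (x east, y north, z up)
# is an assumption; the abstract does not specify it.
def point_to_azimuth_elevation(x, y, z):
    """Return (azimuth, elevation) in degrees for a point in the local frame."""
    azimuth = math.degrees(math.atan2(x, y)) % 360.0      # clockwise from +y
    elevation = math.degrees(math.atan2(z, math.hypot(x, y)))
    return azimuth, elevation

# A point on the NE diagonal, elevated 45 degrees:
az, el = point_to_azimuth_elevation(1.0, 1.0, math.sqrt(2.0))
```

    Angles of this form are what the motorized theodolites would be commanded to, after transforming the point into each theodolite's own frame.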

  2. Airport Remote Tower Sensor Systems

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Gawdiak, Yuri; Leidichj, Christopher; Papasin, Richard; Tran, Peter B.; Bass, Kevin

    2006-01-01

    Networks of video cameras, meteorological sensors, and ancillary electronic equipment are under development in collaboration among NASA Ames Research Center, the Federal Aviation Administration (FAA), and the National Oceanic and Atmospheric Administration (NOAA). These networks are to be established at and near airports to provide real-time information on local weather conditions that affect aircraft approaches and landings. The prototype network is an airport-approach-zone camera system (AAZCS), which has been deployed at San Francisco International Airport (SFO) and San Carlos Airport (SQL). The AAZCS includes remotely controlled color video cameras located on top of the SFO and SQL air-traffic control towers. The cameras are controlled by the NOAA Center Weather Service Unit located at the Oakland Air Route Traffic Control Center and are accessible via a secure Web site. The AAZCS cameras can be zoomed, and can be panned and tilted to cover a field of view 220° wide. The NOAA observer can see the sky condition as it is changing, thereby making possible a real-time evaluation of the conditions along the approach zones of SFO and SQL. The next-generation network, denoted a remote tower sensor system (RTSS), will soon be deployed at the Half Moon Bay Airport, and a version of it will eventually be deployed at Los Angeles International Airport. In addition to remote control of video cameras via secure Web links, the RTSS offers real-time weather observations, remote sensing, portability, and a capability for deployment at remote and uninhabited sites. The RTSS can be used at airports that lack control towers, as well as at major airport hubs, to provide synthetic augmentation of vision for both local and remote operations under what would otherwise be conditions of low or even zero visibility.

  3. 15 CFR 740.16 - Additional permissive reexports (APR).

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... supplement No. 1 to part 740), other than North Korea and the commodity being reexported is controlled for... Africa, South Korea, Spain, Sweden, Switzerland, Turkey, and the United Kingdom if: (i) Such cameras are...

  4. 15 CFR 740.16 - Additional permissive reexports (APR).

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Supplement No. 1 to part 740), other than North Korea and the commodity being reexported is controlled for... Africa, South Korea, Spain, Sweden, Switzerland, Turkey, and the United Kingdom if: (i) Such cameras are...

  5. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    USGS Publications Warehouse

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 
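
    The data-management role of such scripts can be sketched as follows. The function names, station identifier, and file-naming scheme are illustrative assumptions, not the USGS code:

```python
from datetime import datetime, timezone

# Illustrative sketch of the data-management side of such a camera system:
# build a time-stamped image filename and route the frame according to the
# deployment mode. All names here are assumptions, not the actual scripts.
def image_filename(station, when):
    """A UTC timestamp in the name gives unambiguous ordering of frames."""
    return "{}_{}.jpg".format(station, when.strftime("%Y%m%d_%H%M%S"))

def handle_frame(mode, path):
    """Webcam mode transmits immediately; time-lapse mode stores locally."""
    if mode == "webcam":
        return ("transmit", path)
    return ("store", path)

name = image_filename("KIcam", datetime(2015, 3, 9, 12, 30, 5, tzinfo=timezone.utc))
# name == "KIcam_20150309_123005.jpg"
```

    Deriving the timestamp from the GPS-disciplined clock, as the record describes, is what keeps filenames accurate even when the network link is down.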

  6. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high-resolution, high-frame-rate InGaAs-based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640 × 512-pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfer the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link™ interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
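
    A back-of-envelope check shows why the large on-board buffer matters at these rates. Two bytes per stored pixel is an assumption made here for illustration; the abstract does not state the storage format:

```python
# Sustained data rate and buffer duration for full-frame operation.
# 2 bytes/pixel is an assumption; the record does not give the storage format.
width, height, fps = 640, 512, 1700
bytes_per_pixel = 2

rate_bytes = width * height * fps * bytes_per_pixel   # ~1.11 GB/s sustained
buffer_bytes = 16 * 10**9                             # 16 GB on-board memory
seconds = buffer_bytes / rate_bytes                   # ~14 s of full-rate recording
```

    At over a gigabyte per second, Gigabit Ethernet cannot keep up in real time, which is why the camera buffers bursts locally and offloads them afterward.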

  7. Mathematical Basis of Knowledge Discovery and Autonomous Intelligent Architectures - Technology for the Creation of Virtual objects in the Real World

    DTIC Science & Technology

    2005-12-14

    control of position/orientation of mobile TV cameras. ... Unit 9: force interaction system. Unit 6: helmet-mounted displays ... robot-like device drive ... joints of the master arm (see Unit 1), whose joint coordinates are tracked by the virtual manipulator. Unit 6: two displays built into the helmet ... special device for simulating the tactile-kinaesthetic effect of immersion. When the virtual body is a manipulator it comprises: − master arm with 6

  8. A Comparative Study of Microscopic Images Captured by a Box Type Digital Camera Versus a Standard Microscopic Photography Camera Unit

    PubMed Central

    Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai

    2014-01-01

    Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU) (MIPS, Microscopic Image Projection System) is costly and not available in resource-poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (a box-type digital camera) with an Olympus CH20i microscope and a fluorescent microscope for the purpose of this study. Results: We got comparable results for capturing images of light microscopy, but the results were not as satisfactory for fluorescent microscopy. Conclusion: A box-type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350

  9. Automated batch characterization of inkjet-printed elastomer lenses using a LEGO platform.

    PubMed

    Sung, Yu-Lung; Garan, Jacob; Nguyen, Hoang; Hu, Zhenyu; Shih, Wei-Chuan

    2017-09-10

    Small, self-adhesive, inkjet-printed elastomer lenses have enabled smartphone cameras to image and resolve microscopic objects. However, the performance of different lenses within a batch is affected by hard-to-control environmental variables. We present a cost-effective platform to perform automated batch characterization of 300 lens units simultaneously for quality inspection. The system was designed and configured with LEGO bricks, 3D printed parts, and a digital camera. The scheme presented here may become the basis of a high-throughput, in-line inspection tool for quality control purposes and can also be employed for optimization of the manufacturing process.

  10. KSC-01pp1802

    NASA Image and Video Library

    2001-12-01

    KENNEDY SPACE CENTER, Fla. - STS-109 Mission Specialist Richard Linnehan (left) and Payload Commander John Grunsfeld get a feel for tools and equipment that will be used on the mission. The crew is at KSC to take part in Crew Equipment Interface Test activities that include familiarization with the orbiter and equipment. The goal of the mission is to service the HST, replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the Advanced Camera for Surveys, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002.

  11. Design of video interface conversion system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables the inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information-interaction control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by the video processing unit through a Camera Link interface. The processed video signals are then fed to the system output board and displayed on the monitor. The current experiment shows that the system achieves high-quality video conversion with a minimal board size.
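
    One stage of such a pipeline is color-space conversion. Below is a software illustration of the standard BT.601 full-range YCbCr-to-RGB equations; the paper's exact coefficients and fixed-point representation are not given in this record, so this is a reference sketch only:

```python
# BT.601 full-range YCbCr -> RGB conversion, floating point for clarity.
# An FPGA implementation would use fixed-point multipliers instead.
def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))   # keep results in 8-bit range
    return clamp(r), clamp(g), clamp(b)

# Neutral gray stays gray: zero chroma offsets leave all channels at luma.
assert ycbcr_to_rgb(128, 128, 128) == (128, 128, 128)
```

    In hardware this is typically realized as three multiply-accumulate chains with clamping, one per output channel.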

  12. Optical Indoor Positioning System Based on TFT Technology

    PubMed Central

    Gőzse, István

    2015-01-01

    A novel indoor positioning system is presented in the paper. Similarly to camera-based solutions, it is based on visual detection, but it conceptually differs from the classical approaches. First, the objects are marked by LEDs, and second, a special sensing unit is applied, instead of a camera, to track the motion of the markers. This sensing unit realizes a modified pinhole camera model, in which the light-sensing area is fixed and consists of a small number of sensing elements (photodiodes), and it is the hole that can be moved. The markers are tracked by controlling the motion of the hole, such that the light of the LEDs always hits the photodiodes. The proposed concept has several advantages: apart from its low computational demands, it is insensitive to disturbing ambient light. Moreover, as every component of the system can be realized by simple and inexpensive elements, the overall cost of the system can be kept low. PMID:26712753

  13. Explosive Transient Camera (ETC) Program

    NASA Technical Reports Server (NTRS)

    Ricker, George

    1991-01-01

    Since the inception of the ETC program, a wide range of new technologies was developed to support this astronomical instrument. The prototype unit was installed at ETC Site 1. The first partially automated observations were made and some major renovations were later added to the ETC hardware. The ETC was outfitted with new thermoelectrically-cooled CCD cameras and a sophisticated vacuum manifold, which, together, made the ETC a much more reliable unit than the prototype. The ETC instrumentation and building were placed under full computer control, allowing the ETC to operate as an automated, autonomous instrument with virtually no human intervention necessary. The first fully-automated operation of the ETC was performed, during which the ETC monitored the error region of the repeating soft gamma-ray burster SGR 1806-21.

  14. BDPU, Favier places new test chamber into experiment module in LMS-1 Spacelab

    NASA Image and Video Library

    1996-07-09

    STS078-301-021 (20 June - 7 July 1996) --- Payload specialist Jean-Jacques Favier, representing the French Space Agency (CNES), holds up a test container to a Spacelab camera. The test involves the Bubble Drop Particle Unit (BDPU), which Favier is showing to ground controllers at the Marshall Space Flight Center (MSFC) in order to check the condition of the unit prior to heating in the BDPU facility. The test container holds experimental fluid and allows experiment observation through optical windows. BDPU contains three internal cameras that are used to continuously downlink BDPU activity so that behavior of the bubbles can be monitored. Astronaut Richard M. Linnehan, mission specialist, conducts biomedical testing in the background.

  15. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
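
    The quoted repeat interval of slightly more than 136 years matches what a 32-bit seconds counter provides. Whether Geo-TimeCode is actually implemented this way is an assumption made here only to show where the figure comes from:

```python
# A 32-bit counter of seconds rolls over after about 136 years, matching the
# repeat interval quoted for the time code. (Assumed structure, for illustration.)
SECONDS_PER_YEAR = 365.25 * 24 * 3600     # Julian year, in seconds
span_years = 2**32 / SECONDS_PER_YEAR     # ~136.1 years
```

    By contrast, the conventional SMPTE time code wraps every 24 hours, which is why it is unsuitable for the long-term multi-camera deployments described above.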

  16. Space imaging infrared optical guidance for autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Akiyama, Akira; Kobayashi, Nobuaki; Mutoh, Eiichiro; Kumagai, Hideo; Yamada, Hirofumi; Ishii, Hiromitsu

    2008-08-01

    We have developed the Space Imaging Infrared Optical Guidance for Autonomous Ground Vehicle, based on an uncooled infrared camera and a focusing technique, to detect objects to be evaded and to set the drive path. For this purpose we made a servomotor drive system to control the focus function of the infrared camera lens. To determine the best focus position we use autofocus image processing based on the 4-tap Daubechies wavelet transform. The determined best focus position is then converted to the distance of the object. We built an aluminum-frame ground vehicle, 900 mm long and 800 mm wide, to mount the autofocus infrared unit. The vehicle mounts an Ackermann front steering system and a rear motor drive system. To confirm the guidance ability of the Space Imaging Infrared Optical Guidance for Autonomous Ground Vehicle, we conducted experiments on the detection by the infrared autofocus unit of an actual car on the road and of the roadside wall. As a result, the autofocus image processing based on the 4-tap Daubechies wavelet transform detects the best focus image clearly and gives the depth of the object from the infrared camera unit.
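
    A focus metric of the kind described, built from the detail (high-pass) energy of the 4-tap Daubechies wavelet, can be sketched as follows. The exact metric and normalization used by the authors are assumptions here:

```python
import math

# Focus-measure sketch using the 4-tap Daubechies filter the paper cites.
# High-pass (detail) energy rises with image sharpness; a servo can scan the
# lens and pick the position that maximizes it. Details are assumptions.
S3 = math.sqrt(3.0)
H = [(1 + S3) / (4 * math.sqrt(2)), (3 + S3) / (4 * math.sqrt(2)),
     (3 - S3) / (4 * math.sqrt(2)), (1 - S3) / (4 * math.sqrt(2))]
G = [H[3], -H[2], H[1], -H[0]]          # quadrature-mirror high-pass taps

def focus_measure(row):
    """Sum of squared high-pass responses along one image row."""
    return sum(sum(g * row[i + k] for k, g in enumerate(G)) ** 2
               for i in range(len(row) - 3))

sharp = [0, 0, 255, 255, 0, 0, 255, 255]          # strong edges: high energy
blurry = [96, 112, 128, 144, 128, 112, 96, 112]   # gentle ramps: low energy
assert focus_measure(sharp) > focus_measure(blurry)
```

    Because this 4-tap filter has two vanishing moments, smooth (locally linear) defocused regions produce near-zero detail energy, making the sharp/blurred contrast pronounced.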

  17. Quadrotor helicopter for surface hydrological measurements

    NASA Astrophysics Data System (ADS)

    Pagano, C.; Tauro, F.; Porfiri, M.; Grimaldi, S.

    2013-12-01

    Surface hydrological measurements are typically performed through user-assisted and intrusive field methodologies, which can be inadequate to monitor remote and extended areas. In this poster, we present the design and development of a quadrotor helicopter equipped with a digital acquisition system and image calibration units for surface flow measurements. This custom-built aerial vehicle is engineered to be lightweight, low-cost, highly customizable, and stable to guarantee optimal image quality. Quadricopter stability ensures minimal vibrations during image acquisition and, therefore, improved accuracy in flow velocity estimation through large-scale particle image velocimetry algorithms or particle tracking procedures. Stability in pitch and roll is achieved by adopting a large arm span and a high-wing configuration. Further, the vehicle framework is composed of lightweight aluminum and durable carbon fiber for optimal resilience. The open source Ardupilot microcontroller is used for remote control of the quadricopter. The microcontroller includes an inertial measurement unit (IMU) equipped with accelerometers and gyroscopes for stable flight through feedback control. The vehicle is powered by a 3-cell (11.1 V) 3000 mAh lithium-polymer battery. Electronic equipment and wiring are hosted in the hollow arms and on several carbon fiber platforms in the waterproof fuselage. Four 35 A high-torque motors are mounted at the far end of each arm with 10 × 4.7 inch propellers. Energy dissipation during landing is accomplished by four pivoting legs that, through the use of shock absorbers, prevent the impact energy from damaging the frame. The data capturing system consists of a GoPro Hero3 camera with an in-house built camera gimbal and shock-absorbing damping device. The camera gimbal, hosted below the vehicle fuselage, is engineered to maintain the orthogonality of the camera axis with respect to the water surface by compensating for changes in pitch and roll during flight. The constant orthogonality of the camera leads to minimal image distortions and, therefore, reduced post-processing for picture dewarping. The gimbal is based on a system of two closed-loop DC motors. The motors are controlled through an open source Martinez V3 brushless controller board and an MPU6050 IMU. The IMU is placed on the back of the camera to read the change in orientation during flight. To avoid the physical acquisition of ground reference points for image rectification, low-power red lasers facing the water surface are placed on each of the quadricopter arms at known distances. The pixel distance between the laser dots in the images is then automatically converted to metric units. Experimental results from outdoor testing on water bodies are reported to demonstrate the feasibility of surface water monitoring through this mobile imaging platform.
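
    The laser-based rectification above reduces to a simple scale computation: two laser dots with a known metric separation appear in the image, and their pixel separation yields a metres-per-pixel factor. A minimal sketch, with illustrative names and values:

```python
import numpy as np

def metric_scale(dot_a_px, dot_b_px, laser_spacing_m: float) -> float:
    """Metres per pixel from two laser-dot image coordinates (x, y) and the
    known metric distance between the lasers on the water surface."""
    pixel_dist = np.hypot(dot_a_px[0] - dot_b_px[0], dot_a_px[1] - dot_b_px[1])
    return laser_spacing_m / pixel_dist

# Dots 500 px apart in the image that are 1.0 m apart on the surface give
# a scale of 0.002 m/px, which converts PIV pixel displacements to metres.
scale = metric_scale((100, 200), (600, 200), 1.0)
```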

  18. Design of microcontroller based system for automation of streak camera.

    PubMed

    Joshi, M J; Upadhyay, J; Deshpande, P P; Sharma, M L; Navathe, C P

    2010-08-01

    A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8-bit MCS-family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for the various electrodes of the tubes are generated using dc-to-dc converters. A high-voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LabVIEW-based graphical user interface enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  19. Design of microcontroller based system for automation of streak camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, M. J.; Upadhyay, J.; Deshpande, P. P.

    2010-08-15

    A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8-bit MCS-family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for the various electrodes of the tubes are generated using dc-to-dc converters. A high-voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LabVIEW-based graphical user interface enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  20. Graphic Arts: Book Two. Process Camera, Stripping, and Platemaking.

    ERIC Educational Resources Information Center

    Farajollahi, Karim; And Others

    The second of a three-volume set of instructional materials for a course in graphic arts, this manual consists of 10 instructional units dealing with the process camera, stripping, and platemaking. Covered in the individual units are the process camera and darkroom photography, line photography, half-tone photography, other darkroom techniques,…

  1. Brown at aft controls during PAMS STU deploy

    NASA Image and Video Library

    1996-05-22

    S77-E-5066 (22 May 1996) --- Astronaut Curtis L. Brown, Jr., pilot, is seen on the starboard side of the Space Shuttle Endeavour's aft flight deck just prior to the deployment of the Satellite Test Unit (STU), part of the Passive Aerodynamically Stabilized Magnetically Damped Satellite (PAMS). Brown's image was captured with an Electronic Still Camera (ESC). Minutes later the camera was being used to document the deployment of PAMS-STU. The six-member crew will continue operations (tracking, rendezvousing and station-keeping) with PAMS-STU periodically throughout the remainder of the mission.

  2. Multispectral imaging system for contaminant detection

    NASA Technical Reports Server (NTRS)

    Poole, Gavin H. (Inventor)

    2003-01-01

    An automated inspection system for detecting digestive contaminants on food items as they are being processed for consumption includes a conveyor for transporting the food items, a light sealed enclosure which surrounds a portion of the conveyor, with a light source and a multispectral or hyperspectral digital imaging camera disposed within the enclosure. Operation of the conveyor, light source and camera are controlled by a central computer unit. Light reflected by the food items within the enclosure is detected in predetermined wavelength bands, and detected intensity values are analyzed to detect the presence of digestive contamination.
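
    The patent's detection step compares reflected intensity in predetermined wavelength bands. As a hedged sketch of that idea (the band choice and threshold below are illustrative assumptions, not values from the patent), a per-pixel band ratio can flag likely contamination:

```python
import numpy as np

def flag_contaminants(band_a: np.ndarray, band_b: np.ndarray,
                      ratio_thresh: float = 1.5) -> np.ndarray:
    """Boolean mask of pixels whose band-A/band-B intensity ratio exceeds a
    threshold; contaminated tissue reflects differently in the two bands."""
    ratio = band_a.astype(float) / np.clip(band_b.astype(float), 1e-6, None)
    return ratio > ratio_thresh
```

    In the patent's workflow the central computer would run such a rule on each conveyor frame and divert flagged items for re-inspection.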

  3. KSC-01pp1760

    NASA Image and Video Library

    2001-11-29

    KENNEDY SPACE CENTER, Fla. -- Fully unwrapped, the Advanced Camera for Surveys, which is suspended by an overhead crane, is checked over by workers. Part of the payload on the Hubble Space Telescope Servicing Mission, STS-109, the ACS will increase the discovery efficiency of the HST by a factor of ten. It consists of three electronic cameras and a complement of filters and dispersers that detect light from the ultraviolet to the near infrared (1200 - 10,000 angstroms). The ACS was built through a collaborative effort between Johns Hopkins University, Goddard Space Flight Center, Ball Aerospace Corporation and Space Telescope Science Institute. Tasks for the mission include replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the ACS, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002

  4. KSC-04PD-1812

    NASA Technical Reports Server (NTRS)

    2004-01-01

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, United Space Alliance worker Craig Meyer fits an External Tank (ET) digital still camera in the right-hand liquid oxygen umbilical well on Space Shuttle Atlantis. NASA is pursuing use of the camera, beginning with the Shuttle's Return To Flight, to obtain and downlink high-resolution images of the ET following separation of the ET from the orbiter after launch. The Kodak camera will record 24 images, at one frame per 1.5 seconds, on a flash memory card. After orbital insertion, the crew will transfer the images from the memory card to a laptop computer. The files will then be downloaded through the Ku-band system to the Mission Control Center in Houston for analysis.

  5. KSC-04pd1812

    NASA Image and Video Library

    2004-09-17

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, United Space Alliance worker Craig Meyer fits an External Tank (ET) digital still camera in the right-hand liquid oxygen umbilical well on Space Shuttle Atlantis. NASA is pursuing use of the camera, beginning with the Shuttle’s Return To Flight, to obtain and downlink high-resolution images of the ET following separation of the ET from the orbiter after launch. The Kodak camera will record 24 images, at one frame per 1.5 seconds, on a flash memory card. After orbital insertion, the crew will transfer the images from the memory card to a laptop computer. The files will then be downloaded through the Ku-band system to the Mission Control Center in Houston for analysis.

  6. SeaVipers- Computer Vision and Inertial Position/Reference Sensor System (CVIPRSS)

    DTIC Science & Technology

    2015-08-01

    uses an Inertial Measurement Unit (IMU) to detect changes in roll, pitch, and yaw (x-, y-, and z-axis movement). We use a 9DOF Razor IMU from SparkFun... inertial measurement unit (IMU) and cameras that are hardware synchronized to provide close coupling. Several fast food companies, Internet giants like... light cameras [32]. 4.1.4 Inertial Measurement Unit: To assist the PTU in video stabilization for the camera and aiming the rangefinder, SeaVipers

  7. Mechatronic design of a fully integrated camera for mini-invasive surgery.

    PubMed

    Zazzarini, C C; Patete, P; Baroni, G; Cerveri, P

    2013-06-01

    This paper describes the design features of an innovative fully integrated camera, a candidate for mini-invasive abdominal surgery with single-port or transluminal access. The apparatus includes a CMOS imaging sensor, a light-emitting diode (LED)-based unit for scene illumination, a photodiode for luminance detection, an optical system designed according to the mechanical compensation paradigm, an actuation unit enabling autofocus and optical zoom, and control logic based on a microcontroller. The bulk of the apparatus is characterized by a tubular shape with a diameter of 10 mm and a length of 35 mm. The optical system, composed of four lens groups, of which two are mobile, has a total length of 13.46 mm and an effective focal length ranging from 1.61 to 4.44 mm with a zoom factor of 2.75×, with a corresponding angular field of view ranging from 16° to 40°. The mechatronic unit, devoted to moving the zoom and focus lens groups, is implemented with miniature piezoelectric motors. The control logic implements a closed-loop mechanism between the LEDs and the photodiode to attain automatic light control. Bottlenecks of the design and some potential issues of the realization are discussed, and a potential clinical scenario is introduced.
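
    The LED-photodiode loop described above is a classic closed-loop illumination controller: the photodiode reading is compared to a luminance setpoint and the LED drive level is nudged toward it. The sketch below uses simple proportional control; the gain, ranges, and function names are illustrative assumptions, not taken from the paper.

```python
def led_control_step(setpoint: float, photodiode: float, drive: float,
                     gain: float = 0.1, drive_max: float = 1.0) -> float:
    """One proportional-control update of the normalized LED drive level.

    setpoint   -- target luminance reported by the photodiode
    photodiode -- current photodiode reading
    drive      -- current LED drive level, clamped to [0, drive_max]
    """
    error = setpoint - photodiode
    return min(drive_max, max(0.0, drive + gain * error))
```

    Run once per frame, the loop dims the LEDs when the scene saturates and brightens them when the photodiode reads below the setpoint.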

  8. Innovation in robotic surgery: the Indian scenario.

    PubMed

    Deshpande, Suresh V

    2015-01-01

    Robotics is a science: in scientific terms, a "robot" is an electromechanical arm device with a computer interface, a combination of electrical, mechanical, and computer engineering. It is a mechanical arm that performs tasks in industry, space exploration, and science. One such idea was to make an automated arm - a robot - in laparoscopy to control the telescope-camera unit electromechanically, and then, with a computer interface, by voice control. It took us 5 long years from 2004 to bring it to the level of obtaining a patent. That was the birth of the Swarup Robotic Arm (SWARM), which is the first and only Indian contribution in the field of robotics in laparoscopy: a fully voice-controlled camera-holding robotic arm developed without any support from industry or research institutes.

  9. EVA 3 - Linnehan and Grunsfeld install new PCU

    NASA Image and Video Library

    2002-03-06

    STS109-E-5660 (6 March 2002) --- Astronauts John M. Grunsfeld (top) and Richard M. Linnehan participate in a 6 hour, 48 minute space walk designed to install a new Power Control Unit (PCU) on the Hubble Space Telescope (HST). The two went on to replace the original unit launched with the telescope in April 1990. Grunsfeld is on the end of Columbia's Remote Manipulator System (RMS) robotic arm, controlled from inside the crew cabin by astronaut Nancy J. Currie. The image was recorded with a digital still camera.

  10. Mechanism controller system for the optical spectroscopic and infrared remote imaging system instrument on board the Rosetta space mission

    NASA Astrophysics Data System (ADS)

    Castro Marín, J. M.; Brown, V. J. G.; López Jiménez, A. C.; Rodríguez Gómez, J.; Rodrigo, R.

    2001-05-01

    The optical, spectroscopic, and infrared remote imaging system (OSIRIS) is an instrument carried on board the European Space Agency spacecraft Rosetta, to be launched in January 2003 to study comet Wirtanen in situ. The electronic design of the mechanism controller board (MCB) system for the two OSIRIS optical cameras, the narrow-angle camera and the wide-angle camera, is described here. The system comprises two boards mounted on an aluminum frame as part of an electronics box that contains the power supply and the digital processor unit of the instrument. The mechanisms controlled by the MCB for each camera are the front door assembly and a filter wheel assembly. The front door assembly for each camera is driven by a four-phase permanent-magnet stepper motor. Each filter wheel assembly consists of two eight-filter wheels, each driven by a four-phase variable-reluctance stepper motor. Each motor, in all the assemblies, also contains a redundant set of four stator phase windings that can be energized separately or in parallel with the main windings. All stepper motors are driven in both directions using the full-step unipolar mode of operation. The MCB also performs general housekeeping data acquisition for the OSIRIS instrument, i.e., mechanism position encoders and temperature measurements. The electronic design is novel in its use of field-programmable gate array devices, which avoid the now-traditional approach of control by microcontrollers and software. Electrical tests of the engineering model have been performed successfully and the system is ready for space qualification after environmental testing. This system may be of interest to institutions involved in future space experiments with similar mechanism-control needs.
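
    In the full-step unipolar mode mentioned above, the four stator phases are energized one at a time in a fixed sequence, and reversing the sequence reverses the motor. A minimal sketch of that commutation logic (phase naming and interface are illustrative, not the MCB's FPGA design):

```python
# One-phase-on full-step sequence for a four-phase unipolar stepper.
FULL_STEP_SEQUENCE = [
    (1, 0, 0, 0),  # phase A energized
    (0, 1, 0, 0),  # phase B
    (0, 0, 1, 0),  # phase C
    (0, 0, 0, 1),  # phase D
]

def phase_pattern(step: int, direction: int = 1):
    """(A, B, C, D) energize pattern after `step` steps; direction=-1 reverses."""
    return FULL_STEP_SEQUENCE[(direction * step) % 4]
```

    In hardware this table is exactly the kind of small state machine that maps naturally onto an FPGA, which is the approach the MCB takes instead of a microcontroller.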

  11. Geometric Calibration of Full Spherical Panoramic Ricoh-Theta Camera

    NASA Astrophysics Data System (ADS)

    Aghayari, S.; Saadatseresht, M.; Omidalizarandi, M.; Neumann, I.

    2017-05-01

    A novel calibration process is proposed for the RICOH THETA full-view fisheye camera, which has numerous applications as a low-cost sensor in disciplines such as photogrammetry, robotics, and machine vision. Ricoh developed this camera, which consists of two lenses and captures the whole surrounding environment in one shot, in 2014. In this research, each lens is calibrated separately and the interior and relative orientation parameters (IOPs and ROPs) of the camera are determined on the basis of a calibration network designed on the central and side images captured by the two lenses. The designed calibration network is treated as a free distortion grid and applied to the measured control points in image space as correction terms by means of bilinear interpolation. After these corrections, image coordinates are transformed to the unit sphere, an intermediate space between object space and image space, in the form of spherical coordinates. Afterwards, the IOPs and EOPs of each lens are determined separately through a statistical bundle adjustment procedure based on collinearity condition equations, and the ROPs of the two lenses are computed from the two sets of EOPs. Our experiments show that by applying a 3 × 3 free distortion grid, image measurement residuals diminish from 1.5 to 0.25 degrees on the unit sphere.
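
    The image-to-unit-sphere step can be sketched with an equidistant fisheye model, in which the radial distance from the principal point is proportional to the incidence angle. This is a common fisheye assumption for illustration only; the paper estimates the actual camera model in its bundle adjustment.

```python
import numpy as np

def to_unit_sphere(x_px, y_px, cx, cy, focal_px):
    """Map an image point to spherical coordinates (theta, phi) on the unit
    sphere, assuming an equidistant fisheye projection r = f * theta.

    cx, cy   -- principal point (pixels), illustrative values
    focal_px -- focal length in pixels, illustrative value
    """
    dx, dy = x_px - cx, y_px - cy
    r = np.hypot(dx, dy)
    theta = r / focal_px          # polar angle from the optical axis (radians)
    phi = np.arctan2(dy, dx)      # azimuth around the optical axis
    return theta, phi
```

    Points expressed this way from both lenses share a common intermediate space, which is what makes the relative orientation between the two lenses computable.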

  12. Crime Control Strategies in School: Chicanas'/os' Perceptions and Criminalization

    ERIC Educational Resources Information Center

    Portillos, Edwardo L.; Gonzalez, Juan Carlos; Peguero, Anthony A.

    2012-01-01

    High schools throughout the United States experience problems with violence, drugs, and crime. School administrators have responded with policies and strategies designed to prevent school violence such as zero tolerance approaches, partnerships with law enforcement agencies, security camera installations, and hiring additional security personnel…

  13. QuadCam - A Quadruple Polarimetric Camera for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Skuljan, J.

    A specialised quadruple polarimetric camera for space situational awareness, QuadCam, has been built at the Defence Technology Agency (DTA), New Zealand, as part of collaboration with the Defence Science and Technology Laboratory (Dstl), United Kingdom. The design was based on a similar system originally developed at Dstl, with some significant modifications for improved performance. The system is made up of four identical CCD cameras looking in the same direction, but in different planes of polarisation at 0, 45, 90 and 135 degrees with respect to the reference plane. A standard set of Stokes parameters can be derived from the four images in order to describe the state of polarisation of an object captured in the field of view. The modified design of the DTA QuadCam makes use of four small Raspberry Pi computers, so that each camera is controlled by its own computer in order to speed up the readout process and ensure that the four individual frames are taken simultaneously (to within 100-200 microseconds). In addition, new firmware was requested from the camera manufacturer so that an output signal is generated to indicate the state of the camera shutter. A specialised GPS unit (also developed at DTA) is then used to monitor the shutter signals from the four cameras and record the actual time of exposure to an accuracy of about 100 microseconds. This makes the system well suited for the observation of fast-moving objects in low Earth orbit (LEO). The QuadCam is currently mounted on a Paramount MEII robotic telescope mount at the newly built DTA space situational awareness observatory located on Whangaparaoa Peninsula near Auckland, New Zealand. The system will be used for tracking satellites in low Earth orbit as well as in the geostationary belt. The performance of the camera has been evaluated and a series of test images has been collected in order to derive polarimetric signatures for selected satellites.
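
    The reduction from the four polarization channels to Stokes parameters is the textbook one: S0 is total intensity, S1 contrasts 0 and 90 degrees, and S2 contrasts 45 and 135 degrees. The sketch below shows that standard reduction, not DTA's actual processing pipeline.

```python
import numpy as np

def stokes(i0, i45, i90, i135):
    """Linear Stokes parameters, degree and angle of linear polarization
    from the four channel intensities (scalars or same-shape arrays)."""
    i0, i45, i90, i135 = (np.asarray(a, float) for a in (i0, i45, i90, i135))
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical component
    s2 = i45 - i135                      # +45 vs 135 degree component
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.where(s0 == 0, 1, s0)
    aop = 0.5 * np.arctan2(s2, s1)       # angle of polarization (radians)
    return s0, s1, s2, dolp, aop
```

    Per-pixel maps of the degree and angle of linear polarization are what form a satellite's polarimetric signature.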

  14. A navigation and control system for an autonomous rescue vehicle in the space station environment

    NASA Technical Reports Server (NTRS)

    Merkel, Lawrence

    1991-01-01

    A navigation and control system was designed and implemented for an orbital autonomous rescue vehicle envisioned to retrieve astronauts or equipment should they become disengaged from the space station. The rescue vehicle, termed the Extra-Vehicular Activity Retriever (EVAR), has an on-board inertial measurement unit and GPS receivers for self-state estimation, a laser range imager (LRI) and cameras for object state estimation, and a data link for reception of space station state information. The states of the retriever and of objects (obstacles and the target object) are estimated by inertial state propagation corrected via measurements from the GPS, the LRI system, or the camera system. Kalman filters are utilized to perform sensor fusion and estimate the state propagation errors. Control actuation is performed by a Manned Maneuvering Unit (MMU). Phase-plane control techniques are used to control the rotational and translational state of the retriever. The translational controller provides station-keeping or motion along either Clohessy-Wiltshire trajectories or straight-line trajectories in the LVLH frame of any sufficiently observed object or of the space station. The software was used to successfully control a prototype EVAR on an air-bearing floor facility and a simulated EVAR operating in a simulated orbital environment. The designs of the navigation and control systems are presented, along with the hardware systems and the overall software architecture.
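
    The Clohessy-Wiltshire trajectories mentioned above come from the linearized equations of relative motion about a circular orbit: in the LVLH frame, x'' = 3n²x + 2n y', y'' = -2n x', z'' = -n²z, where n is the orbital mean motion. A minimal sketch of propagating these dynamics (simple fixed-step Euler for illustration; this is not the EVAR flight code):

```python
import numpy as np

def cw_derivative(state, n):
    """Time derivative of the LVLH state [x, y, z, vx, vy, vz] under the
    Clohessy-Wiltshire equations with mean motion n (rad/s)."""
    x, y, z, vx, vy, vz = state
    ax = 3 * n ** 2 * x + 2 * n * vy
    ay = -2 * n * vx
    az = -n ** 2 * z
    return np.array([vx, vy, vz, ax, ay, az])

def propagate(state, n, dt, steps):
    """Fixed-step Euler propagation of the relative state."""
    state = np.asarray(state, float)
    for _ in range(steps):
        state = state + dt * cw_derivative(state, n)
    return state
```

    A translational controller like EVAR's compares the propagated relative state to a target trajectory and commands MMU thrust when the phase-plane error exceeds a deadband.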

  15. Applied Meteorology Unit (AMU) Quarterly Report Fourth Quarter FY-04

    NASA Technical Reports Server (NTRS)

    Bauman, William; Wheeler, Mark; Lambert, Winifred; Case, Jonathan; Short, David

    2004-01-01

    This report summarizes the Applied Meteorology Unit (AMU) activities for the fourth quarter of Fiscal Year 2004 (July-September 2004). Tasks covered are: (1) Objective Lightning Probability Forecast: Phase I, (2) Severe Weather Forecast Decision Aid, (3) Hail Index, (4) Shuttle Ascent Camera Cloud Obstruction Forecast, (5) Advanced Regional Prediction System (ARPS) Optimization and Training Extension and (6) User Control Interface for ARPS Data Analysis System (ADAS) Data Ingest.

  16. KSC-01pp1730

    NASA Image and Video Library

    2001-11-27

    KENNEDY SPACE CENTER, Fla. -- In the Vertical Processing Facility, members of the STS-109 crew look over the Solar Array 3 panels that will be replacing Solar Array 2 panels on the Hubble Space Telescope (HST). Trainers, at left, point to the panels while Mission Specialist Nancy Currie (second from right) and Commander Scott Altman (far right) look on. Other crew members are Pilot Duane Carey, Payload Commander John Grunsfeld and Mission Specialists James Newman, Richard Linnehan and Michael Massimino. The other goals of the mission are replacing the Power Control Unit, removing the Faint Object Camera and installing the Advanced Camera for Surveys, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002

  17. KSC-04PD-1810

    NASA Technical Reports Server (NTRS)

    2004-01-01

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, from left, United Space Alliance workers Loyd Turner, Craig Meyer and Erik Visser prepare to conduct a fit check of an External Tank (ET) digital still camera in the right-hand liquid oxygen umbilical well on Space Shuttle Atlantis. NASA is pursuing use of the camera, beginning with the Shuttle's Return To Flight, to obtain and downlink high-resolution images of the ET following separation of the ET from the orbiter after launch. The Kodak camera will record 24 images, at one frame per 1.5 seconds, on a flash memory card. After orbital insertion, the crew will transfer the images from the memory card to a laptop computer. The files will then be downloaded through the Ku-band system to the Mission Control Center in Houston for analysis.

  18. KSC-04PD-1811

    NASA Technical Reports Server (NTRS)

    2004-01-01

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, from left, United Space Alliance workers Loyd Turner, Craig Meyer and Erik Visser conduct a fit check of an External Tank (ET) digital still camera in the right-hand liquid oxygen umbilical well on Space Shuttle Atlantis. NASA is pursuing use of the camera, beginning with the Shuttle's Return To Flight, to obtain and downlink high-resolution images of the ET following separation of the ET from the orbiter after launch. The Kodak camera will record 24 images, at one frame per 1.5 seconds, on a flash memory card. After orbital insertion, the crew will transfer the images from the memory card to a laptop computer. The files will then be downloaded through the Ku-band system to the Mission Control Center in Houston for analysis.

  19. KSC-04pd1811

    NASA Image and Video Library

    2004-09-17

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, from left, United Space Alliance workers Loyd Turner, Craig Meyer and Erik Visser conduct a fit check of an External Tank (ET) digital still camera in the right-hand liquid oxygen umbilical well on Space Shuttle Atlantis. NASA is pursuing use of the camera, beginning with the Shuttle’s Return To Flight, to obtain and downlink high-resolution images of the ET following separation of the ET from the orbiter after launch. The Kodak camera will record 24 images, at one frame per 1.5 seconds, on a flash memory card. After orbital insertion, the crew will transfer the images from the memory card to a laptop computer. The files will then be downloaded through the Ku-band system to the Mission Control Center in Houston for analysis.

  20. KSC-04pd1810

    NASA Image and Video Library

    2004-09-17

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, from left, United Space Alliance workers Loyd Turner, Craig Meyer and Erik Visser prepare to conduct a fit check of an External Tank (ET) digital still camera in the right-hand liquid oxygen umbilical well on Space Shuttle Atlantis. NASA is pursuing use of the camera, beginning with the Shuttle’s Return To Flight, to obtain and downlink high-resolution images of the ET following separation of the ET from the orbiter after launch. The Kodak camera will record 24 images, at one frame per 1.5 seconds, on a flash memory card. After orbital insertion, the crew will transfer the images from the memory card to a laptop computer. The files will then be downloaded through the Ku-band system to the Mission Control Center in Houston for analysis.

  1. Voss with Bonner Ball Neutron Detector Control Unit in Destiny laboratory

    NASA Image and Video Library

    2001-03-23

    ISS002-E-5714 (23 March 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, sets up the Bonner Ball Neutron Detector (BBND) in the Destiny laboratory. The BBND is connected to the Human Research Facility (HRF). This image was recorded with a digital still camera.

  2. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  3. Cities with camera-equipped taxicabs experience reduced taxicab driver homicide rates: United States, 1996-2010.

    PubMed

    Menéndez, Cammie Chaumont; Amandus, Harlan; Damadi, Parisa; Wu, Nan; Konda, Srinivas; Hendricks, Scott

    2014-05-01

    Driving a taxicab remains one of the most dangerous occupations in the United States, with among the highest homicide rates. Although safety equipment designed to reduce robberies exists, it is not clear what effect it has on reducing taxicab driver homicides. Taxicab driver homicide crime reports for 1996 through 2010 were collected from 20 of the largest cities (populations >200,000) in the United States: 7 cities with cameras installed in cabs, 6 cities with partitions installed, and 7 cities with neither cameras nor partitions. Poisson regression modeling using generalized estimating equations provided city-level taxicab driver homicide rates while accounting for serial correlation and clustering of data within cities. Two separate models were constructed to compare (1) cities with cameras installed in taxicabs versus cities with neither cameras nor partitions and (2) cities with partitions installed in taxicabs versus cities with neither cameras nor partitions. Cities with cameras installed in cabs experienced a significant reduction in homicides after cameras were installed (adjRR = 0.11, CL 0.06-0.24) and compared to cities with neither cameras nor partitions (adjRR = 0.32, CL 0.15-0.67). Cities with partitions installed in taxicabs experienced a reduction in homicides (adjRR = 0.78, CL 0.41-1.47) compared to cities with neither cameras nor partitions, but it was not statistically significant. The findings suggest that cameras installed in taxicabs are highly effective in reducing homicides among taxicab drivers; partitions may also be effective, although the estimate was not statistically significant.
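
    A rate ratio of the kind reported above (adjRR) compares homicide rates between exposure groups. As a back-of-envelope sketch of the unadjusted version with a log-scale 95% confidence interval (the study itself used Poisson regression with GEE; the counts below are made up, not the study's data):

```python
import numpy as np

def rate_ratio(events_a, persontime_a, events_b, persontime_b, z=1.96):
    """Unadjusted rate ratio of group A vs group B with a Wald CI on the
    log scale: CI = RR * exp(+/- z * sqrt(1/a + 1/b))."""
    rr = (events_a / persontime_a) / (events_b / persontime_b)
    se = np.sqrt(1 / events_a + 1 / events_b)  # SE of log(RR)
    lo, hi = rr * np.exp(-z * se), rr * np.exp(z * se)
    return rr, lo, hi

# e.g. 5 homicides per 100 city-years vs 20 per 100 city-years -> RR = 0.25
rr, lo, hi = rate_ratio(5, 100.0, 20, 100.0)
```

    The regression approach in the paper additionally adjusts for covariates and for correlation within cities, which a raw ratio like this cannot do.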

  4. KSC01pd1736

    NASA Image and Video Library

    2001-11-26

    KENNEDY SPACE CENTER, Fla. -- A piece of equipment for Hubble Space Telescope Servicing mission is moved inside Hangar AE, Cape Canaveral. In the canister is the Advanced Camera for Surveys (ACS). The ACS will increase the discovery efficiency of the HST by a factor of ten. It consists of three electronic cameras and a complement of filters and dispersers that detect light from the ultraviolet to the near infrared (1200 - 10,000 angstroms). The ACS was built through a collaborative effort between Johns Hopkins University, Goddard Space Flight Center, Ball Aerospace Corporation and Space Telescope Science Institute. The goal of the mission, STS-109, is to service the HST, replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the ACS, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002

  5. KSC-01pp1758

    NASA Image and Video Library

    2001-11-29

    KENNEDY SPACE CENTER, Fla. -- In Hangar A&E, workers watch as an overhead crane lifts the Advanced Camera for Surveys out of its transportation container. Part of the payload on the Hubble Space Telescope Servicing Mission, STS-109, the ACS will increase the discovery efficiency of the HST by a factor of ten. It consists of three electronic cameras and a complement of filters and dispersers that detect light from the ultraviolet to the near infrared (1200 - 10,000 angstroms). The ACS was built through a collaborative effort between Johns Hopkins University, Goddard Space Flight Center, Ball Aerospace Corporation and Space Telescope Science Institute. Tasks for the mission include replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the ACS, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002

  6. KSC01pd1735

    NASA Image and Video Library

    2001-11-26

    KENNEDY SPACE CENTER, Fla. - A piece of equipment for the Hubble Space Telescope Servicing Mission arrives at Hangar AE, Cape Canaveral. Inside the canister is the Advanced Camera for Surveys (ACS). The ACS will increase the discovery efficiency of the HST by a factor of ten. It consists of three electronic cameras and a complement of filters and dispersers that detect light from the ultraviolet to the near infrared (1200 - 10,000 angstroms). The ACS was built through a collaborative effort between Johns Hopkins University, Goddard Space Flight Center, Ball Aerospace Corporation and Space Telescope Science Institute. The goal of the mission, STS-109, is to service the HST, replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the ACS, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002.

  7. Performances Of The New Streak Camera TSN 506

    NASA Astrophysics Data System (ADS)

    Nodenot, P.; Imhoff, C.; Bouchu, M.; Cavailler, C.; Fleurot, N.; Launspach, J.

    1985-02-01

    The number of streak cameras used in research laboratories has increased continuously during the past years. The growth of this type of equipment is due to the development of various measurement techniques in the nanosecond and picosecond range. Among the many different applications, we would mention detonics chronometry measurement, measurement of the speed of matter by means of Doppler-laser interferometry, and laser and plasma diagnostics associated with laser-matter interaction. The old range of cameras has been remodelled, in order to standardize and rationalize the production of ultrafast cinematography instruments, to produce a single camera known as the TSN 506. The TSN 506 is composed of an electronic control unit built around the image converter tube; it can be fitted with a nanosecond sweep circuit covering the whole range from 1 ms to 200 ns or with a picosecond circuit providing streak durations from 1 to 100 ns. We shall describe the main electronic and opto-electronic performance of the TSN 506 operating in these two temporal fields.

  8. Making Ceramic Cameras

    ERIC Educational Resources Information Center

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  9. An Innovative Procedure for Calibration of Strapdown Electro-Optical Sensors Onboard Unmanned Air Vehicles

    PubMed Central

    Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio; Rispoli, Attilio

    2010-01-01

    This paper presents an innovative method for estimating the attitude of airborne electro-optical cameras with respect to the onboard autonomous navigation unit. The procedure is based on the use of attitude measurements under static conditions taken by an inertial unit and carrier-phase differential Global Positioning System to obtain accurate camera position estimates in the aircraft body reference frame, while image analysis allows line-of-sight unit vectors in the camera based reference frame to be computed. The method has been applied to the alignment of the visible and infrared cameras installed onboard the experimental aircraft of the Italian Aerospace Research Center and adopted for in-flight obstacle detection and collision avoidance. Results show an angular uncertainty on the order of 0.1° (rms). PMID:22315559

  10. Imaging System For Measuring Macromolecule Crystal Growth Rates in Microgravity

    NASA Technical Reports Server (NTRS)

    Corder, Eric L.; Briscoe, Jeri

    2004-01-01

    In order to determine how macromolecule crystal quality improvement in microgravity is related to crystal growth characteristics, a team of scientists and engineers at NASA's Marshall Space Flight Center (MSFC) developed flight hardware capable of measuring the crystal growth rates of a population of crystals growing under the same conditions. As crystal growth rate is defined as the change or delta in a defined dimension or length (L) of a crystal over time, the hardware was named Delta-L. Delta-L consists of three subassemblies: a fluid unit including a temperature-controlled growth cell, an imaging unit, and a control unit (consisting of a Data Acquisition and Control Unit (DACU) and a thermal control unit). Delta-L will be used in connection with the Glovebox Integrated Microgravity Isolation Technology (g-LIMIT) inside the Microgravity Science Glovebox (MSG) onboard the International Space Station. This paper will describe the Delta-L imaging system. The Delta-L imaging system was designed to locate, resolve, and capture images of up to 10 individual crystals ranging in size from 10 to 500 microns with a point-to-point accuracy of +/- 2.0 microns within a quartz growth cell observation area of 20 mm x 10 mm x 1 mm. The optical imaging system is comprised of a video microscope camera mounted on computer-controlled translation stages. The 3-axis translation stages and control units provide crewmembers the ability to search throughout the growth cell observation area for crystals forming at a size of approximately 10 microns. Once the crewmember has selected ten crystals of interest, the growth of these crystals is tracked until the size reaches approximately 500 microns. In order to resolve these crystals, an optical system with a magnification of 10X was designed. A black-and-white NTSC camera was utilized with a 20X microscope objective and a 0.5X custom-designed relay lens with an inline light to meet the magnification requirement. The design allows a 500 μm crystal to be viewed in the vertical dimension on a standard NTSC monitor (4:3 aspect ratio). Images of the 10 crystals are collected periodically and stored in sets by the DACU.

  11. Recent developments for the Large Binocular Telescope Guiding Control Subsystem

    NASA Astrophysics Data System (ADS)

    Golota, T.; De La Peña, M. D.; Biddick, C.; Lesser, M.; Leibold, T.; Miller, D.; Meeks, R.; Hahn, T.; Storm, J.; Sargent, T.; Summers, D.; Hill, J.; Kraus, J.; Hooper, S.; Fisher, D.

    2014-07-01

    The Large Binocular Telescope (LBT) has eight Acquisition, Guiding, and wavefront Sensing Units (AGw units). They provide guiding and wavefront sensing capability at eight different locations at both direct and bent Gregorian focal stations. Recent additions of focal stations for PEPSI and MODS instruments doubled the number of focal stations in use including respective motion, camera controller server computers, and software infrastructure communicating with Guiding Control Subsystem (GCS). This paper describes the improvements made to the LBT GCS and explains how these changes have led to better maintainability and contributed to increased reliability. This paper also discusses the current GCS status and reviews potential upgrades to further improve its performance.

  12. Prototype AEGIS: A Pixel-Array Readout Circuit for Gamma-Ray Imaging.

    PubMed

    Barber, H Bradford; Augustine, F L; Furenlid, L; Ingram, C M; Grim, G P

    2005-07-31

    Semiconductor detector arrays made of CdTe/CdZnTe are expected to be the main components of future high-performance, clinical nuclear medicine imaging systems. Such systems will require small pixel-pitch and much larger numbers of pixels than are available in current semiconductor-detector cameras. We describe the motivation for developing a new readout integrated circuit, AEGIS, for use in hybrid semiconductor detector arrays, that may help spur the development of future cameras. A basic design for AEGIS is presented together with results of an HSPICE ™ simulation of the performance of its unit cell. AEGIS will have a shaper-amplifier unit cell and neighbor pixel readout. Other features include the use of a single input power line with other biases generated on-board, a control register that allows digital control of all thresholds and chip configurations and an output approach that is compatible with list-mode data acquisition. An 8×8 prototype version of AEGIS is currently under development; the full AEGIS will be a 64×64 array with 300 μm pitch.
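The neighbor-pixel readout and list-mode output described above can be modeled in a few lines: when a pixel crosses threshold, the event record captures the pixel address plus its 3x3 neighborhood of signal values. This is an illustrative software analogue, not the AEGIS circuit itself; the function name, data layout, and numbers are all invented:

```python
import numpy as np

def list_mode_events(frame, threshold):
    """Sketch of threshold-triggered neighbor-pixel readout.

    For every pixel above threshold, record a list-mode event containing
    the pixel address and the 3x3 neighborhood of signal values, in the
    spirit of the neighbor readout the abstract describes.
    """
    padded = np.pad(frame, 1)  # zero border so edge pixels have neighbors
    events = []
    for r, c in zip(*np.nonzero(frame > threshold)):
        # padded[r:r+3, c:c+3] is the 3x3 window centered on (r, c).
        neighborhood = padded[r:r + 3, c:c + 3].copy()
        events.append({"row": int(r), "col": int(c), "samples": neighborhood})
    return events

frame = np.zeros((8, 8))
frame[3, 4] = 50.0   # a single gamma-ray hit
frame[3, 5] = 12.0   # charge shared with a neighbor, below threshold
events = list_mode_events(frame, threshold=20.0)
```

Capturing the sub-threshold neighbors in the same event is what lets charge shared across pixel boundaries be summed back together downstream, which is the usual motivation for this readout style.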

  13. Multi-channel automotive night vision system

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

    A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right, and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated; the source contains a thermoelectric cooler (TEC), can be synchronized with the camera focusing, and has automatic light intensity adjustment, which together ensure image quality. The composition of the system is described in detail; on this basis, beam collimation, the LD driving and LD temperature control of the near-infrared laser light source, and the four-channel image processing display are discussed. The system can be used for driver assistance, blind-spot information (BLIS), parking assistance, and alarm functions by day and night.

  14. ETR, TRA642. ETR COMPLEX NEARLY COMPLETE. CAMERA FACES NORTHWEST, PROBABLY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    ETR, TRA-642. ETR COMPLEX NEARLY COMPLETE. CAMERA FACES NORTHWEST, PROBABLY FROM TOP DECK OF COOLING TOWER. SHADOW IS CAST BY COOLING TOWER UNITS OFF LEFT OF VIEW. HIGH-BAY REACTOR BUILDING IS SURROUNDED BY ITS ATTACHED SERVICES: ELECTRICAL (TRA-648), HEAT EXCHANGER (TRA-644 WITH U-SHAPED YARD), AND COMPRESSOR (TRA-643). THE CONTROL BUILDING (TRA-647) ON THE NORTH SIDE IS HIDDEN FROM VIEW. AT UPPER RIGHT IS MTR BUILDING, TRA-603. INL NEGATIVE NO. 56-3798. Jack L. Anderson, Photographer, 11/26/1956 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  15. Video control system for a drilling in furniture workpiece

    NASA Astrophysics Data System (ADS)

    Khmelev, V. L.; Satarov, R. N.; Zavyalova, K. V.

    2018-05-01

    During the last 5 years, Russian industry has been undergoing robotization, and scientific groups have consequently received new tasks. One of these is machine vision systems that solve the problem of automatic quality control. Systems of this type cost several thousand dollars each, a price out of reach for regional small businesses. In this article, we describe the principle and algorithm of an inexpensive video control system that uses web cameras and a notebook or desktop computer as its computing unit.
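The paper does not publish its algorithm here, but one plausible building block of drilling quality control is counting dark blobs (candidate drilled holes) in a binarized camera frame. The sketch below is a minimal stand-in under that assumption, using a flood fill over a synthetic image rather than a real web-camera frame:

```python
def count_holes(gray, dark_threshold=60):
    """Count connected dark blobs (candidate drilled holes) in a grayscale
    image given as a list of rows. A deliberately minimal stand-in for a
    web-camera quality-control check; a real system would also verify hole
    positions and diameters against the drilling program."""
    rows, cols = len(gray), len(gray[0])
    seen = [[False] * cols for _ in range(rows)]
    holes = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if gray[r0][c0] < dark_threshold and not seen[r0][c0]:
                holes += 1
                stack = [(r0, c0)]          # flood-fill one connected blob
                while stack:
                    r, c = stack.pop()
                    if (0 <= r < rows and 0 <= c < cols
                            and gray[r][c] < dark_threshold and not seen[r][c]):
                        seen[r][c] = True
                        stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return holes

# Synthetic 20x20 "workpiece" image: bright background with two dark holes.
img = [[200] * 20 for _ in range(20)]
for r in range(4, 7):
    for c in range(4, 7):
        img[r][c] = 10
for r in range(12, 15):
    for c in range(13, 16):
        img[r][c] = 10
```

In a deployed system the frame would come from a web camera and the threshold would need calibration against workpiece material and lighting, but the connected-component idea carries over unchanged.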

  16. Applied Meteorology Unit (AMU) Quarterly Report. First Quarter FY-05

    NASA Technical Reports Server (NTRS)

    Bauman, William; Wheeler, Mark; Lambert, Winifred; Case, Jonathan; Short, David

    2005-01-01

    This report summarizes the Applied Meteorology Unit (AMU) activities for the first quarter of Fiscal Year 2005 (October - December 2004). Tasks reviewed include: (1) Objective Lightning Probability Forecast: Phase I, (2) Severe Weather Forecast Decision Aid, (3) Hail Index, (4) Stable Low Cloud Evaluation, (5) Shuttle Ascent Camera Cloud Obstruction Forecast, (6) Range Standardization and Automation (RSA) and Legacy Wind Sensor Evaluation, (7) Advanced Regional Prediction System (ARPS) Optimization and Training Extension, and (8) User Control Interface for ARPS Data Analysis System (ADAS) Data Ingest.

  17. Space Shuttle Projects

    NASA Image and Video Library

    2002-03-07

    Inside the Space Shuttle Columbia's cabin, astronaut Nancy J. Currie, mission specialist, controlled the Remote Manipulator System (RMS) on the crew cabin's aft flight deck to assist fellow astronauts during the STS-109 mission Extra Vehicular Activities (EVA). The RMS was used to capture the telescope and secure it into Columbia's cargo bay. The Space Shuttle Columbia STS-109 mission lifted off March 1, 2002 with goals of repairing and upgrading the Hubble Space Telescope (HST). The Marshall Space Flight Center in Huntsville, Alabama had the responsibility for the design, development, and construction of the HST, which is the most powerful and sophisticated telescope ever built. STS-109 upgrades to the HST included: replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. Lasting 10 days, 22 hours, and 11 minutes, the STS-109 mission was the 108th flight overall in NASA's Space Shuttle Program.

  18. The Subject Headings of the Morris Swett Library, USAFAS. Revised.

    DTIC Science & Technology

    1980-05-15

    Royal Armoured Corps. x Armored force. Armored troops. Armored units. Mechanized force. Mechanized units. Mechanized warfare. Tank companies. Tank... e.g., U.S. Army - Physical training. CAMERA MOUNTS. CAMERAS, AERIAL. CAMOUFLAGE. (U 166.3h) x Air arm - Camouflage. - Bibliography. - Drape

  19. Obstacle negotiation control for a mobile robot suspended on overhead ground wires by optoelectronic sensors

    NASA Astrophysics Data System (ADS)

    Zheng, Li; Yi, Ruan

    2009-11-01

    Power line inspection and maintenance already benefit from developments in mobile robotics. This paper presents mobile robots capable of crossing obstacles on overhead ground wires. A teleoperated robot realizes inspection and maintenance tasks on power transmission line equipment. The inspection robot is driven by 11 motors and has two arms, two wheels, and two claws. The inspection robot is designed to realize the functions of observation, grasping, walking, rolling, turning, rising, and descending. This paper is oriented toward 100% reliable obstacle detection and identification, and sensor fusion to increase the autonomy level. An embedded computer based on the PC/104 bus is chosen as the core of the control system. A visible light camera and a thermal infrared camera are both installed in a programmable pan-and-tilt camera (PPTC) unit. High-quality visual feedback rapidly becomes crucial for human-in-the-loop control and effective teleoperation. The communication system between the robot and the ground station is based on mesh wireless networks in the 700 MHz band. An expert system programmed with Visual C++ is developed to implement the automatic control. Optoelectronic laser sensors and a laser range scanner were installed on the robot for obstacle-navigation control to grasp the overhead ground wires. A novel prototype with careful considerations on mobility was designed to inspect 500 kV power transmission lines. Results of experiments demonstrate that the robot can be applied to execute the navigation and inspection tasks.

  20. Hydrogen peroxide plasma sterilization of a waterproof, high-definition video camera case for intraoperative imaging in veterinary surgery.

    PubMed

    Adin, Christopher A; Royal, Kenneth D; Moore, Brandon; Jacob, Megan

    2018-06-13

    To evaluate the safety and usability of a wearable, waterproof high-definition camera/case for acquisition of surgical images by sterile personnel. An in vitro study to test the efficacy of biodecontamination of camera cases. Usability for intraoperative image acquisition was assessed in clinical procedures. Two waterproof GoPro Hero4 Silver camera cases were inoculated by immersion in media containing Staphylococcus pseudointermedius or Escherichia coli at ≥5.50E+07 colony forming units/mL. Cases were biodecontaminated by manual washing and hydrogen peroxide plasma sterilization. Cultures were obtained by swab and by immersion in enrichment broth before and after each contamination/decontamination cycle (n = 4). The cameras were then applied by a surgeon in clinical procedures by using either a headband or handheld mode and were assessed for usability according to 5 user characteristics. Cultures of all poststerilization swabs were negative. One of 8 cultures was positive in enrichment broth, consistent with a low level of contamination in 1 sample. Usability of the camera was considered poor in headband mode, with limited battery life, inability to control camera functions, and lack of zoom function affecting image quality. Handheld operation of the camera by the primary surgeon improved usability, allowing close-up still and video intraoperative image acquisition. Vaporized hydrogen peroxide sterilization of this camera case was considered effective for biodecontamination. Handheld operation improved usability for intraoperative image acquisition. Vaporized hydrogen peroxide sterilization and thorough manual washing of a waterproof camera may provide cost effective intraoperative image acquisition for documentation purposes. © 2018 The American College of Veterinary Surgeons.

  1. Currie in FGB during repair of battery recharger

    NASA Image and Video Library

    1998-12-11

    S88-E-5076 (12-11-98) --- Astronaut Nancy J. Currie, mission specialist, participates in work aboard Zarya. One of Currie's tasks was to replace a faulty unit which controls the discharging of stored energy from one of the module's six batteries. The photo was taken with an electronic still camera (ESC) at 01:58:16 GMT, Dec. 11.

  2. Development of the Research Platform of Small Autonomous Blimp Robot

    NASA Astrophysics Data System (ADS)

    Takaya, Toshihiko; Kawamura, Hidenori; Yamamoto, Masahito; Ohuchi, Azuma

    A blimp robot is attractive as a small flight robot: it floats in the air by buoyancy, is safe against crashes, and can operate for a long time at low energy compared with other flight robots. However, control of a blimp robot is difficult because of the nonlinear dynamics induced by inertia and air flow. Applied research that makes maximum use of the blimp robot's features has therefore flourished in recent years. In this paper, we describe the development of a general-purpose research blimp robot, built by dividing the robot body into exchangeable units, to support both basic research on blimp robots and application development. By providing a general-purpose blimp robot research platform, the research efficiency of many researchers can be improved; starting research on blimp robots also becomes easier, which contributes to the development of the field. We performed experiments to demonstrate the above. 1. We checked basic station-keeping performance and that various orbital maneuvers were possible, and verified the ease of exchanging software units by swapping the control layer from PID control to learning control and comparing the resulting behavior. 2. To check the ease of exchanging hardware units, the sensor was exchanged from a camera to a microphone and control behavior was checked. 3. For ease of adding units, a microphone performing sound detection was added alongside the camera performing image detection, and control behavior was verified. 4. Units were exchanged to check the ease of adding functions, and a topological-map-generation experiment with an added ultrasonic sensor was conducted. 
The developed research blimp robot achieves ease of exchanging and adding units: in hardware through analog and digital interfaces, and in software through a layered structure of combinable software modules. Consequently, functions could be added and exchanged easily, realizing a research platform for blimp robots.
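Swapping a PID control layer for a learning control layer, as in the software-exchange experiment, presupposes that both layers share a common interface. A minimal Python sketch of that idea follows; the class names, interface, and gains are all hypothetical, not taken from the paper:

```python
class Controller:
    """Common interface so control layers can be swapped without touching
    the rest of the software stack."""
    def command(self, error, dt):
        raise NotImplementedError

class PIDController(Controller):
    """Textbook PID law; kp, ki, kd are illustrative gains."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def command(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def steer(controller, errors, dt=0.1):
    """The rest of the stack only sees the Controller interface, so a
    learning controller could be dropped in here unchanged."""
    return [controller.command(e, dt) for e in errors]

# Shrinking heading errors should yield shrinking commands.
commands = steer(PIDController(kp=1.0, ki=0.1, kd=0.05), [1.0, 0.8, 0.5])
```

A learning control layer would subclass the same `Controller` interface, which is exactly the exchangeability property the experiment tested.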

  3. Initial Performance of the Attitude Control and Aspect Determination Subsystems on the Chandra Observatory

    NASA Technical Reports Server (NTRS)

    Cameron, R.; Aldcroft, T.; Podgorski, W. A.; Freeman, M. D.

    2000-01-01

    The aspect determination system of the Chandra X-ray Observatory plays a key role in realizing the full potential of Chandra's X-ray optics and detectors. We review the performance of the spacecraft hardware components and sub-systems, which provide information for both real time control of the attitude and attitude stability of the Chandra Observatory and also for more accurate post-facto attitude reconstruction. These flight components are comprised of the aspect camera (star tracker) and inertial reference units (gyros), plus the fiducial lights and fiducial transfer optics which provide an alignment null reference system for the science instruments and X-ray optics, together with associated thermal and structural components. Key performance measures will be presented for aspect camera focal plane data, gyro performance both during stable pointing and during maneuvers, alignment stability and mechanism repeatability.

  4. Viking lander camera radiometry calibration report, volume 2

    NASA Technical Reports Server (NTRS)

    Wolf, M. R.; Atwood, D. L.; Morrill, M. E.

    1977-01-01

    The requirements, performance validation, and interfaces for the RADCAM program, which converts Viking lander camera image data to radiometric units, were established. A proposed algorithm is described, and an appendix summarizing the planned reduction of camera test data is included.

  5. LAMOST CCD camera-control system based on RTS2

    NASA Astrophysics Data System (ADS)

    Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng

    2018-05-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and provides a referential solution for full RTS2 introduction to the LAMOST observatory control system.
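The master-slave mapping from one virtual camera module to 32 real cameras can be sketched as a fan-out layer: the framework talks to a single device, which dispatches each command to every slave. This is an illustrative Python model under assumed interfaces, not RTS2 code; all class and method names are invented:

```python
from concurrent.futures import ThreadPoolExecutor

class SlaveCamera:
    """Stand-in for one real CCD controller (hypothetical interface)."""
    def __init__(self, cam_id):
        self.cam_id = cam_id

    def expose(self, seconds):
        # A real slave would trigger the CCD and return image data.
        return (self.cam_id, f"frame_{self.cam_id}_{seconds}s")

class VirtualCamera:
    """Master that presents many CCDs to the framework as one device,
    fanning each command out to the slaves in parallel, in the spirit of
    the master-slave virtualized layer the paper describes."""
    def __init__(self, slaves):
        self.slaves = slaves

    def expose(self, seconds):
        with ThreadPoolExecutor(max_workers=len(self.slaves)) as pool:
            return dict(pool.map(lambda s: s.expose(seconds), self.slaves))

virtual = VirtualCamera([SlaveCamera(i) for i in range(32)])
frames = virtual.expose(1.5)
```

One attraction of this pattern is robustness: the master can time out or retry an individual slave without the observatory control system ever seeing 32 separate devices.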

  6. Scalable software architecture for on-line multi-camera video processing

    NASA Astrophysics Data System (ADS)

    Camplani, Massimo; Salgado, Luis

    2011-03-01

    In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability, and flexibility. The software system is modular; its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. As a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
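The Central Unit / Processing Unit split might be modeled as below. This is a toy Python sketch under assumed interfaces: one PU per camera with an acquisition phase and a processing phase, a Central Unit that dispatches frames and gathers results, and a trivial string-count "detector" standing in for the paper's 2D object detection modules:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingUnit:
    """One PU: an acquisition phase followed by a processing phase.
    The substring count is a placeholder for a real 2D object detector."""
    camera_id: int
    processed: list = field(default_factory=list)

    def acquire(self, frame):
        return {"camera": self.camera_id, "frame": frame}

    def process(self, acquired):
        acquired["objects_found"] = acquired["frame"].count("car")
        self.processed.append(acquired)
        return acquired

class CentralUnit:
    """Supervisor that dispatches frames to the PUs and gathers results."""
    def __init__(self, n_cameras):
        self.units = [ProcessingUnit(i) for i in range(n_cameras)]

    def step(self, frames):
        # One frame per camera; a real system would run PUs concurrently.
        return [pu.process(pu.acquire(f)) for pu, f in zip(self.units, frames)]

central = CentralUnit(n_cameras=3)
results = central.step(["car car", "empty", "car"])
```

Because each PU owns its whole acquisition-plus-processing pipeline, adding a camera means adding a PU, which is the property that lets such an architecture scale with camera count.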

  7. Photography in Dermatologic Surgery: Selection of an Appropriate Lighting Solution for a Particular Clinical Application.

    PubMed

    Chen, Brian R; Poon, Emily; Alam, Murad

    2018-01-01

    Lighting is an important component of consistent, high-quality dermatologic photography. There are different types of lighting solutions available. To evaluate currently available lighting equipment and methods suitable for procedural dermatology. Overhead lighting, built-in camera flashes, external flash units, studio strobes, and light-emitting diode (LED) light panels were evaluated with regard to their utility for dermatologic surgeons. A set of ideal lighting characteristics was used to examine the capabilities and limitations of each type of lighting solution. Recommendations regarding lighting solutions and optimal usage configurations were made in terms of the context of the clinical environment and the purpose of the image. Overhead lighting may be a convenient option for general documentation. An on-camera lighting solution using a built-in camera flash or a camera-mounted external flash unit provides portability and consistent lighting with minimal training. An off-camera lighting solution with studio strobes, external flash units, or LED light panels provides versatility and even lighting with minimal shadows and glare. The selection of an optimal lighting solution is contingent on practical considerations and the purpose of the image.

  8. History of the formerly top secret KH-9 Hexagon spy satellite

    NASA Astrophysics Data System (ADS)

    Pressel, Phil

    2014-12-01

    This paper is about the development, design, fabrication and use of the KH-9 Hexagon spy-in-the-sky satellite camera system, which was finally declassified by the National Reconnaissance Office on September 17, 2011, twenty-five years after the program ended. It was the last film-based reconnaissance camera and was known by experts in the field as "the most complicated system ever put up in orbit." It provided important intelligence for the United States government and was the reason that President Nixon was able to sign the SALT treaty; when President Reagan said "trust but verify," it provided the means of verification. Each satellite weighed 30,000 pounds and carried two cameras, permitting photographs of the entire landmass of the earth to be taken in stereo. Each camera carried up to 30 miles of film, for a total of 60 miles of film. Ultra-complex mechanisms controlled the structurally "wimpy" film, which traveled at speeds up to 204 inches per second at the focal plane and was perfectly synchronized to the optical image.

  9. Energy discriminating x-ray camera utilizing a cadmium telluride detector

    NASA Astrophysics Data System (ADS)

    Sato, Eiichi; Purkhet, Abderyim; Matsukiyo, Hiroshi; Osawa, Akihiro; Enomoto, Toshiyuki; Wantanabe, Manabu; Nagao, Jiro; Nomiya, Seiichiro; Hitomi, Keitaro; Tanaka, Etsuro; Kawai, Toshiaki; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun

    2009-07-01

    An energy-discriminating x-ray camera is useful for performing monochromatic radiography using polychromatic x rays. This x-ray camera was developed to carry out K-edge radiography using iodine-based contrast media. In this camera, objects are exposed by a cone beam from a cerium x-ray generator, and penetrating x-ray photons are detected by a cadmium telluride detector with an amplifier unit. The optimal x-ray photon energy and energy width are selected using a multichannel analyzer, and the photon number is counted by a counter card. Radiography was performed by scanning the detector using an x-y stage driven by a two-stage controller, and radiograms obtained by energy discrimination are shown on a personal computer monitor. In radiography, the tube voltage and current were 60 kV and 36 μA, respectively, and the x-ray intensity was 4.7 μGy/s. Cerium K-series characteristic x rays are absorbed effectively by iodine-based contrast media, and iodine K-edge radiography was performed using x rays with energies just beyond the iodine K-edge energy of 33.2 keV.
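The energy-window selection performed by the multichannel analyzer and counter card has a simple software analogue: keep only photons whose measured energy falls inside the chosen window and count them. The sketch below illustrates this; the photon energy list is fabricated for illustration, and the window is centered near the cerium K-alpha line at roughly 34.7 keV, just above the 33.2 keV iodine K edge:

```python
def count_in_window(photon_energies_kev, center_kev, width_kev):
    """Count photons whose energy falls inside the selected window,
    a simplified software analogue of the multichannel-analyzer-plus-
    counter-card stage in the camera described above."""
    lo, hi = center_kev - width_kev / 2, center_kev + width_kev / 2
    return sum(1 for e in photon_energies_kev if lo <= e <= hi)

# Fabricated detected energies in keV; 59.2 might be a tungsten
# contaminant line, 28.1 a scattered photon, both outside the window.
energies = [28.1, 33.5, 34.6, 34.8, 36.0, 59.2]
counts = count_in_window(energies, center_kev=34.7, width_kev=3.0)
```

Counting only the photons just above the iodine K edge is what makes the contrast medium absorb strongly relative to surrounding tissue, which is the point of K-edge radiography.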

  10. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a realtime, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in High-Frame-Rate CCD Camera Having Subwindow Capability (NPO- 30564) NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/ transistor-logic (TTL)-level signals from a field programmable gate array (FPGA) controller card. 
These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).
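The per-ROI readout model described above can be sketched in code. This is a hypothetical illustration, not the camera's actual command set: the ROI structure, the per-ROI overhead, and the 5-Mpixel/s rate are assumed for the sake of the arithmetic.

```python
from dataclasses import dataclass

@dataclass
class ROI:
    """A readout subwindow, repositionable at the camera frame rate."""
    x: int       # left edge (pixels)
    y: int       # top edge (pixels)
    width: int
    height: int

def roi_readout_time_us(rois, overhead_per_roi_us=5.0, pixel_rate_px_s=5e6):
    """Estimate per-frame readout time when only the ROIs are digitized.

    Both parameters are illustrative assumptions, not measured values.
    """
    total_px = sum(r.width * r.height for r in rois)
    return total_px / pixel_rate_px_s * 1e6 + overhead_per_roi_us * len(rois)

# Two small tracking windows read out instead of a full 1024x1024 frame
rois = [ROI(100, 200, 64, 64), ROI(700, 500, 32, 32)]
print(f"{roi_readout_time_us(rois):.1f} us")
```

Reading 5,120 ROI pixels instead of a full megapixel frame is what makes the higher frame rates attainable.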

  11. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high-performance design which produces near-film-quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge-coupled device ('CCD'), collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample-and-hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to ensure accuracy during the conversion. Two types of data ports are included for two different data transfer needs: one is a general-purpose industry-standard port and the other a high-speed, high-performance application-specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.
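The analog signal chain described above, correlated double sampling followed by 8-bit conversion, can be sketched numerically. The voltage levels and full-scale range here are illustrative, not taken from the patent.

```python
def correlated_double_sample(reset_level, signal_level):
    """CDS: subtract the reset (reference) sample to cancel reset/offset noise."""
    return signal_level - reset_level

def adc_8bit(volts, full_scale=1.0):
    """Quantize a 0..full_scale signal to 8 bits (codes 0..255), with clamping."""
    code = round(volts / full_scale * 255)
    return max(0, min(255, code))

# One pixel: reset sample 0.30 V, signal sample 0.80 V -> 0.50 V after CDS
v = correlated_double_sample(0.30, 0.80)
print(adc_8bit(v))   # 128: half of full scale, rounded
```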

  12. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high-performance design which produces near-film-quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge-coupled device ('CCD'), collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample-and-hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to ensure accuracy during the conversion. Two types of data ports are included for two different data transfer needs: one is a general-purpose industry-standard port and the other a high-speed, high-performance application-specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  13. Observation sequences and onboard data processing of Planet-C

    NASA Astrophysics Data System (ADS)

    Suzuki, M.; Imamura, T.; Nakamura, M.; Ishi, N.; Ueno, M.; Hihara, H.; Abe, T.; Yamada, T.

Planet-C, or VCO (Venus Climate Orbiter), will carry five cameras covering the UV-IR region: IR1 (1-micrometer IR camera), IR2 (2-micrometer IR camera), UVI (UV Imager), LIR (long-IR camera), and LAC (Lightning and Airglow Camera), to investigate the atmospheric dynamics of Venus. During the 30-hr orbit, designed to quasi-synchronize with the super-rotation of the Venus atmosphere, three groups of scientific observations will be carried out: (i) image acquisition by the four cameras IR1, IR2, UVI, and LIR (20 min in 2 hrs); (ii) LAC operation, only when VCO is within the Venus shadow; and (iii) radio occultation. These observation sequences will define the scientific outputs of the VCO program, but the sequences must be reconciled with command, telemetry downlink, thermal, and power conditions. To maximize science data downlink, the data must be well compressed, and the compression efficiency and image quality are of significant scientific importance to the VCO program. Images from the four cameras (IR1, IR2, and UVI: 1K x 1K; LIR: 240 x 240) will be compressed using the JPEG2000 (J2K) standard. J2K was selected because it (a) produces no block noise, (b) is efficient, (c) supports both reversible and irreversible compression, (d) is patent-royalty free, and (e) is already implemented in academic and commercial software, ICs, and ASIC logic designs. Data compression efficiencies of J2K are about 0.3 (reversible) and 0.1 to 0.01 (irreversible). The DE (Digital Electronics) unit, which controls the four cameras and handles onboard data processing and compression, is at the concept design stage. It is concluded that the J2K data compression logic circuits using space
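The quoted compression efficiencies translate directly into downlink volumes. A minimal sketch, assuming 12-bit pixels (the abstract does not state the bit depth):

```python
def compressed_kbytes(width, height, bits_per_px, ratio):
    """Compressed size of one frame in KB, for a given J2K efficiency.

    'ratio' follows the abstract's convention of compressed/original size:
    about 0.3 for reversible, 0.1 to 0.01 for irreversible compression.
    """
    raw_bits = width * height * bits_per_px
    return raw_bits * ratio / 8 / 1024

# 1K x 1K frame with an assumed 12-bit depth
print(round(compressed_kbytes(1024, 1024, 12, 1.0)))   # 1536 KB uncompressed
print(round(compressed_kbytes(1024, 1024, 12, 0.3)))   # 461 KB reversible
print(round(compressed_kbytes(1024, 1024, 12, 0.01)))  # 15 KB aggressive irreversible
```

The two-orders-of-magnitude spread between the reversible and the most aggressive irreversible setting is why compression efficiency versus image quality is a scientific trade-off for the mission.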

  14. Effect of camera angulation on adaptation of CAD/CAM restorations.

    PubMed

    Parsell, D E; Anderson, B C; Livingston, H M; Rudd, J I; Tankersley, J D

    2000-01-01

A significant concern with computer-assisted design/computer-assisted manufacturing (CAD/CAM)-produced prostheses is the accuracy of adaptation of the restoration to the preparation. The objective of this study was to determine the effect of operator-controlled camera misalignment on restoration adaptation. A CEREC 2 CAD/CAM unit (Sirona Dental Systems, Bensheim, Germany) was used to capture the optical impressions and machine the restorations. A Class I preparation was used as the standard preparation for optical impressions. Camera angles along the mesiodistal and buccolingual alignment were varied from the ideal orientation. Occlusal marginal gaps and sample height, width, and length were measured and compared to preparation dimensions. For clinical correlation, clinicians were asked to take optical impressions of mesio-occlusal preparations (Class II) on all four second molar sites, using a patient simulator. On the adjacent first molar occlusal surfaces, a preparation was machined such that camera angulation could be calculated from information taken from the optical impression. Degree of tilt and plane of tilt were compared to the optimum camera positions for those preparations. One-way analysis of variance and Dunnett C post hoc testing (alpha = 0.01) revealed little significant degradation in fit with camera angulation. Only the apical length fit was significantly degraded by excessive angulation. The CEREC 2 CAD/CAM system was found to be relatively insensitive to operator-induced errors attributable to camera misalignments of less than 5 degrees in either the buccolingual or the mesiodistal plane. The average camera tilt error generated by clinicians for all sites was 1.98 +/- 1.17 degrees.

  15. An integrated port camera and display system for laparoscopy.

    PubMed

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  16. Rotatable prism for pan and tilt

    NASA Technical Reports Server (NTRS)

    Ball, W. B.

    1980-01-01

    Compact, inexpensive, motor-driven prisms change field of view of TV camera. Camera and prism rotate about lens axis to produce pan effect. Rotating prism around axis parallel to lens produces tilt. Size of drive unit and required clearance are little more than size of camera.

  17. Camera Control and Geo-Registration for Video Sensor Networks

    NASA Astrophysics Data System (ADS)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
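The core of such a PTZ control model is the mapping between pan/tilt angles and directions on the spherical panoramic viewspace. A minimal sketch of that forward and inverse mapping, ignoring zoom, lens distortion, and mechanical offsets; the angle conventions below are assumptions, not the paper's:

```python
import math

def ptz_to_unit_vector(pan_deg, tilt_deg):
    """Viewing direction on the unit sphere for given pan/tilt angles.

    Convention assumed here: pan is azimuth measured from +x toward +y,
    tilt is elevation from the horizontal plane.
    """
    p, t = math.radians(pan_deg), math.radians(tilt_deg)
    return (math.cos(t) * math.cos(p), math.cos(t) * math.sin(p), math.sin(t))

def unit_vector_to_ptz(v):
    """Inverse mapping: recover (pan, tilt) in degrees from a direction."""
    x, y, z = v
    return (math.degrees(math.atan2(y, x)), math.degrees(math.asin(z)))

# Round trip: reposition command -> sphere point -> reposition command
pan, tilt = unit_vector_to_ptz(ptz_to_unit_vector(30.0, -15.0))
print(round(pan, 6), round(tilt, 6))   # 30.0 -15.0
```

With such a mapping in both directions, a scene point found on the geo-referenced panorama can be converted to a repositioning command for any camera that sees it.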

  18. Evaluating detection and monitoring tools for incipient and relictual non-native ungulate populations

    USGS Publications Warehouse

    Judge, Seth W.; Hess, Steve; Faford, Jonathan K.J.; Pacheco, Dexter; Leopold, Christina R.; Cole, Colleen; Deguzman, Veronica

    2016-01-01

Hawai‘i Volcanoes National Park (HAVO) encompasses 1,308 km2 on Hawai‘i Island. The park harbors endemic plants and animals which are threatened by a variety of invasive species. Introduced ungulates have caused sharp declines of numerous endemic species and have converted ecosystems to novel grazing systems in many cases. Local ranchers and the Territorial Government of Hawai‘i had long conducted regional ungulate control even prior to the establishment of HAVO in 1916. In 1995 the park’s hunting team began a new hunt database that allowed managers to review hunt effort and effectiveness in each management unit. Target species included feral pigs (Sus scrofa), European mouflon sheep (Ovis gmelini musimon), feral goats (Capra hircus) and wild cattle (Bos taurus). Hunters removed 1,204 feral pigs from HAVO over a 19-year period (1996‒2014). A variety of methods were employed, but trapping, snaring and ground hunts with dogs accounted for the most kills. Trapping yielded the most animals per unit effort. Hunters and volunteers removed 6,657 mouflon from HAVO; 6,601 of those were from the 468 km2 Kahuku Unit. Aerial hunts yielded the most animals followed by ground hunt methods. Hunters completed eradications of goats in several management units over an 18-year period (1997‒2014) when they removed the last 239 known individuals in HAVO primarily with aerial hunts. There have also been seven cattle and five feral dogs (Canis familiaris) removed from HAVO. Establishing benchmarks and monitoring the success of on-the-ground ungulate removal efforts can improve the efficiency of protecting and restoring native forest for high-priority watersheds and native wildlife. We tested a variety of methods to detect small populations of ungulates within HAVO and the Hō‘ili Wai study area in the high-priority watershed of Ka‘ū Forest Reserve on Hawai‘i Island.
We conducted ground surveys, aerial surveys and continuous camera trap monitoring in both fence-enclosed units and unenclosed units where populations of introduced mouflon and feral pigs threatened sensitive native plants and forest bird habitats. Beginning in June 2014, twenty infrared camera traps were positioned in areas occupied by ungulates. The cameras were active for at most 198 days, and then half of the cameras were baited with oats and salt blocks for 126 days. There were a total of 1,496 observations of mouflon captured on camera, totaling 2,592 individuals: 1,020 ewes, 900 rams, 276 lambs, and 396 sheep of unknown sex. There were no detections of the illegally introduced axis deer (Axis axis). There were 11 observations of feral pigs and 109 observations of other animals (birds, rats, and other small mammals), including one detection of the federally endangered Hawaiian hawk (Buteo solitarius). Mouflon detection rates did not increase near baited cameras until three months after the initial baiting. Ground-based surveys for ungulate presence were conducted along six transects in Kahuku in October 2014. Evidence of ungulates was detected in 27.5% of plots surveyed within an unenclosed unit, while an enclosed unit had sign in only 3.6% of plots surveyed. An aerial survey by helicopter was conducted in October 2014. A total of 378 mouflon were detected during the survey: 192 in the Kahuku Paddocks, 186 in the Kahuku East unit; no mouflon were detected in the actively controlled Mauka unit. Two baseline ungulate surveys have been completed at the Hō‘ili Wai study area in the high-priority watershed of Ka‘ū Forest Reserve adjacent to Kahuku prior to the completion of an exclusionary ungulate fence. Ground-based surveys were conducted on four transects within a 4.99 km2 area on 5 August and 5–6 November 2014. In August, 20.71% of 565 plots surveyed had fresh or intermediate ungulate sign.
In November, 17.41% of 557 plots surveyed had fresh or intermediate ungulate sign. These surveys represent baseline levels of ungulate activity prior to management; therefore, comparative inferences can be made about ungulate distribution and relative abundance, but inferences about absolute abundance cannot be made until all ungulates have been removed from the enclosed area. Additional ground-based surveys will be conducted when the fenced area has been fully enclosed, and until ungulate removals have been completed.
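The reported camera-trap tallies can be cross-checked with simple arithmetic; the counts come from the abstract, while the mean-group-size figure is a derived illustration, not a reported statistic:

```python
# Camera-trap mouflon tallies reported in the abstract
ewes, rams, lambs, unknown = 1020, 900, 276, 396
total_individuals = ewes + rams + lambs + unknown
print(total_individuals)   # 2592, matching the reported total

# Derived illustration: mean individuals per observation event
observations = 1496
print(round(total_individuals / observations, 2))   # 1.73
```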

  19. Enhanced operator perception through 3D vision and haptic feedback

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren

    2012-06-01

    Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.

  20. Determining the Performance of Fluorescence Molecular Imaging Devices using Traceable Working Standards with SI Units of Radiance

    PubMed Central

    Zhu, Banghe; Rasmussen, John C.; Litorja, Maritoni

    2017-01-01

To date, no emerging preclinical or clinical near-infrared fluorescence (NIRF) imaging devices for non-invasive and/or surgical guidance have their performances validated on working standards with SI units of radiance that enable comparison or quantitative quality assurance. In this work, we developed and deployed a methodology to calibrate a stable, solid phantom for emission radiance with units of mW · sr−1 · cm−2 for use in characterizing the measurement sensitivity, signal-to-noise ratio, and contrast of ICCD and IsCMOS detection. In addition, at calibrated radiances, we assess the transverse and lateral resolution of the ICCD and IsCMOS camera systems. The methodology allowed determination of the superior SNR of the ICCD over the IsCMOS camera system and the superior resolution of the IsCMOS over the ICCD camera system. Contrast depended upon the camera settings (binning and integration time) and the gain of the intensifier. Finally, because the architectures of CMOS and CCD camera systems result in vastly different performance, we comment on the utility of these systems for small animal imaging as well as clinical applications for non-invasive and surgical guidance. PMID:26552078
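Radiance in the units used above converts to the base SI form by a fixed factor, since 1 mW = 10^-3 W and 1 cm^-2 = 10^4 m^-2:

```python
def mw_sr_cm2_to_w_sr_m2(radiance):
    """Convert radiance from mW·sr^-1·cm^-2 to W·sr^-1·m^-2.

    1 mW = 1e-3 W and 1 cm^-2 = 1e4 m^-2, so the net factor is 10.
    """
    return radiance * 1e-3 * 1e4

# An illustrative phantom radiance of 0.5 mW·sr^-1·cm^-2
print(mw_sr_cm2_to_w_sr_m2(0.5))   # 5.0 W·sr^-1·m^-2
```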

  1. The integrated design and archive of space-borne signal processing and compression coding

    NASA Astrophysics Data System (ADS)

    He, Qiang-min; Su, Hao-hang; Wu, Wen-bo

    2017-10-01

With users' increasing demand for extracting information from remote sensing images, it is urgent to significantly enhance the whole system's imaging quality and imaging capability through integrated design, achieving a compact structure, light weight, and higher attitude-maneuver ability. At present, the remote sensing camera's video signal processing unit and its image compression and coding unit are distributed across different devices. The combined volume, weight, and power consumption of these two units is relatively large, which cannot meet the requirements of a high-mobility remote sensing camera. Guided by the technical requirements of a high-mobility remote sensing camera, this paper designs a space-borne integrated signal processing and compression circuit by drawing on a variety of technologies, such as high-speed, high-density analog-digital mixed PCB design, embedded DSP technology, and image compression based on special-purpose chips. This circuit lays a solid foundation for research on high-mobility remote sensing cameras.

  2. Circuit design of an EMCCD camera

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Song, Qian; Jin, Jianhui; He, Chun

    2012-07-01

EMCCDs have been used in astronomical observations in many ways. Recently we developed a camera using an EMCCD TX285. The CCD chip is cooled to -100°C in an LN2 dewar. The camera controller consists of a driving board, a control board and a temperature control board. Power supplies and driving clocks for the CCD are provided by the driving board; the timing generator is located on the control board. The timing generator and an embedded Nios II CPU are implemented in an FPGA. The ADC and the data transfer circuit are also on the control board and are controlled by the FPGA. Data transfer between the image workstation and the camera is done through a Camera Link frame grabber. The image acquisition software is built using VC++ and Sapera LT. This paper describes the camera structure, the main components, and the circuit design of the video signal processing channel, clock drivers, FPGA and Camera Link interfaces, and the temperature metering and control system. Some testing results are presented.

  3. Development of biostereometric experiments. [stereometric camera system

    NASA Technical Reports Server (NTRS)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  4. Do you think you have what it takes to set up a long-term video monitoring unit?

    PubMed

    Smith, Sheila L

    2006-03-01

    The single most important factor when setting up a long-term video monitoring unit is research. Research all vendors by traveling to other sites and calling other facilities. Considerations with equipment include the server, acquisition units, review units, cameras, software, and monitors as well as other factors including Health Insurance Portability and Accountability Act (HIPAA) compliance. Research customer support including both field and telephone support. Involve your Clinical Engineering Department in your investigations. Be sure to obtain warranty information. Researching placement of the equipment is essential. Communication with numerous groups is vital. Administration, engineers, clinical engineering, physicians, infection control, environmental services, house supervisors, security, and all involved parties should be involved in the planning.

  5. Skylab

    NASA Image and Video Library

    1970-01-01

    This 1970 photograph shows the flight unit for Skylab's White Light Coronagraph, an Apollo Telescope Mount (ATM) facility that photographed the solar corona in the visible light spectrum. A TV camera in the instrument provided real-time pictures of the occulted Sun to the astronauts at the control console and also transmitted the images to the ground. The Marshall Space Flight Center had program management responsibility for the development of Skylab hardware and experiments.

  6. Microgravity combustion experiment using high altitude balloon.

    NASA Astrophysics Data System (ADS)

    Kan, Yuji

At JAXA, a microgravity experiment system using a high-altitude balloon was developed to provide a good microgravity environment with a short turnaround time. This paper describes the microgravity experiment system and a combustion experiment that uses it. The balloon operated vehicle (BOV), a microgravity experiment system, was developed from 2004 to 2009. Features of the BOV are: (1) a double-capsule structure, in which the outside and inside capsules are kept in a non-contact state by 3-axis drag-free control; (2) a spherical payload about 300 mm in diameter; and (3) a 10^-4 G level microgravity environment maintained for about 30 seconds. However, the BOV's payload was small and could not accommodate a large experiment module. In this study, building on these past results, we established a new experimental system called "iBOV" to accommodate a larger payload. Features of the iBOV are: (1) drag-free control used only in the vertical direction; (2) a cylindrical payload about 300 mm in diameter and 700 mm in height; and (3) a 10^-3 to 10^-4 G level microgravity environment maintained for about 30 seconds. The "observation experiment of flame propagation behavior of a droplet column" was the first experiment selected for technical demonstration of the iBOV. We are studying the flame-propagation mechanism of a fuel-droplet array placed at regular intervals, and have conducted microgravity experiments using the ESA TEXUS rocket and a drop tower. For this microgravity combustion experiment using a high-altitude balloon, we use the Engineering Model (EM) from the TEXUS rocket experiment. The EM (this payload) consists of a combustion vessel, droplet supporter, droplet generator, fuel syringe, igniter, digital camera, and high-speed camera. The payload was improved from the EM as follows: 1. A control unit was added. 2. Internal batteries were added for the control unit and the combustion-vessel heater. 3.
The observation cameras were updated. In this experiment, we heat the air in the combustion vessel to 500 K before microgravity. During microgravity, we: (1) generate five droplets on the droplet supporter; (2) move the droplets into the combustion vessel; and (3) ignite an edge droplet of the array using the igniter. During the combustion experiment, the cameras record movies of the combustion phenomena. We plan to conduct this experiment in May 2014.

  7. Okayama optical polarimetry and spectroscopy system (OOPS) II. Network-transparent control software.

    NASA Astrophysics Data System (ADS)

    Sasaki, T.; Kurakami, T.; Shimizu, Y.; Yutani, M.

The control system of the OOPS (Okayama Optical Polarimetry and Spectroscopy system) is designed to integrate several instruments whose controllers are distributed over a network: the OOPS instrument, a CCD camera and data acquisition unit, the 91 cm telescope, an autoguider, a weather monitor, and the image display tool SAOimage. With the help of message-based communication, the control processes cooperate with related processes to perform an astronomical observation under supervisory control by a scheduler process. A logger process collects status data from all the instruments and distributes them to related processes upon request. The software structure of each process is described.
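The scheduler/logger pattern described above can be sketched in miniature. All class names and message strings here are hypothetical stand-ins, not the OOPS protocol, and a plain method call stands in for the network transport:

```python
class Instrument:
    """A networked controller that answers text messages (sketch)."""
    def __init__(self, name, status):
        self.name, self.status = name, status

    def handle(self, msg):
        # In the real system this request would arrive over the network
        if msg == "GET_STATUS":
            return f"{self.name} {self.status}"
        return "NAK"

instruments = [Instrument("telescope", "TRACKING"),
               Instrument("ccd", "EXPOSING"),
               Instrument("weather", "OK")]

# Logger role: poll every controller and keep the latest status,
# ready to hand out to any process that asks for it
status_table = {i.name: i.handle("GET_STATUS").split()[1] for i in instruments}
print(status_table)
```

Decoupling the instruments behind messages is what lets one scheduler supervise controllers that live on different machines.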

  8. Graphic Arts: Process Camera, Stripping, and Platemaking. Third Edition.

    ERIC Educational Resources Information Center

    Crummett, Dan

    This document contains teacher and student materials for a course in graphic arts concentrating on camera work, stripping, and plate making in the printing process. Eight units of instruction cover the following topics: (1) the process camera and darkroom equipment; (2) line photography; (3) halftone photography; (4) other darkroom techniques; (5)…

  9. A New Remote Sensing Filter Radiometer Employing a Fabry-Perot Etalon and a CCD Camera for Column Measurements of Methane in the Earth Atmosphere

    NASA Technical Reports Server (NTRS)

    Georgieva, E. M.; Huang, W.; Heaps, W. S.

    2012-01-01

A portable remote sensing system for precision column measurements of methane has been developed, built, and tested at NASA GSFC. The sensor covers the spectral range from 1.636 micrometers to 1.646 micrometers, employs an air-gapped Fabry-Perot filter and a CCD camera, and has the potential to operate from a variety of platforms. The detector is an XS-1.7-320 camera unit from Xenics Infrared Solutions, which incorporates an uncooled InGaAs detector array working up to 1.7 micrometers. Custom software was developed, in addition to the basic graphical user interface X-Control provided by the company, to help save and process the data. The technique and setup can be used to measure other trace gases in the atmosphere with minimal changes to the etalon and the prefilter. In this paper we describe the calibration of the system using several different approaches.
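The transmission of an ideal (lossless) Fabry-Perot etalon like the one described is given by the Airy function. A minimal sketch; the mirror reflectance and interference order below are illustrative, not the instrument's actual parameters:

```python
import math

def airy_transmission(wavelength_m, gap_m, reflectance, n=1.0, theta=0.0):
    """Ideal (lossless) Fabry-Perot transmission via the Airy function."""
    # Round-trip phase for a gap of index n at incidence angle theta
    delta = 4.0 * math.pi * n * gap_m * math.cos(theta) / wavelength_m
    # Coefficient of finesse set by the mirror reflectance
    F = 4.0 * reflectance / (1.0 - reflectance) ** 2
    return 1.0 / (1.0 + F * math.sin(delta / 2.0) ** 2)

# Illustrative air-gap etalon tuned inside the 1.636-1.646 um methane band
lam = 1.641e-6               # meters
m = 100                      # assumed interference order
gap = m * lam / 2.0          # gap that puts a transmission peak at lam
print(airy_transmission(lam, gap, reflectance=0.9))        # peak: 1.0
print(airy_transmission(1.6405e-6, gap, reflectance=0.9))  # off peak: well below 1
```

Sweeping the gap (or viewing angle) shifts the transmission peaks across the band, which is how such a filter isolates the methane absorption features.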

  10. Space Shuttle Projects

    NASA Image and Video Library

    2002-03-01

Carrying the STS-109 crew of seven, the Space Shuttle Orbiter Columbia blasted off from its launch pad as it began its 27th flight, the 108th flight overall in NASA's Space Shuttle Program. Launched March 1, 2002, the goal of the mission was the maintenance and upgrade of the Hubble Space Telescope (HST), which was developed, designed, and constructed by the Marshall Space Flight Center. Captured and secured on a work stand in Columbia's payload bay using Columbia's robotic arm, the HST received the following upgrades: replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with a new Advanced Camera for Surveys (ACS); and installation of an experimental cooling system for Hubble's Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. Four of the crewmembers performed 5 space walks during the 10 days, 22 hours, and 11 minutes of the STS-109 mission.

  11. The 1997 Spring Regression of the Martian South Polar Cap: Mars Orbiter Camera Observations

    USGS Publications Warehouse

    James, P.B.; Cantor, B.A.; Malin, M.C.; Edgett, K.; Carr, M.H.; Danielson, G.E.; Ingersoll, A.P.; Davies, M.E.; Hartmann, W.K.; McEwen, A.S.; Soderblom, L.A.; Thomas, P.C.; Veverka, J.

    2000-01-01

The Mars Orbiter Camera (MOC) on Mars Global Surveyor observed the south polar cap of Mars during its spring recession in 1997. The images acquired by the wide-angle cameras reveal a pattern of recession that is qualitatively similar to that observed by Viking in 1977 but that differs in at least two respects: the 1977 recession in the 0° to 120° longitude sector was accelerated relative to the 1997 observations after Ls = 240°, and the Mountains of Mitchel detached from the main cap earlier in 1997. Comparison of the MOC images with Mars Orbiter Laser Altimeter data shows that the Mountains of Mitchel feature is controlled by local topography. Relatively dark, low-albedo regions well within the boundaries of the seasonal cap were observed to have red-to-violet ratios that characterize them as frost units rather than unfrosted or partially frosted ground; this suggests the possibility of regions covered by CO2 frost with different grain sizes.

  12. 360 deg Camera Head for Unmanned Sea Surface Vehicles

    NASA Technical Reports Server (NTRS)

    Townsend, Julie A.; Kulczycki, Eric A.; Willson, Reginald G.; Huntsberger, Terrance L.; Garrett, Michael S.; Trebi-Ollennu, Ashitey; Bergh, Charles F.

    2012-01-01

    The 360 camera head consists of a set of six color cameras arranged in a circular pattern such that their overlapping fields of view give a full 360 view of the immediate surroundings. The cameras are enclosed in a watertight container along with support electronics and a power distribution system. Each camera views the world through a watertight porthole. To prevent overheating or condensation in extreme weather conditions, the watertight container is also equipped with an electrical cooling unit and a pair of internal fans for circulation.

  13. Comparison of three different techniques for camera and motion control of a teleoperated robot.

    PubMed

    Doisy, Guillaume; Ronen, Adi; Edan, Yael

    2017-01-01

This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, the use of head-tracking in a non-invasive way, without immersive virtual reality devices, was combined and compared with classical control modes for robot movements and camera control. Three control conditions were tested: 1) a condition with classical joystick control of both the movements of the robot and the robot camera, 2) a condition where the robot movements were controlled by a joystick and the robot camera was controlled by the user's head orientation, and 3) a condition where the movements of the robot were controlled by hand gestures and the robot camera was controlled by the user's head orientation. Performance, workload metrics, and their evolution as the participants gained experience with the system were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. High-resolution topomapping of candidate MER landing sites with Mars Orbiter Camera narrow-angle images

    USGS Publications Warehouse

    Kirk, R.L.; Howington-Kraus, E.; Redding, B.; Galuszka, D.; Hare, T.M.; Archinal, B.A.; Soderblom, L.A.; Barrett, J.M.

    2003-01-01

    We analyzed narrow-angle Mars Orbiter Camera (MOC-NA) images to produce high-resolution digital elevation models (DEMs) in order to provide topographic and slope information needed to assess the safety of candidate landing sites for the Mars Exploration Rovers (MER) and to assess the accuracy of our results by a variety of tests. The mapping techniques developed also support geoscientific studies and can be used with all present and planned Mars-orbiting scanner cameras. Photogrammetric analysis of MOC stereopairs yields DEMs with 3-pixel (typically 10 m) horizontal resolution, vertical precision consistent with ≈0.22 pixel matching errors (typically a few meters), and slope errors of 1-3°. These DEMs are controlled to the Mars Orbiter Laser Altimeter (MOLA) global data set and consistent with it at the limits of resolution. Photoclinometry yields DEMs with single-pixel (typically ≈3 m) horizontal resolution and submeter vertical precision. Where the surface albedo is uniform, the dominant error is 10-20% relative uncertainty in the amplitude of topography and slopes after "calibrating" photoclinometry against a stereo DEM to account for the influence of atmospheric haze. We mapped portions of seven candidate MER sites and the Mars Pathfinder site. Safety of the final four sites (Elysium, Gusev, Isidis, and Meridiani) was assessed by mission engineers by simulating landings on our DEMs of "hazard units" mapped in the sites, with results weighted by the probability of landing on those units; summary slope statistics show that most hazard units are smooth, with only small areas of etched terrain in Gusev crater posing a slope hazard.
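The quoted vertical precision follows from the standard stereo error relation: expected precision equals the image-matching error (in pixels) times the ground sample distance, divided by the stereo base-to-height ratio. The B/H ratio below is an assumed illustrative value, not a number from the abstract.

```python
# Worked example (representative values, not the paper's exact numbers):
# stereo DEM vertical precision = matching error [px] * GSD [m] / (B/H).

def vertical_precision_m(match_err_px, gsd_m, base_to_height):
    """Expected vertical precision of a stereo-derived DEM, in meters."""
    return match_err_px * gsd_m / base_to_height

# 0.22-pixel matching error, ~3 m MOC-NA pixels, assumed B/H of 0.4:
ep = vertical_precision_m(0.22, 3.0, 0.4)
print(round(ep, 2))  # 1.65 m -> "a few meters", consistent with the abstract
```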

  15. Improvement of the control of a gas metal arc welding process

    NASA Astrophysics Data System (ADS)

    Gött, Gregor; Schöpp, Heinz; Hofmann, Frank; Heinz, Gerd

    2010-02-01

    Up to now, the use of electrical characteristics for process control has been the state of the art in gas metal arc welding (GMAW). The aim of this work is the improvement of GMAW processes by using additional information from the arc. To this end, the light emitted by the arc is analysed spectroscopically and compared with high-speed camera images. From this information, conclusions can be drawn about the plasma arc and the droplet formation. By correlating the spectral and spatial information of the plasma, a specific control of the power supply can be applied. A corresponding spectral control unit (SCU) is introduced.

  16. A noninvasive technique for real-time detection of bruises in apple surface based on machine vision

    NASA Astrophysics Data System (ADS)

    Zhao, Juan; Peng, Yankun; Dhakal, Sagar; Zhang, Leilei; Sasao, Akira

    2013-05-01

    Apple is one of the most widely consumed fruits in daily life. However, because bruising strongly affects taste and export value, apple quality has to be assessed before the fruit reaches the consumer's hand. This study aimed to develop a hardware and software unit for real-time detection of apple bruises based on machine vision technology. The hardware unit consisted of a light shield fitted with two monochrome cameras at different angles, an LED light source to illuminate the sample, and sensors at the entrance of the box to signal the positioning of the sample. A Graphical User Interface (GUI) was developed on the VS2010 platform to control the overall hardware and display the image processing results. The hardware-software system was developed to acquire images of 3 samples from each camera and display the image processing results in real time. An image processing algorithm was developed in C++ using OpenCV. The software is able to control the hardware system and classify apples into two grades based on the presence or absence of surface bruises 5 mm or larger in size. The experimental results are promising, and the system, with further modification, could be applied to industrial production in the near future.
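The 5 mm size criterion above reduces to an area threshold in pixel space once the camera's scale is known. The sketch below illustrates that conversion with a simple dark-pixel test; the threshold, scale, and NumPy implementation are illustrative assumptions, not the authors' C++/OpenCV algorithm.

```python
import numpy as np

# Sketch (hypothetical threshold and scale): grade an apple image by flagging
# dark regions whose total area corresponds to a bruise of at least 5 mm.

def has_bruise(gray, mm_per_px=0.2, dark_thresh=60, min_diam_mm=5.0):
    """Return True if dark pixels cover at least a 5 mm-diameter bruise area."""
    dark_px = int(np.count_nonzero(gray < dark_thresh))
    min_area_mm2 = np.pi * (min_diam_mm / 2.0) ** 2      # ~19.6 mm^2
    dark_area_mm2 = dark_px * mm_per_px ** 2
    return dark_area_mm2 >= min_area_mm2

img = np.full((200, 200), 180, dtype=np.uint8)   # bright, unbruised surface
img[50:90, 50:90] = 30                            # synthetic 8 mm x 8 mm dark patch
print(has_bruise(img))                            # True
```

A real grader would also use connected-component analysis so that scattered dark pixels are not mistaken for one large bruise.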

  17. Infrared Camera Characterization of Bi-Propellant Reaction Control Engines during Auxiliary Propulsion Systems Tests at NASA's White Sands Test Facility in Las Cruces, New Mexico

    NASA Technical Reports Server (NTRS)

    Holleman, Elizabeth; Sharp, David; Sheller, Richard; Styron, Jason

    2007-01-01

    This paper describes the application of a FLIR Systems A40M infrared (IR) digital camera for thermal monitoring of a Liquid Oxygen (LOX) and Ethanol bi-propellant Reaction Control Engine (RCE) during Auxiliary Propulsion System (APS) testing at the National Aeronautics & Space Administration's (NASA) White Sands Test Facility (WSTF) near Las Cruces, New Mexico. Typically, NASA has relied mostly on thermocouples (TCs) for this type of thermal monitoring due to the variability of constraints required to accurately map rapidly changing temperatures from ambient to glowing hot chamber material. Obtaining accurate real-time temperatures in the IR spectrum is made even more elusive by the changing emissivity of the chamber material as it begins to glow. The parameters evaluated prior to APS testing included: (1) remote operation of the A40M camera using fiber optic FireWire signal sender and receiver units; (2) operation of the camera inside a Pelco explosion proof enclosure with a germanium window; (3) remote analog signal display for real-time monitoring; (4) remote digital data acquisition of the A40M's sensor information using FLIR's ThermaCAM Researcher Pro 2.8 software; and (5) overall reliability of the system. An initial characterization report was prepared after the A40M characterization tests at Marshall Space Flight Center (MSFC) to document controlled heat source comparisons to calibrated TCs. Summary IR digital data recorded from WSTF's APS testing is included within this document along with findings, lessons learned, and recommendations for further usage as a monitoring tool for the development of rocket engines.

  18. Speech versus manual control of camera functions during a telerobotic task

    NASA Technical Reports Server (NTRS)

    Bierschwale, John M.; Sampaio, Carlos E.; Stuart, Mark A.; Smith, Randy L.

    1989-01-01

    Voice input for control of camera functions was investigated in this study. Objectives were to (1) assess the feasibility of a voice-commanded camera control system, and (2) identify factors that differ between voice and manual control of camera functions. Subjects participated in a remote manipulation task that required extensive camera-aided viewing. Each subject was exposed to two conditions, voice and manual input, with a counterbalanced administration order. Voice input was found to be significantly slower than manual input for this task. However, in terms of remote manipulator performance errors and subject preference, there was no difference between modalities. Voice control of continuous camera functions is not recommended. It is believed that the use of voice input for discrete functions, such as multiplexing or camera switching, could aid performance. Hybrid mixes of voice and manual input may provide the best use of both modalities. This report contributes to a better understanding of the issues that affect the design of an efficient human/telerobot interface.

  19. Speech versus manual control of camera functions during a telerobotic task

    NASA Technical Reports Server (NTRS)

    Bierschwale, John M.; Sampaio, Carlos E.; Stuart, Mark A.; Smith, Randy L.

    1993-01-01

    This investigation has evaluated the voice-commanded camera control concept. For this particular task, total voice control of continuous and discrete camera functions was significantly slower than manual control. There was no significant difference between voice and manual input for several types of errors. There was not a clear trend in subjective preference of camera command input modality. Task performance, in terms of both accuracy and speed, was very similar across both levels of experience.

  20. Influence of Fault-Controlled Topography on Fluvio-Deltaic Sedimentary Systems in Eberswalde Crater, Mars

    NASA Technical Reports Server (NTRS)

    Rice, Melissa S.; Gupta, Sanjeev; Bell, James F., III; Warner, Nicholas H.

    2011-01-01

    Eberswalde crater was selected as a candidate landing site for the Mars Science Laboratory (MSL) mission based on the presence of a fan-shaped sedimentary deposit interpreted as a delta. We have identified and mapped five other candidate fluvio-deltaic systems in the crater, using images and digital terrain models (DTMs) derived from the Mars Reconnaissance Orbiter (MRO) High Resolution Imaging Science Experiment (HiRISE) and Context Camera (CTX). All of these systems consist of the same three stratigraphic units: (1) an upper layered unit, conformable with (2) a subpolygonally fractured unit, unconformably overlying (3) a pitted unit. We have also mapped a system of NNE-trending scarps interpreted as dip-slip faults that pre-date the fluvial-lacustrine deposits. The post-impact regional faulting may have generated the large-scale topography within the crater, which consists of a Western Basin, an Eastern Basin, and a central high. This topography subsequently provided depositional sinks for sediment entering the crater and controlled the geomorphic pattern of delta development.

  1. Quality controls for gamma cameras and PET cameras: development of a free open-source ImageJ program

    NASA Astrophysics Data System (ADS)

    Carlier, Thomas; Ferrer, Ludovic; Berruchon, Jean B.; Cuissard, Regis; Martineau, Adeline; Loonis, Pierre; Couturier, Olivier

    2005-04-01

    Acquisition data and treatments for quality controls of gamma cameras and Positron Emission Tomography (PET) cameras are commonly performed with dedicated program packages, which run only on the manufacturer's computers and differ from each other depending on camera company and program version. The aim of this work was to develop a free open-source program (written in the JAVA language) to analyze data for quality control of gamma cameras and PET cameras. The program is based on the free application software ImageJ and can be easily loaded on any computer operating system (OS), and thus on any type of computer in every nuclear medicine department. Based on standard quality control parameters, this program includes: 1) for gamma cameras, a rotation center control (extracted from the American Association of Physicists in Medicine, AAPM, norms) and two uniformity controls (extracted from the Institute of Physics and Engineering in Medicine, IPEM, and National Electrical Manufacturers Association, NEMA, norms); 2) for PET systems, three quality controls recently defined by the French Medical Physicist Society (SFPM), i.e. spatial resolution and uniformity in a reconstructed slice and scatter fraction. The determination of spatial resolution (from a Point Spread Function, PSF, acquisition) allows computation of the Modulation Transfer Function (MTF) for both camera modalities. All the control functions are included in a toolbox that is a free ImageJ plugin and will soon be downloadable from the Internet. In addition, the program can save the uniformity quality control results in HTML format, and a warning can be set to automatically inform users of abnormal results. The architecture of the program allows users to easily add any other specific quality control program. Finally, this toolkit is an easy and robust tool to perform quality control on gamma cameras and PET cameras based on standard computation parameters; it is free, runs on any type of computer, and will soon be downloadable from the net (http://rsb.info.nih.gov/ij/plugins or http://nucleartoolkit.free.fr).
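The PSF-to-MTF step mentioned in the abstract can be illustrated in a few lines: the one-dimensional MTF is the magnitude of the PSF's Fourier transform, normalized to unity at zero spatial frequency. This sketch uses a synthetic Gaussian PSF and Python/NumPy rather than the ImageJ/Java plugin itself.

```python
import numpy as np

# Sketch: compute a 1-D MTF from a sampled PSF (synthetic Gaussian here,
# not real camera data).

def mtf_from_psf(psf):
    """Normalized modulation transfer function from a sampled PSF."""
    otf = np.fft.fft(psf)                 # optical transfer function
    mtf = np.abs(otf)
    return mtf / mtf[0]                   # 1.0 at zero spatial frequency

x = np.arange(-32, 32)
psf = np.exp(-0.5 * (x / 2.0) ** 2)      # Gaussian PSF, sigma = 2 pixels
mtf = mtf_from_psf(psf)
print(round(float(mtf[0]), 3))            # 1.0 at DC; falls off with frequency
```

The magnitude of the FFT is insensitive to where the PSF sits in the window, so no centering is needed before the transform.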

  2. STS-109 Onboard Photo of Extra-Vehicular Activity (EVA)

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This is an onboard photo of Astronaut John M. Grunsfeld, STS-109 payload commander, participating in the third of five spacewalks to perform work on the Hubble Space Telescope (HST). On this particular walk, Grunsfeld, joined by Astronaut Richard M. Linnehan, turned off the telescope in order to replace its power control unit (PCU), the heart of the HST's power system. The telescope was captured and secured on a work stand in Columbia's payload bay using Columbia's robotic arm, where crew members completed system upgrades to the HST. Included in those upgrades were: replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. The Marshall Space Flight Center had the responsibility for the design, development, and construction of the HST, which is the most complex and sensitive optical telescope ever made, to study the cosmos from a low-Earth orbit. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than is visible from ground-based telescopes, perhaps as far away as 14 billion light-years. The HST views galaxies, stars, planets, comets, possibly other solar systems, and even unusual phenomena such as quasars, with 10 times the clarity of ground-based telescopes. Launched March 1, 2002, the STS-109 HST servicing mission lasted 10 days, 22 hours, and 11 minutes. It was the 108th flight overall in NASA's Space Shuttle Program.

  3. Space Shuttle Projects

    NASA Image and Video Library

    2002-03-06

    This is an onboard photo of Astronaut John M. Grunsfeld, STS-109 payload commander, participating in the third of five spacewalks to perform work on the Hubble Space Telescope (HST). On this particular walk, Grunsfeld, joined by Astronaut Richard M. Linnehan, turned off the telescope in order to replace its power control unit (PCU), the heart of the HST's power system. The telescope was captured and secured on a work stand in Columbia's payload bay using Columbia's robotic arm, where crew members completed system upgrades to the HST. Included in those upgrades were: replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. The Marshall Space Flight Center had the responsibility for the design, development, and construction of the HST, which is the most complex and sensitive optical telescope ever made, to study the cosmos from a low-Earth orbit. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than is visible from ground-based telescopes, perhaps as far away as 14 billion light-years. The HST views galaxies, stars, planets, comets, possibly other solar systems, and even unusual phenomena such as quasars, with 10 times the clarity of ground-based telescopes. Launched March 1, 2002, the STS-109 HST servicing mission lasted 10 days, 22 hours, and 11 minutes. It was the 108th flight overall in NASA's Space Shuttle Program.

  4. Optical Transient Monitor (OTM) for BOOTES Project

    NASA Astrophysics Data System (ADS)

    Páta, P.; Bernas, M.; Castro-Tirado, A. J.; Hudec, R.

    2003-04-01

    The Optical Transient Monitor (OTM) is software for controlling the three wide- and ultra-wide-field cameras of the BOOTES (Burst Observer and Optical Transient Exploring System) station. The OTM is PC-based and is a powerful tool for taking images from two SBIG CCD cameras simultaneously or from one camera only. The control program for the BOOTES cameras is Windows 98 or MS-DOS based; a version for Windows 2000 is now in preparation. There are five main supported modes of operation. The OTM program can control the cameras and evaluate image data without human interaction.

  5. A portable digital microphotography unit for rapid documentation of periungual nailfold capillary changes in autoimmune connective tissue diseases.

    PubMed

    Sontheimer, Richard D

    2004-03-01

    While employing a DermLite dermoscopy unit to assess pigment pattern networks in melanocytic skin lesions, it was observed that this compact, portable dermoscopy unit can also be used to quickly detect nailfold capillary changes when entertaining a diagnosis of autoimmune connective tissue diseases (CTD) such as dermatomyositis (DM), scleroderma/systemic sclerosis (SSc), or systemic lupus erythematosus. Aware that the suppliers of the DermLite dermoscopy unit also market a portable digital microphotography unit based on the DermLite optical principles for efficiently documenting cutaneous pigment network patterns, we investigated whether this unit (DermLite Foto flash unit attached to a Nikon Coolpix digital camera) might be used to photographically document nailfold capillary changes in patients with autoimmune CTD. A DermLite Foto flash unit attached to a Nikon Coolpix digital camera was used in a controlled observational study to obtain digital photographs of nailfold capillaries in a small sequential sample of patients with autoimmune CTD attending a rheumatic skin disease subspecialty clinic in an academic department of dermatology. The digital microphotography system proved to be highly useful in documenting the nailfold vascular changes observed in a small sample of patients with DM. We observed that the nailfold capillary changes seen in patients with clinically amyopathic DM were qualitatively and quantitatively similar to those seen in patients with classical DM. Digital microphotography systems designed for examining pigmented skin lesions can be used easily to document nailfold capillary changes often observed in DM and SSc. Nailfold capillary changes documented in this manner appear to be indistinguishable in clinically amyopathic DM and classical DM.

  6. Imaging system design and image interpolation based on CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Li, Yu-feng; Liang, Fei; Guo, Rui

    2009-11-01

    An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), a CPLD (EPM7128AE) and a DSP (TMS320VC5509A). The CPLD implements the logic and timing control of the system. The SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, decreases computational complexity, and effectively preserves image edges.
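The edge-oriented idea above can be sketched for a single green-channel sample: compare horizontal and vertical gradients, interpolate along the weaker-gradient direction at an edge, and fall back to a bilinear average on flat regions. The threshold and test values are hypothetical, and this is a simplification of the paper's algorithm, not a reimplementation.

```python
import numpy as np

# Sketch of the edge-oriented idea (simplified, hypothetical threshold):
# at a red/blue site in a Bayer mosaic, interpolate green along the weaker
# gradient direction; fall back to bilinear when no clear edge exists.

def interp_green(g, r, c, edge_thresh=10):
    """Estimate green at (r, c) from its four green neighbors."""
    dh = abs(int(g[r, c - 1]) - int(g[r, c + 1]))   # horizontal gradient
    dv = abs(int(g[r - 1, c]) - int(g[r + 1, c]))   # vertical gradient
    if abs(dh - dv) <= edge_thresh:                  # flat: bilinear average
        return (int(g[r, c - 1]) + int(g[r, c + 1])
                + int(g[r - 1, c]) + int(g[r + 1, c])) // 4
    if dh < dv:                                      # horizontal edge weaker
        return (int(g[r, c - 1]) + int(g[r, c + 1])) // 2
    return (int(g[r - 1, c]) + int(g[r + 1, c])) // 2

g = np.array([[0, 200, 0], [10, 0, 12], [0, 20, 0]], dtype=np.uint8)
print(interp_green(g, 1, 1))  # 11: interpolates along the low-gradient row
```

Averaging across the strong vertical edge here would instead give a badly smeared value, which is the artifact the edge-oriented branch avoids.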

  7. Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.

    PubMed

    Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K

    2014-02-01

    Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution.
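The pixel-wise coded exposure sampling described above can be modeled as a per-pixel temporal integration of the space-time volume under a binary exposure mask. The sketch below simulates that forward operator only (the dictionary-based reconstruction is omitted), with invented dimensions and a random per-pixel exposure window.

```python
import numpy as np

# Sketch of pixel-wise coded exposure (illustrative, not the paper's code):
# each pixel integrates the scene over its own on/off exposure pattern within
# one frame, collapsing a space-time volume into a single coded image.

def coded_exposure(volume, mask):
    """volume: (T, H, W) space-time block; mask: (T, H, W) binary exposure."""
    return (volume * mask).sum(axis=0)   # per-pixel temporal integration

rng = np.random.default_rng(0)
T, H, W = 8, 4, 4
volume = rng.random((T, H, W))
mask = np.zeros((T, H, W))
start = rng.integers(0, T - 2, size=(H, W))  # each pixel opens for 2 frames
for t in range(T):
    mask[t] = (t >= start) & (t < start + 2)
coded = coded_exposure(volume, mask)
print(coded.shape)  # (4, 4): one coded image from 8 frames
```

Recovering the full (T, H, W) volume from the single coded image is then posed as sparse reconstruction over a learned dictionary, which is where the paper's contribution lies.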

  8. 15 CFR 744.9 - Restrictions on certain exports and reexports of cameras controlled by ECCN 6A003.b.4.b.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... reexports of cameras controlled by ECCN 6A003.b.4.b. 744.9 Section 744.9 Commerce and Foreign Trade... on certain exports and reexports of cameras controlled by ECCN 6A003.b.4.b. (a) General prohibitions... license is required to export or reexport to any destination other than Canada cameras described in ECCN...

  9. 15 CFR 744.9 - Restrictions on certain exports and reexports of cameras controlled by ECCN 6A003.b.4.b.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... reexports of cameras controlled by ECCN 6A003.b.4.b. 744.9 Section 744.9 Commerce and Foreign Trade... on certain exports and reexports of cameras controlled by ECCN 6A003.b.4.b. (a) General prohibitions... license is required to export or reexport to any destination other than Canada cameras described in ECCN...

  10. 15 CFR 744.9 - Restrictions on certain exports and reexports of cameras controlled by ECCN 6A003.b.4.b.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... reexports of cameras controlled by ECCN 6A003.b.4.b. 744.9 Section 744.9 Commerce and Foreign Trade... on certain exports and reexports of cameras controlled by ECCN 6A003.b.4.b. (a) General prohibitions... license is required to export or reexport to any destination other than Canada cameras described in ECCN...

  11. Energy-efficient lighting system for television

    DOEpatents

    Cawthorne, Duane C.

    1987-07-21

    A light control system for a television camera comprises an artificial light control system that cooperates with an iris control system. The artificial light control system adjusts the power to lamps illuminating the camera viewing area to provide only the artificial illumination necessary for a sufficient video signal when the camera iris is substantially open.
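The cooperative behavior described in the patent abstract can be sketched as one step of a simple feedback loop: lamp power rises only when the iris is already nearly wide open yet the video level is still below target. All setpoints, thresholds, and the step size are hypothetical illustration values.

```python
# Sketch of the cooperative control idea (hypothetical gains and setpoints):
# raise lamp power only when the iris is substantially open and the video
# signal is still below its target level.

def lamp_power_step(power, iris_open_frac, video_level,
                    target=0.7, iris_thresh=0.9, step=0.05):
    """One control iteration; power is clamped to [0, 1]."""
    if iris_open_frac >= iris_thresh and video_level < target:
        power += step            # scene too dark even with iris open: add light
    elif video_level > target:
        power -= step            # enough signal: save energy
    return min(1.0, max(0.0, power))

p = lamp_power_step(0.5, iris_open_frac=0.95, video_level=0.4)
print(p)  # 0.55: lamps brighten because the iris alone cannot compensate
```

Deferring to the iris first is what makes the scheme energy-efficient: lamps draw extra power only when optical gain is exhausted.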

  12. LSST camera control system

    NASA Astrophysics Data System (ADS)

    Marshall, Stuart; Thaler, Jon; Schalk, Terry; Huffer, Michael

    2006-06-01

    The LSST Camera Control System (CCS) will manage the activities of the various camera subsystems and coordinate those activities with the LSST Observatory Control System (OCS). The CCS comprises a set of modules (nominally implemented in software) which are each responsible for managing one camera subsystem. Generally, a control module will be a long lived "server" process running on an embedded computer in the subsystem. Multiple control modules may run on a single computer or a module may be implemented in "firmware" on a subsystem. In any case control modules must exchange messages and status data with a master control module (MCM). The main features of this approach are: (1) control is distributed to the local subsystem level; (2) the systems follow a "Master/Slave" strategy; (3) coordination will be achieved by the exchange of messages through the interfaces between the CCS and its subsystems. The interface between the camera data acquisition system and its downstream clients is also presented.
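The master/slave pattern described above — subsystem modules exchanging messages and status with a master control module — can be sketched minimally as follows. The class names, command strings, and reply format are hypothetical illustrations, not the actual CCS API.

```python
# Minimal sketch of the master/slave message pattern described above
# (class and message names are hypothetical, not the actual CCS interfaces).

class SubsystemModule:
    """A long-lived control module responsible for one camera subsystem."""
    def __init__(self, name):
        self.name, self.status = name, "idle"

    def handle(self, command):
        self.status = command                 # e.g. "configure", "expose"
        return {"from": self.name, "status": self.status}

class MasterControlModule:
    """Coordinates subsystem modules by exchanging messages with each."""
    def __init__(self):
        self.modules = {}

    def register(self, module):
        self.modules[module.name] = module

    def broadcast(self, command):
        """Send a command to every subsystem and collect status replies."""
        return [m.handle(command) for m in self.modules.values()]

mcm = MasterControlModule()
for name in ("shutter", "focal-plane", "thermal"):
    mcm.register(SubsystemModule(name))
print(mcm.broadcast("configure"))
```

In the real system each module would be a separate process on an embedded computer (or firmware), with the message exchange crossing the CCS-subsystem interfaces rather than in-process calls.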

  13. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to the coordinates of a selected or nonselected camera.

  14. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to the coordinates of a selected or nonselected camera.

  15. A telephoto camera system with shooting direction control by gaze detection

    NASA Astrophysics Data System (ADS)

    Teraya, Daiki; Hachisu, Takumi; Yendo, Tomohiro

    2015-05-01

    For safe driving, it is important for the driver to check traffic conditions, such as traffic lights or traffic signs, as early as possible. If an on-vehicle camera captures images of the important objects needed to understand traffic conditions from a long distance and shows them to the driver, the driver can understand traffic conditions earlier. To image distant objects clearly, the focal length of the camera must be long. However, with a long focal length, an on-vehicle camera does not have a sufficient field of view to check traffic conditions. Therefore, in order to obtain the necessary images from a long distance, the camera must have a long focal length and a controllable shooting direction. In a previous study, the driver indicated the shooting direction on a displayed image taken by a wide-angle camera, and a direction-controllable camera took a telescopic image and displayed it to the driver. However, in that study the driver used a touch panel to indicate the shooting direction, which can disturb driving. We therefore propose a telephoto camera system for driving support whose shooting direction is controlled by the driver's gaze, to avoid disturbing driving. The proposed system is composed of a gaze detector and an active telephoto camera whose shooting direction is controlled. We adopt a non-wearable detection method to avoid hindering driving. The gaze detector measures the driver's gaze by image processing. The shooting direction of the active telephoto camera is controlled by galvanometer scanners, and the direction can be switched within a few milliseconds. Experiments confirmed that the proposed system captures images in the direction of the subject's gaze.
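Steering the telephoto axis with galvanometer scanners amounts to converting the measured gaze direction into mirror commands; a mirror deflects the optical axis by twice its mechanical rotation, so the mechanical angle is half the desired gaze angle. The function below is a simplified sketch with an assumed mechanical limit, not the authors' controller.

```python
# Sketch (simplified optics, hypothetical limit): convert a measured gaze
# direction into galvanometer-scanner commands. A mirror deflects the optical
# axis by twice its mechanical rotation, so the command is half the angle.

def galvo_command_deg(gaze_az_deg, gaze_el_deg, mech_limit_deg=20.0):
    """Mechanical pan/tilt mirror angles for a desired shooting direction."""
    clamp = lambda v: max(-mech_limit_deg, min(mech_limit_deg, v))
    return clamp(gaze_az_deg / 2.0), clamp(gaze_el_deg / 2.0)

print(galvo_command_deg(30.0, -10.0))  # (15.0, -5.0)
```

Because galvanometer mirrors have very low inertia, such commands can settle within milliseconds, consistent with the switching time quoted in the abstract.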

  16. STS-31 MS Sullivan and Pilot Bolden monitor SE 82-16 Ion Arc on OV-103 middeck

    NASA Technical Reports Server (NTRS)

    1990-01-01

    STS-31 Mission Specialist (MS) Kathryn D. Sullivan monitors and advises ground controllers of the activity inside the Student Experiment (SE) 82-16, Ion arc - studies of the effects of microgravity and a magnetic field on an electric arc, mounted in front of the middeck lockers aboard Discovery, Orbiter Vehicle (OV) 103. Pilot Charles F. Bolden uses a video camera and an ARRIFLEX motion picture camera to record the activity inside the special chamber. A sign in front of the experiment reads 'SSIP 82-16 Greg's Experiment Happy Graduation from STS-31.' SSIP stands for Shuttle Student Involvement Program. Gregory S. Peterson who developed the experiment (Greg's Experiment) is a student at Utah State University and monitored the experiment's operation from JSC's Mission Control Center (MCC) during the flight. Decals displayed in the background on the orbiter galley represent the Hubble Space Telescope (HST), the United States (U.S.) Naval Reserve, Navy Oceanographers, U.S. Navy, and Univer

  17. Concentration solar power optimization system and method of using the same

    DOEpatents

    Andraka, Charles E

    2014-03-18

    A system and method for optimizing at least one mirror of at least one CSP system is provided. The system has a screen for displaying light patterns for reflection by the mirror, a camera for receiving a reflection of the light patterns from the mirror, and a solar characterization tool. The solar characterization tool has a characterizing unit for determining at least one mirror parameter of the mirror based on an initial position of the camera and the screen, and a refinement unit for refining the determined parameter(s) based on an adjusted position of the camera and screen whereby the mirror is characterized. The system may also be provided with a solar alignment tool for comparing at least one mirror parameter of the mirror to a design geometry whereby an alignment error is defined, and at least one alignment unit for adjusting the mirror to reduce the alignment error.

  18. Setup for testing cameras for image guided surgery using a controlled NIR fluorescence mimicking light source and tissue phantom

    NASA Astrophysics Data System (ADS)

    Georgiou, Giota; Verdaasdonk, Rudolf M.; van der Veen, Albert; Klaessens, John H.

    2017-02-01

    In the development of new near-infrared (NIR) fluorescence dyes for image guided surgery, there is a need for NIR-sensitive camera systems that can easily be adjusted to specific wavelength ranges, in contrast to present clinical systems that are optimized only for ICG. To test alternative camera systems, a setup was developed to mimic the fluorescence light in a tissue phantom and measure sensitivity and resolution. Selected narrow-band NIR LEDs were used to illuminate a 6 mm diameter circular diffuse plate to create a uniform, intensity-controllable light spot (μW-mW) as a target/source for NIR cameras. Layers of (artificial) tissue with controlled thickness could be placed on the spot to mimic a fluorescent 'cancer' embedded in tissue. This setup was used to compare a range of NIR-sensitive consumer cameras for potential use in image guided surgery. The image of the spot obtained with each camera was captured and analyzed using ImageJ software. Enhanced CCD night-vision cameras were the most sensitive, capable of showing intensities < 1 μW through 5 mm of tissue. However, there was no control over the automatic gain and hence the noise level. NIR-sensitive DSLR cameras proved relatively less sensitive but could be fully manually controlled as to gain (ISO 25600) and exposure time, and are therefore preferred for a clinical setting in combination with Wi-Fi remote control. The NIR fluorescence testing setup proved useful for camera testing and can be used for development and quality control of new NIR fluorescence guided surgery equipment.

  19. Autonomous microsystems for ground observation (AMIGO)

    NASA Astrophysics Data System (ADS)

    Laou, Philips

    2005-05-01

This paper reports the development of a prototype autonomous surveillance microsystem, AMIGO, that can be used for remote surveillance. Each AMIGO unit is equipped with various sensors and electronics, including a passive infrared motion sensor, an acoustic sensor, an uncooled IR camera, an electronic compass, a global positioning system (GPS) receiver, and a spread spectrum wireless transceiver. The AMIGO units were configured for multipoint (AMIGO units)-to-point (base station) communication. In addition, field trials were conducted with AMIGO in various scenarios, including personnel and vehicle intrusion detection (motion or sound) and target imaging; determination of target GPS position by triangulation; real-time GPS position tracking; entrance event counting; indoor surveillance; and aerial surveillance from a radio-controlled model plane. The architecture and test results of AMIGO are presented.

  20. Head-coupled remote stereoscopic camera system for telepresence applications

    NASA Astrophysics Data System (ADS)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.

  1. Fiber optic TV direct

    NASA Technical Reports Server (NTRS)

    Kassak, John E.

    1991-01-01

The objective of the operational television (OTV) technology was to develop a multiple-camera system (up to 256 cameras) for NASA Kennedy installations in which camera video, synchronization, control, and status data are transmitted bidirectionally via a single fiber cable at distances in excess of five miles. It is shown that the benefits (such as improved video performance, immunity from electromagnetic interference and radio frequency interference, elimination of repeater stations, and more system configuration flexibility) can be realized by applying the proven fiber optic transmission concept. The control system will marry the lens, pan and tilt, and camera control functions into a modular, Local Area Network (LAN) based control network. Such a system does not exist commercially at present, since the Television Broadcast Industry's current practice is to divorce the positional controls from the camera control system. The application software developed for this system will have direct applicability to similar systems in industry using LAN-based control systems.

  2. Safety evaluation of red-light cameras

    DOT National Transportation Integrated Search

    2005-04-01

    The objective of this final study was to determine the effectiveness of red-light-camera (RLC) systems in reducing crashes. The study used empirical Bayes before-and-after research using data from seven jurisdictions across the United States at 132 t...

  3. 15 CFR 742.4 - National security.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Requirements” section except those cameras in ECCN 6A003.b.4.b that have a focal plane array with 111,000 or..., South Korea, Spain, Sweden, Switzerland, Turkey, and the United Kingdom for those cameras in ECCN 6A003...

  4. 15 CFR 742.4 - National security.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Requirements” section except those cameras in ECCN 6A003.b.4.b that have a focal plane array with 111,000 or..., South Korea, Spain, Sweden, Switzerland, Turkey, and the United Kingdom for those cameras in ECCN 6A003...

  5. 15 CFR 742.4 - National security.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Requirements” section except those cameras in ECCN 6A003.b.4.b that have a focal plane array with 111,000 or..., South Korea, Spain, Sweden, Switzerland, Turkey, and the United Kingdom for those cameras in ECCN 6A003...

  6. 15 CFR 742.4 - National security.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Requirements” section except those cameras in ECCN 6A003.b.4.b that have a focal plane array with 111,000 or..., South Korea, Spain, Sweden, Switzerland, Turkey, and the United Kingdom for those cameras in ECCN 6A003...

  7. You can't touch this: touch-free navigation through radiological images.

    PubMed

    Ebert, Lars C; Hatch, Gary; Ampanozi, Garyfalia; Thali, Michael J; Ross, Steffen

    2012-09-01

Keyboards, mice, and touch screens are a potential source of infection or contamination in operating rooms, intensive care units, and autopsy suites. The authors present a low-cost prototype of a system that allows for touch-free control of a medical image viewer. This touch-free navigation system consists of a computer system (iMac, OS X 10.6, Apple, USA) with a medical image viewer (OsiriX, OsiriX Foundation, Switzerland) and a depth camera (Kinect, Microsoft, USA). They implemented software that translates the data delivered by the camera, together with voice recognition software, into keyboard and mouse commands, which are then passed to OsiriX. In this feasibility study, the authors introduced 10 medical professionals to the system and asked them to re-create 12 images from a CT data set. They evaluated response times and usability of the system compared with standard mouse/keyboard control. Users felt comfortable with the system after approximately 10 minutes. Response time was 120 ms. Users required 1.4 times more time to re-create an image with gesture control. Users with OsiriX experience were significantly faster using the mouse/keyboard and faster than users without prior experience. They rated the system 3.4 out of 5 for ease of use in comparison to the mouse/keyboard. The touch-free, gesture-controlled system performs favorably and removes a potential vector for infection, protecting both patients and staff. Because the camera can be quickly and easily integrated into existing systems, requires no calibration, and is low cost, the barriers to using this technology are low.

  8. Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry

    NASA Technical Reports Server (NTRS)

    Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)

    2016-01-01

    A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine first positions and the second positions of the corresponding features using the first camera image and the second camera image. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.
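The core step in the patent's pipeline — recovering the machine's change in position from two sets of matched, range-augmented feature positions — can be sketched generically. This is not the inventors' implementation; the function name and the 0.1 s frame interval are illustrative, and the rigid-body fit shown is the standard Kabsch algorithm:

```python
import numpy as np

def estimate_motion(p1, p2):
    """Estimate rotation R and translation t mapping feature positions
    p1 (first frame) onto p2 (second frame) via the Kabsch algorithm."""
    c1, c2 = p1.mean(axis=0), p2.mean(axis=0)
    H = (p1 - c1).T @ (p2 - c2)                 # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c2 - R @ c1
    return R, t

# Synthetic check: a pure 0.5 m translation along x between camera frames
pts = np.random.rand(20, 3)
R, t = estimate_motion(pts, pts + np.array([0.5, 0.0, 0.0]))
velocity = np.linalg.norm(t) / 0.1              # assumed 0.1 s frame interval
```

In the patented system the 3-D positions would come from image features whose depth is filled in by the range detection unit; the VO-based velocity is the recovered displacement divided by the frame interval.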

  9. AERCam Autonomy: Intelligent Software Architecture for Robotic Free Flying Nanosatellite Inspection Vehicles

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven E.; Duran, Steve G.; Braun, Angela N.; Straube, Timothy M.; Mitchell, Jennifer D.

    2006-01-01

The NASA Johnson Space Center has developed a nanosatellite-class Free Flyer intended for future external inspection and remote viewing of human spacecraft. The Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) technology demonstration unit has been integrated into the approximate form and function of a flight system. The spherical Mini AERCam Free Flyer is 7.5 inches in diameter and weighs approximately 10 pounds, yet it incorporates significant additional capabilities compared to the 35-pound, 14-inch diameter AERCam Sprint that flew as a Shuttle flight experiment in 1997. Mini AERCam hosts a full suite of miniaturized avionics, instrumentation, communications, navigation, power, propulsion, and imaging subsystems, including digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations, including automatic stationkeeping, point-to-point maneuvering, and waypoint tracking. The Mini AERCam Free Flyer is accompanied by a sophisticated control station for command and control, as well as a docking system for automated deployment, docking, and recharge at a parent spacecraft. Free Flyer functional testing has been conducted successfully on both an airbearing table and in a six-degree-of-freedom closed-loop orbital simulation with avionics hardware in the loop. Mini AERCam aims to provide beneficial on-orbit views that cannot be obtained from fixed cameras, cameras on robotic manipulators, or cameras carried by crewmembers during extravehicular activities (EVAs). On Shuttle or International Space Station (ISS), for example, Mini AERCam could support external robotic operations by supplying orthogonal views to the intravehicular activity (IVA) robotic operator, supplying views of EVA operations to IVA and/or ground crews monitoring the EVA, and carrying out independent visual inspections of areas of interest around the spacecraft.
To enable these future benefits with minimal impact on IVA operators and ground controllers, the Mini AERCam system architecture incorporates intelligent systems attributes that support various autonomous capabilities. 1) A robust command sequencer enables task-level command scripting. Command scripting is employed for operations such as automatic inspection scans over a region of interest, and operator-hands-off automated docking. 2) A system manager built on the same expert-system software as the command sequencer provides detection and smart-response capability for potential system-level anomalies, like loss of communications between the Free Flyer and control station. 3) An AERCam dynamics manager provides nominal and off-nominal management of guidance, navigation, and control (GN&C) functions. It is employed for safe trajectory monitoring, contingency maneuvering, and related roles. This paper will describe these architectural components of Mini AERCam autonomy, as well as the interaction of these elements with a human operator during supervised autonomous control.

  10. Assessing the Reliability and the Accuracy of Attitude Extracted from Visual Odometry for LIDAR Data Georeferencing

    NASA Astrophysics Data System (ADS)

    Leroux, B.; Cali, J.; Verdun, J.; Morel, L.; He, H.

    2017-08-01

Airborne LiDAR systems require Direct Georeferencing (DG) in order to compute the coordinates of the surveyed points in the mapping frame. A UAV platform is no exception to this need, but its payload has to be lighter than that installed on board manned aircraft, so the manufacturer needs to find an alternative to heavy sensors and navigation systems. For the georeferencing of these data, a possible solution is to replace the Inertial Measurement Unit (IMU) with a camera and record the optical flow. The different frames are then processed photogrammetrically so as to extract the External Orientation Parameters (EOP) and, therefore, the path of the camera. The major advantages of this method, called Visual Odometry (VO), are its low cost, the absence of IMU-induced drift, and the option of using Ground Control Points (GCPs), as in airborne photogrammetric surveys. In this paper we present a test bench designed to assess the reliability and accuracy of the attitude estimated from VO outputs. The test bench consists of a trolley which carries a GNSS receiver, an IMU sensor, and a camera. The LiDAR is replaced by a tacheometer in order to survey control points whose coordinates are already known. We have also developed a methodology, applied to this test bench, for the calibration of the external parameters and the computation of the surveyed point coordinates. Several tests have revealed a difference of about 2-3 centimeters between the measured control point coordinates and the known values.

  11. A direct-view customer-oriented digital holographic camera

    NASA Astrophysics Data System (ADS)

    Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.

    2018-01-01

    In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.
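The abstract does not spell out the reconstruction algorithm, but off-axis holograms of this kind are commonly reconstructed by isolating one interference sideband in the Fourier domain. The sketch below is a generic illustration of that standard technique, not the authors' code; the carrier frequency and filter radius are assumptions:

```python
import numpy as np

def reconstruct_off_axis(hologram, carrier=(40, 0), r=15):
    """Recover the complex object field from an off-axis hologram by
    windowing the +1 interference order in the Fourier domain."""
    F = np.fft.fftshift(np.fft.fft2(hologram))
    n, m = F.shape
    cy, cx = n // 2 + carrier[0], m // 2 + carrier[1]
    mask = np.zeros_like(F)
    mask[cy - r:cy + r, cx - r:cx + r] = 1       # window around the +1 order
    # shift the sideband back to the spectrum centre (removes the carrier)
    F1 = np.roll(F * mask, (-carrier[0], -carrier[1]), axis=(0, 1))
    field = np.fft.ifft2(np.fft.ifftshift(F1))
    return np.abs(field), np.angle(field)        # amplitude and phase images

# Synthetic hologram: unit-contrast fringes with a constant object phase of 0.5 rad
N = 128
rows = np.arange(N)[:, None]
holo = 1 + np.cos(2 * np.pi * 40 * rows / N + 0.5) * np.ones((1, N))
amp, phase = reconstruct_off_axis(holo)
```

For the synthetic fringes above, the recovered phase image is constant at 0.5 rad, matching the phase encoded in the hologram.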

  12. Status of the photomultiplier-based FlashCam camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Pühlhofer, G.; Bauer, C.; Eisenkolb, F.; Florin, D.; Föhr, C.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Koziol, J.; Lahmann, R.; Manalaysay, A.; Marszalek, A.; Rajda, P. J.; Reimer, O.; Romaszkan, W.; Rupinski, M.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Weitzel, Q.; Winiarski, K.; Zietara, K.

    2014-07-01

The FlashCam project is preparing a camera prototype around a fully digital FADC-based readout system for the medium-sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The FlashCam design is the first fully digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components for digitization and triggering, and a high-performance camera server as back end. It provides the option to easily implement different types of trigger algorithms as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. The readout of the front end modules into the camera server is Ethernet-based, using standard Ethernet switches and a custom raw Ethernet protocol. In the current implementation of the system, data transfer and back end processing rates of 3.8 GB/s and 2.4 GB/s have been achieved, respectively. Together with the dead-time-free front end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, high voltage, control, and monitoring systems, is a self-contained unit, mechanically detached from the front end modules. It interfaces to the digital readout system via analogue signal transmission. The horizontal integration of FlashCam is expected not only to be more cost efficient but also to allow PDPs with different types of photon detectors to be adapted to the FlashCam readout system. By now, a 144-pixel "mini-camera" setup, fully equipped with photomultipliers, PDP electronics, and digitization/trigger electronics, has been realized and extensively tested. Preparations of the mechanics and cooling system for a full-scale, 1764-pixel camera are ongoing. The paper describes the status of the project.

  13. An arc control and protection system for the JET lower hybrid antenna based on an imaging system.

    PubMed

    Figueiredo, J; Mailloux, J; Kirov, K; Kinna, D; Stamp, M; Devaux, S; Arnoux, G; Edwards, J S; Stephen, A V; McCullen, P; Hogben, C

    2014-11-01

Arcs are potentially the most dangerous events related to Lower Hybrid (LH) antenna operation. If left uncontrolled, they can produce damage and cause plasma disruption by impurity influx. To address this issue, an arc real-time control and protection imaging system for the Joint European Torus (JET) LH antenna has been implemented. The LH system is one of the additional heating systems at JET. It comprises 24 microwave generators (klystrons, operating at 3.7 GHz) providing up to 5 MW of heating and current drive to the JET plasma. This is done through an antenna composed of an array of waveguides facing the plasma. The protection system presented here is based primarily on an imaging arc detection and real-time control system. It has adapted the ITER-like wall hotspot protection system, using an identical CCD camera and real-time image processing unit. A filter has been installed to avoid saturation and spurious system triggers caused by ionization light. The antenna is divided into 24 Regions Of Interest (ROIs), each one corresponding to one klystron. If an arc precursor is detected in a ROI, power is reduced locally, avoiding potential damage and plasma disruption. The power is subsequently reinstated if, during a defined interval of time, image analysis confirms that arcing is not present. This system was successfully commissioned during the restart phase and the beginning of the 2013 scientific campaign. Since its installation and commissioning, arcs and related phenomena have been prevented. In this contribution we briefly describe the camera, image processing, and real-time control systems. Most importantly, we demonstrate that an LH antenna arc protection system based on CCD camera imaging works. Examples of both controlled and uncontrolled LH arc events and their consequences are shown.
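The per-klystron ROI logic described above — one region per klystron, with local power reduction when a bright precursor appears — can be sketched as follows. This is a simplified illustration, not the JET code; the frame size, vertical-strip ROI geometry, and intensity threshold are assumptions:

```python
import numpy as np

def detect_arc_precursors(frame, n_rois=24, threshold=200.0):
    """Split a camera frame into vertical ROI strips (one per klystron)
    and flag any ROI whose peak intensity exceeds the arc threshold."""
    strips = np.array_split(frame, n_rois, axis=1)
    return [i for i, s in enumerate(strips) if s.max() > threshold]

frame = np.full((288, 384), 50.0)       # quiescent background
frame[100:110, 70:75] = 250.0           # bright spot: simulated arc precursor
flagged = detect_arc_precursors(frame)  # -> [4]: columns 70-74 fall in strip 4
```

Power would then be reduced only on the klystrons feeding the flagged ROIs, and reinstated once subsequent frames no longer flag them.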

  14. The South African Astronomical Observatory instrumentation software architecture and the SHOC instruments

    NASA Astrophysics Data System (ADS)

    van Gend, Carel; Lombaard, Briehan; Sickafoose, Amanda; Whittal, Hamish

    2016-07-01

Until recently, software for instruments on the smaller telescopes at the South African Astronomical Observatory (SAAO) has not been designed for remote accessibility and frequently has not been developed using modern software best practice. We describe a software architecture we have implemented for use with new and upgraded instruments at the SAAO. The architecture was designed to allow for multiple components and to be fast, reliable, remotely operable, support different user interfaces, employ as much non-proprietary software as possible, and take future-proofing into consideration. Individual component drivers exist as standalone processes, communicating over a network. A controller layer coordinates the various components and allows a variety of user interfaces to be used. The Sutherland High-speed Optical Cameras (SHOC) instruments incorporate an Andor electron-multiplying CCD camera, a GPS unit for accurate timing, and a pair of filter wheels. We have applied the new architecture to the SHOC instruments, with the camera driver developed using Andor's software development kit. We have used this to develop an innovative web-based user interface to the instrument.

  15. Space Shuttle Projects

    NASA Image and Video Library

    2002-03-08

After five days of service and upgrade work on the Hubble Space Telescope (HST), the STS-109 crew photographed the giant telescope in the shuttle's cargo bay. The telescope was captured and secured on a work stand in Columbia's payload bay using Columbia's robotic arm, where 4 of the 7-member crew performed 5 space walks completing system upgrades to the HST. Included in those upgrades were: the replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. The Marshall Space Flight Center had the responsibility for the design, development, and construction of the HST, which is the most complex and sensitive optical telescope ever made, to study the cosmos from a low-Earth orbit. Launched March 1, 2002, the STS-109 HST servicing mission lasted 10 days, 22 hours, and 11 minutes. It was the 108th flight overall in NASA's Space Shuttle Program.

  16. n/a

    NASA Image and Video Library

    2002-03-09

After five days of service and upgrade work on the Hubble Space Telescope (HST), the STS-109 crew photographed the giant telescope returning to its normal routine. The telescope was captured and secured on a work stand in Columbia's payload bay using Columbia's robotic arm, where 4 of the 7-member crew performed 5 space walks completing system upgrades to the HST. Included in those upgrades were: the replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. The Marshall Space Flight Center had the responsibility for the design, development, and construction of the HST, which is the most complex and sensitive optical telescope ever made, to study the cosmos from a low-Earth orbit. Launched March 1, 2002, the STS-109 HST servicing mission lasted 10 days, 22 hours, and 11 minutes. It was the 108th flight overall in NASA's Space Shuttle Program.

  17. Mitsubishi thermal imager using the 512 x 512 PtSi focal plane arrays

    NASA Astrophysics Data System (ADS)

    Fujino, Shotaro; Miyoshi, Tetsuo; Yokoh, Masataka; Kitahara, Teruyoshi

    1990-01-01

MITSUBISHI THERMAL IMAGER model IR-5120A is a high-resolution, high-sensitivity infrared television imaging system. It was exhibited at SPIE's 1988 Technical Symposium on Optics, Electro-Optics, and Sensors, held in April 1988 in Orlando, where its high performance attracted the interest of many attendees. The detector is a Platinum Silicide Charge Sweep Device (CSD) array containing more than 260,000 individual pixels, manufactured by Mitsubishi Electric Co. The IR-5120A consists of a Camera Head, containing the CSD, a Stirling cycle cooler, and support electronics, and a Camera Control Unit, containing the pixel fixed-pattern-noise corrector, video controller, cooler driver, and support power supplies. The Stirling cycle cooler built into the Camera Head keeps the CSD at a temperature of approximately 80 K, and features light weight, a long life of more than 2000 hours, and low acoustic noise. This paper describes an improved Thermal Imager with lighter weight, more compact size, and higher performance, together with its design philosophy, characteristics, and field imagery.

  18. Hardware platform for multiple mobile robots

    NASA Astrophysics Data System (ADS)

    Parzhuber, Otto; Dolinsky, D.

    2004-12-01

This work is concerned with software and communications architectures that might facilitate the operation of several mobile robots. The vehicles can be remotely piloted or tele-operated via a wireless link between the operator and the vehicles. The wireless link carries control commands from the operator to the vehicle, telemetry data from the vehicle back to the operator, and frequently also a real-time video stream from an onboard camera. For autonomous driving, the link carries commands and data between the vehicles. For this purpose we have developed a hardware platform which consists of a powerful microprocessor, various sensors, a stereo camera, and a Wireless Local Area Network (WLAN) interface for communication. The adoption of the IEEE 802.11 standard for the physical and access-layer protocols allows straightforward integration with the internet protocols TCP/IP. For inspection of the environment, the robots are equipped with a wide variety of sensors, such as ultrasonic and infrared proximity sensors and a small inertial measurement unit. Stereo cameras enable the detection of obstacles, measurement of distances, and creation of a map of the room.

  19. Nonuniformity correction based on focal plane array temperature in uncooled long-wave infrared cameras without a shutter.

    PubMed

    Liang, Kun; Yang, Cailan; Peng, Li; Zhou, Bo

    2017-02-01

In uncooled long-wave IR camera systems, the temperature of the focal plane array (FPA) varies with the environmental temperature as well as the operating time. The spatial nonuniformity of the FPA, which is partly affected by the FPA temperature, changes noticeably as well, resulting in reduced image quality. This study presents a real-time nonuniformity correction algorithm based on FPA temperature to compensate for nonuniformity caused by FPA temperature fluctuation. First, gain coefficients are calculated using a two-point correction technique. Then offset parameters at different FPA temperatures are obtained and stored in tables. When the camera operates, the offset tables are looked up to update the current offset parameters via temperature-dependent interpolation. Finally, the gain coefficients and offset parameters are used to correct the output of the IR camera in real time. The proposed algorithm is evaluated and compared with two representative shutterless algorithms [the minimizing-the-sum-of-squared-errors algorithm (MSSE) and the template-based solution algorithm (TBS)] using IR images captured by a 384×288 pixel uncooled IR camera with a 17 μm pitch. Experimental results show that this method can quickly track the response drift of the detector units when the FPA temperature changes. The correction quality of the proposed algorithm is as good as that of MSSE, while the processing time is as short as that of TBS, which means the proposed algorithm is well suited to real-time operation while maintaining a high correction effect.
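The correction pipeline described in the abstract — a fixed per-pixel gain from two-point calibration plus an offset interpolated from tables indexed by FPA temperature — can be sketched as below. This is a toy illustration, not the authors' code: the 4×4 array size, the temperature grid, and the random calibration values are invented:

```python
import numpy as np

# Hypothetical 4x4 detector for illustration; the paper's FPA is 384x288.
rng = np.random.default_rng(0)
gain = 1.0 + 0.1 * rng.standard_normal((4, 4))   # from two-point calibration

# Offset tables measured at a few FPA temperatures and stored in memory
table_temps = np.array([10.0, 20.0, 30.0])       # degrees C
offset_tables = rng.standard_normal((3, 4, 4))

def correct(raw, fpa_temp):
    """Apply per-pixel correction y = gain * raw + offset(T), where the
    offset is linearly interpolated between the two bracketing tables."""
    i = np.clip(np.searchsorted(table_temps, fpa_temp) - 1,
                0, len(table_temps) - 2)
    w = (fpa_temp - table_temps[i]) / (table_temps[i + 1] - table_temps[i])
    offset = (1 - w) * offset_tables[i] + w * offset_tables[i + 1]
    return gain * raw + offset
```

At a tabulated temperature the interpolation reduces to the stored table; between tabulated temperatures the offset is blended, which is what lets the correction track FPA temperature drift without a shutter.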

  20. Characterization of Vegetation using the UC Davis Remote Sensing Testbed

    NASA Astrophysics Data System (ADS)

    Falk, M.; Hart, Q. J.; Bowen, K. S.; Ustin, S. L.

    2006-12-01

Remote sensing provides information about the dynamics of the terrestrial biosphere with continuous spatial and temporal coverage on many different scales. We present the design and construction of a suite of instrument modules and network infrastructure with size, weight, and power constraints suitable for small-scale vehicles, anticipating vigorous growth in unmanned aerial vehicles (UAVs) and other mobile platforms. Our approach provides rapid deployment and low-cost acquisition of aerial imagery for applications requiring high spatial resolution and frequent revisits. The testbed supports a wide range of applications, encourages remote sensing solutions in new disciplines, and demonstrates the complete range of engineering knowledge required for the successful deployment of remote sensing instruments. The initial testbed is deployed on a Sig Kadet Senior remote-controlled plane. It includes an onboard computer with wireless radio, GPS, an inertial measurement unit, a 3-axis electronic compass, and digital cameras. The onboard camera is either an RGB digital camera or a modified digital camera with red and NIR channels. Cameras were calibrated using selective light sources, an integrating sphere, and a spectrometer, allowing for the computation of vegetation indices such as the NDVI. Field tests to date have investigated technical challenges in wireless communication bandwidth limits, automated image geolocation, and user interfaces, as well as imaging applications such as environmental landscape mapping focusing on Sudden Oak Death and invasive species detection, studies on the impact of bird colonies on tree canopies, and precision agriculture.
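The red/NIR camera mentioned above supports vegetation indices such as the NDVI, which is computed per pixel as (NIR − Red)/(NIR + Red) from calibrated reflectances; a minimal sketch:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from calibrated red/NIR bands."""
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red + 1e-12)   # epsilon avoids divide-by-zero

# Dense vegetation reflects strongly in NIR and absorbs red:
print(round(float(ndvi(0.05, 0.45)), 2))   # healthy canopy -> 0.8
print(round(float(ndvi(0.30, 0.32)), 2))   # bare soil -> 0.03, near zero
```

Applied to whole red and NIR image arrays, the same function yields an NDVI map for landscape classification.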

  1. Thermal regulation of tightly packed solid-state photodetectors in a 1 mm3 resolution clinical PET system

    PubMed Central

    Vandenbroucke, A.; Innes, D.; Lau, F. W. Y.; Hsu, D. F. C.; Reynolds, P. D.; Levin, Craig S.

    2015-01-01

Purpose: Silicon photodetectors are of significant interest for use in positron emission tomography (PET) systems due to their compact size, insensitivity to magnetic fields, and high quantum efficiency. However, one of their main disadvantages is that fluctuations in temperature cause strong shifts in the gain of the devices. PET system designs with high photodetector density suffer both increased thermal density and constrained options for thermally regulating the devices. This paper proposes a method of thermally regulating densely packed silicon photodetectors in the context of a 1 mm3 resolution, high-sensitivity PET camera dedicated to breast imaging. Methods: The PET camera under construction consists of 2304 units, each containing two 8 × 8 arrays of 1 mm3 LYSO crystals coupled to two position sensitive avalanche photodiodes (PSAPD). A subsection of the proposed camera with 512 PSAPDs has been constructed. The proposed thermal regulation design uses water-cooled heat sinks, thermoelectric elements, and thermistors to measure and regulate the temperature of the PSAPDs in a novel manner. Active cooling elements, placed at the edge of the detector stack due to limited access, are controlled based on collective leakage current and temperature measurements in order to keep all the PSAPDs at a consistent temperature. This thermal regulation design is characterized for the temperature profile across the camera and for the time required for cooling changes to propagate across the camera. These properties guide the implementation of a software-based, cascaded proportional-integral-derivative control loop that controls the current through the Peltier elements by monitoring thermistor temperature and leakage current. The stability of leakage current and temperature within the system using this control loop is tested over a period of 14 h. The energy resolution is then measured over a period of 8.66 h. 
Finally, the consistency of PSAPD gain between independent operations of the camera over 10 days is tested. Results: The PET camera maintains a temperature of 18.00 ± 0.05 °C over the course of 12 h while the ambient temperature varied by 0.61 °C, from 22.83 to 23.44 °C. The 511 keV photopeak energy resolution over a period of 8.66 h is measured to be 11.3% FWHM with a maximum photopeak fluctuation of 4 keV. Between measurements of PSAPD gain separated by at least 2 days, the maximum photopeak shift was 6 keV. Conclusions: The proposed thermal regulation scheme for tightly packed silicon photodetectors provides for stable operation of the constructed subsection of a PET camera over long durations of time. The energy resolution of the system is not degraded despite shifts in ambient temperature and photodetector heat generation. The thermal regulation scheme also provides a consistent operating environment between separate runs of the camera over different days. Inter-run consistency allows for reuse of system calibration parameters from study to study, reducing the time required to calibrate the system and hence to obtain a reconstructed image. PMID:25563270
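The cascaded PID loop is specific to the authors' hardware; as a hedged, single-loop sketch of the underlying control idea (the toy thermal plant, gains, ambient temperature, and sign conventions below are all invented), a discrete PID driving Peltier cooling current toward the 18 °C setpoint might look like:

```python
class PID:
    """Minimal discrete PID controller (simplified, single-loop sketch)."""
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, measurement):
        err = self.setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy first-order thermal plant: ambient pull-up plus Peltier cooling.
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=18.0, dt=1.0)
temp = 23.0                                       # start near ambient
for _ in range(200):
    current = max(0.0, -pid.update(temp))         # negative error -> cooling current
    temp += (23.0 - temp) * 0.05 - 0.1 * current  # invented plant dynamics
```

After a few hundred steps the toy plant settles at the setpoint; the real system cascades two such loops and additionally folds PSAPD leakage current into the measurement.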

  2. Thermal regulation of tightly packed solid-state photodetectors in a 1 mm3 resolution clinical PET system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freese, D. L.; Vandenbroucke, A.; Innes, D.

    2015-01-15

    Purpose: Silicon photodetectors are of significant interest for use in positron emission tomography (PET) systems due to their compact size, insensitivity to magnetic fields, and high quantum efficiency. However, one of their main disadvantages is that fluctuations in temperature cause strong shifts in the gain of the devices. PET system designs with high photodetector density suffer both increased thermal density and constrained options for thermally regulating the devices. This paper proposes a method of thermally regulating densely packed silicon photodetectors in the context of a 1 mm3 resolution, high-sensitivity PET camera dedicated to breast imaging. Methods: The PET camera under construction consists of 2304 units, each containing two 8 × 8 arrays of 1 mm3 LYSO crystals coupled to two position sensitive avalanche photodiodes (PSAPD). A subsection of the proposed camera with 512 PSAPDs has been constructed. The proposed thermal regulation design uses water-cooled heat sinks, thermoelectric elements, and thermistors to measure and regulate the temperature of the PSAPDs in a novel manner. Active cooling elements, placed at the edge of the detector stack due to limited access, are controlled based on collective leakage current and temperature measurements in order to keep all the PSAPDs at a consistent temperature. This thermal regulation design is characterized for the temperature profile across the camera and for the time required for cooling changes to propagate across the camera. These properties guide the implementation of a software-based, cascaded proportional-integral-derivative control loop that controls the current through the Peltier elements by monitoring thermistor temperature and leakage current. The stability of leakage current and temperature within the system using this control loop is tested over a period of 14 h. The energy resolution is then measured over a period of 8.66 h. Finally, the consistency of PSAPD gain between independent operations of the camera over 10 days is tested. Results: The PET camera maintains a temperature of 18.00 ± 0.05 °C over the course of 12 h while the ambient temperature varied 0.61 °C, from 22.83 to 23.44 °C. The 511 keV photopeak energy resolution over a period of 8.66 h is measured to be 11.3% FWHM with a maximum photopeak fluctuation of 4 keV. Between measurements of PSAPD gain separated by at least 2 days, the maximum photopeak shift was 6 keV. Conclusions: The proposed thermal regulation scheme for tightly packed silicon photodetectors provides for stable operation of the constructed subsection of a PET camera over long durations of time. The energy resolution of the system is not degraded despite shifts in ambient temperature and photodetector heat generation. The thermal regulation scheme also provides a consistent operating environment between separate runs of the camera over different days. Inter-run consistency allows for reuse of system calibration parameters from study to study, reducing the time required to calibrate the system and hence to obtain a reconstructed image.
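    The cascaded proportional-integral-derivative scheme described above can be sketched as two nested loops: an outer loop on thermistor temperature that sets a target for an inner loop on PSAPD leakage current, which in turn drives the Peltier elements. This is only a minimal illustration of the control structure, not the authors' implementation; all gains, setpoints, and sensor interfaces here are hypothetical.

    ```python
    # Minimal sketch of a cascaded PID thermal-control loop: an outer loop on
    # thermistor temperature sets the target for an inner loop on leakage
    # current, which drives the Peltier (thermoelectric) element current.
    # Gains, setpoints, and limits are hypothetical values.

    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = None

        def step(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * deriv

    # Outer loop: hold the thermistor at 18.00 C by adjusting the leakage-current target.
    outer = PID(kp=0.5, ki=0.05, kd=0.0, dt=1.0)
    # Inner loop: track the leakage-current target by adjusting Peltier drive current.
    inner = PID(kp=2.0, ki=0.2, kd=0.0, dt=1.0)

    def control_step(thermistor_temp_c, leakage_current_ua):
        """One control iteration; returns a clamped Peltier drive current."""
        leakage_target = outer.step(18.00, thermistor_temp_c)
        peltier_current = inner.step(leakage_target, leakage_current_ua)
        return max(-3.0, min(3.0, peltier_current))  # clamp to a safe drive range
    ```

    The cascade lets the fast inner loop reject leakage-current disturbances while the slower outer loop holds the temperature setpoint.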

  3. Fuzzy logic control for camera tracking system

    NASA Technical Reports Server (NTRS)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
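    The pan/tilt command generation described above can be illustrated with a tiny fuzzy rule base: fuzzify the target's image offset, fire a few rules, and defuzzify by weighted average. The membership shapes, rule outputs, and units below are hypothetical, not the NASA design.

    ```python
    # Sketch of fuzzy pan control from the target's horizontal offset in the
    # image. Three triangular membership functions fire three rules, and the
    # pan rate is the weighted average (centroid) of the rule outputs.

    def tri(x, a, b, c):
        """Triangular membership function rising from a, peaking at b, falling to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def pan_rate(offset):
        """Map a normalized image offset in [-1, 1] to a pan rate (deg/s)."""
        rules = [
            (tri(offset, -2.0, -1.0, 0.0), -5.0),  # target left     -> pan left
            (tri(offset, -0.5,  0.0, 0.5),  0.0),  # target centered -> hold
            (tri(offset,  0.0,  1.0, 2.0),  5.0),  # target right    -> pan right
        ]
        num = sum(w * out for w, out in rules)
        den = sum(w for w, _ in rules)
        return num / den if den > 0 else 0.0
    ```

    An identical rule base on the vertical offset would produce the tilt command.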

  4. Status of the JWST Science Instrument Payload

    NASA Technical Reports Server (NTRS)

    Greenhouse, Matt

    2016-01-01

    The James Webb Space Telescope (JWST) Integrated Science Instrument Module (ISIM) system consists of five sensors (4 science): Mid-Infrared Instrument (MIRI), Near Infrared Imager and Slitless Spectrograph (NIRISS), Fine Guidance Sensor (FGS), Near InfraRed Camera (NIRCam), and Near InfraRed Spectrograph (NIRSpec); and nine instrument support systems: Optical Metering Structure System, Electrical Harness System, Harness Radiator System, ISIM Electronics Compartment, ISIM Remote Services Unit, Cryogenic Thermal Control System, Command and Data Handling System, Flight Software System, and Operations Scripts System.

  5. STS-109 Crew Interviews - Altman

    NASA Technical Reports Server (NTRS)

    2002-01-01

    STS-109 crew Commander Scott D. Altman is seen during a prelaunch interview. He answers questions about his inspiration to become an astronaut and his career path. He gives details on the mission's goals and significance, which are all related to maintenance of the Hubble Space Telescope (HST). After the Columbia Orbiter's rendezvous with the HST, extravehicular activities (EVA) will be focused on several important tasks which include: (1) installing the Advanced Camera for Surveys; (2) installing a cooling system on NICMOS (Near Infrared Camera Multi-Object Spectrometer); (3) repairing the reaction wheel assembly; (4) installing additional solar arrays; (5) augmenting the power control unit; (6) working on the HST's gyros. The reaction wheel assembly task, a late addition to the mission, may necessitate the abandonment of one or more of the other tasks, such as the gyro work.

  6. 3-dimensional telepresence system for a robotic environment

    DOEpatents

    Anderson, Matthew O.; McKay, Mark D.

    2000-01-01

    A telepresence system includes a camera pair remotely controlled by a control module affixed to an operator. The camera pair provides for three dimensional viewing and the control module, affixed to the operator, affords hands-free operation of the camera pair. In one embodiment, the control module is affixed to the head of the operator and an initial position is established. A triangulating device is provided to track the head movement of the operator relative to the initial position. A processor module receives input from the triangulating device to determine where the operator has moved relative to the initial position and moves the camera pair in response thereto. The movement of the camera pair is predetermined by a software map having a plurality of operation zones. Each zone corresponds to unique camera movement parameters, such as speed of movement. Speed parameters include constant, increasing, or decreasing speed. Other parameters include panning, tilting, sliding, raising, or lowering of the cameras. Other user interface devices are provided to improve the three dimensional control capabilities of an operator in a local operating environment. Such other devices include a pair of visual display glasses, a microphone and a remote actuator. The pair of visual display glasses is provided to facilitate three dimensional viewing, hence depth perception. The microphone affords hands-free camera movement by utilizing voice commands. The actuator allows the operator to remotely control various robotic mechanisms in the remote operating environment.
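    The "software map of operation zones" idea above can be sketched as a piecewise mapping from head displacement to camera speed: a dead zone for small motions, a constant-speed inner zone, a proportional middle zone, and saturation. The zone boundaries and speeds below are hypothetical, not the patent's values.

    ```python
    # Sketch of an operation-zone map: head displacement from the initial
    # position selects a zone, and each zone carries its own pan-speed rule.
    # Boundaries (degrees) and speeds (deg/s) are illustrative only.

    def camera_pan_speed(head_angle_deg):
        """Return a pan speed (deg/s) for a given head displacement."""
        mag = abs(head_angle_deg)
        sign = 1 if head_angle_deg >= 0 else -1
        if mag < 2.0:          # dead zone: ignore small head tremor
            return 0.0
        if mag < 15.0:         # inner zone: constant slow pan
            return sign * 3.0
        if mag < 40.0:         # middle zone: speed grows with displacement
            return sign * (3.0 + 0.5 * (mag - 15.0))
        return sign * 15.5     # outer zone: saturate at the maximum speed
    ```

    The zone boundaries are chosen here so the speed is continuous across zone transitions.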

  7. Explosive Transient Camera (ETC) Program

    DTIC Science & Technology

    1991-10-01

    ...and transmits digital video and status information to the "downstairs" system. The clocking unit and regulator/driver board are the only CCD-dependent...

  8. Illumination box and camera system

    DOEpatents

    Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.; Klunder, Gregory L.

    2002-01-01

    A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.

  9. Development of a sensor coordinated kinematic model for neural network controller training

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1990-01-01

    A robotic benchmark problem useful for evaluating alternative neural network controllers is presented. Specifically, it derives two camera models and the kinematic equations of a multiple degree of freedom manipulator whose end effector is under observation. The mappings developed include forward and inverse translations from binocular images to 3-D target position and the inverse kinematics of mapping point positions into manipulator commands in joint space. Implementation is detailed for a three degree of freedom manipulator with one revolute joint at the base and two prismatic joints on the arms. The example is restricted to operate within a unit cube with arm links of 0.6 and 0.4 units respectively. The development is presented in the context of more complex simulations, and a logical path for extending the benchmark to higher degree of freedom manipulators is outlined.
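    The forward-kinematic part of a manipulator like the one above (one revolute base joint, two prismatic joints, links of 0.6 and 0.4 units) can be written in a few lines. The exact joint arrangement below is an assumption for illustration, not the benchmark's definition.

    ```python
    # Forward kinematics sketch for a 3-DOF arm: a revolute base rotation and
    # two prismatic extensions, with nominal link offsets of 0.6 and 0.4 units.
    # The joint arrangement is a plausible assumption, not the paper's model.
    import math

    def forward_kinematics(theta, d1, d2):
        """End-effector (x, y, z) for base rotation theta (rad), horizontal
        prismatic extension d1, and vertical prismatic extension d2."""
        r = 0.6 + d1                 # horizontal reach: link 1 plus extension
        z = 0.4 + d2                 # height: link 2 plus extension
        return (r * math.cos(theta), r * math.sin(theta), z)
    ```

    A neural controller for the benchmark would be trained to approximate the inverse of this map (and the camera projections) from image coordinates to joint commands.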

  10. A wide-angle camera module for disposable endoscopy

    NASA Astrophysics Data System (ADS)

    Shim, Dongha; Yeon, Jesun; Yi, Jason; Park, Jongwon; Park, Soo Nam; Lee, Nanhee

    2016-08-01

    A wide-angle miniaturized camera module for disposable endoscopes is demonstrated in this paper. A lens module with a 150° angle of view (AOV) is designed and manufactured. All-plastic injection-molded lenses and a commercial CMOS image sensor are employed to reduce the manufacturing cost. The image sensor and LED illumination unit are assembled with the lens module. The camera module does not include a camera processor, to further reduce its size and cost. The size of the camera module is 5.5 × 5.5 × 22.3 mm3. The diagonal field of view (FOV) of the camera module is measured to be 110°. A prototype of a disposable endoscope is implemented to perform pre-clinical animal testing. The esophagus of an adult beagle dog is observed. These results demonstrate the feasibility of a cost-effective and high-performance camera module for disposable endoscopy.
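    The relation between sensor size, focal length, and diagonal field of view that governs a module like this one is the standard pinhole formula; real wide-angle lenses deviate from it through distortion, which is one reason a designed AOV and a measured FOV can differ. The sensor and lens numbers below are hypothetical.

    ```python
    # Generic pinhole field-of-view formula: FOV = 2 * atan(d / (2 f)).
    # Sensor diagonal and focal length values are illustrative only; wide-angle
    # lens distortion makes real modules deviate from this ideal relation.
    import math

    def diagonal_fov_deg(sensor_diag_mm, focal_len_mm):
        """Ideal pinhole diagonal field of view, in degrees."""
        return 2 * math.degrees(math.atan(sensor_diag_mm / (2 * focal_len_mm)))

    # e.g. a 4.0 mm sensor diagonal with a 1.4 mm focal length gives about 110 degrees
    fov = diagonal_fov_deg(4.0, 1.4)
    ```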

  11. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects

    PubMed Central

    Lambers, Martin; Kolb, Andreas

    2017-01-01

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data. PMID:29271888

  12. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.

    PubMed

    Bulczak, David; Lambers, Martin; Kolb, Andreas

    2017-12-22

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data.
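    The AMCW measurement principle that such simulators model is commonly summarized by the four-phase ("four-bucket") reconstruction: the sensor correlates the returned light with the modulation signal at four phase offsets, and the phase shift recovered from those samples encodes the round-trip distance. The sketch below is the generic textbook relation, not the simulator's code; the modulation frequency is a hypothetical value.

    ```python
    # Standard AMCW ToF depth reconstruction from four correlation samples
    # taken at 0/90/180/270 degree offsets of the modulation signal.
    import math

    C = 299_792_458.0  # speed of light, m/s

    def depth_from_phases(a0, a90, a180, a270, f_mod=20e6):
        """Recover distance (m) from four phase-stepped correlation samples."""
        phi = math.atan2(a90 - a270, a0 - a180)   # phase shift of returned light
        if phi < 0:
            phi += 2 * math.pi                    # wrap into [0, 2*pi)
        return C * phi / (4 * math.pi * f_mod)    # factor 4*pi: light travels the path twice
    ```

    The same relation also exposes the wrap-around ambiguity: at 20 MHz the unambiguous range is C / (2 f_mod) ≈ 7.5 m.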

  13. Patterned Video Sensors For Low Vision

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1996-01-01

    Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns are proposed to compensate partly for some visual defects. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects like retinitis pigmentosa.

  14. Improved CPAS Photogrammetric Capabilities for Engineering Development Unit (EDU) Testing

    NASA Technical Reports Server (NTRS)

    Ray, Eric S.; Bretz, David R.

    2013-01-01

    This paper focuses on two key improvements to the photogrammetric analysis capabilities of the Capsule Parachute Assembly System (CPAS) for the Orion vehicle. The Engineering Development Unit (EDU) system deploys Drogue and Pilot parachutes via mortar, where an important metric is the muzzle velocity. This can be estimated using a high speed camera pointed along the mortar trajectory. The distance to the camera is computed from the apparent size of features of known dimension. This method was validated with a ground test and compares favorably with simulations. The second major photogrammetric product is measuring the geometry of the Main parachute cluster during steady-state descent using onboard cameras. This is challenging as the current test vehicles are suspended by a single-point attachment unlike earlier stable platforms suspended under a confluence fitting. The mathematical modeling of fly-out angles and projected areas has undergone significant revision. As the test program continues, several lessons were learned about optimizing the camera usage, installation, and settings to obtain the highest quality imagery possible.
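    The mortar muzzle-velocity method above rests on the pinhole relation between a feature's known physical size and its apparent size in the image: range Z = f · D / d, with velocity obtained by differencing ranges between frames. The sketch below is the generic geometry with hypothetical numbers, not the CPAS analysis code.

    ```python
    # Pinhole-camera range from the apparent size of a feature of known
    # dimension, and velocity from two range estimates. Focal length (in
    # pixels), feature size, and timings are illustrative values only.

    def distance_from_size(feature_size_m, apparent_size_px, focal_len_px):
        """Range estimate Z = f * D / d for a feature of known size D."""
        return focal_len_px * feature_size_m / apparent_size_px

    def velocity_from_frames(z1_m, z2_m, dt_s):
        """Velocity along the optical axis from two successive range estimates."""
        return (z2_m - z1_m) / dt_s

    # e.g. a 0.3 m feature seen at 120 px, then 80 px one 10 ms frame later,
    # through a lens with a 4000 px focal length:
    z1 = distance_from_size(0.3, 120, 4000)   # nearer range
    z2 = distance_from_size(0.3, 80, 4000)    # farther range
    v = velocity_from_frames(z1, z2, 0.01)    # receding velocity
    ```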

  15. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    NASA Astrophysics Data System (ADS)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  16. PC-based control unit for a head-mounted operating microscope for augmented-reality visualization in surgical navigation

    NASA Astrophysics Data System (ADS)

    Figl, Michael; Birkfellner, Wolfgang; Watzinger, Franz; Wanschitz, Felix; Hummel, Johann; Hanel, Rudolf A.; Ewers, Rolf; Bergmann, Helmar

    2002-05-01

    Two main concepts of Head Mounted Displays (HMD) for augmented reality (AR) visualization exist, the optical and the video see-through type. Several research groups have pursued both approaches for utilizing HMDs for computer aided surgery. While the hardware requirements for a video see-through HMD to achieve acceptable time delay and frame rate seem to be enormous, the clinical acceptance of such a device is doubtful from a practical point of view. Starting from previous work in displaying additional computer-generated graphics in operating microscopes, we have adapted a miniature head mounted operating microscope for AR by integrating two very small computer displays. To calibrate the projection parameters of this so-called Varioscope AR we have used Tsai's algorithm for camera calibration. Connection to a surgical navigation system was performed by defining an open interface to the control unit of the Varioscope AR. The control unit consists of a standard PC with a dual head graphics adapter to render and display the desired augmentation of the scene. We connected this control unit to a computer aided surgery (CAS) system by the TCP/IP interface. In this paper we present the control unit for the HMD and its software design. We tested two different optical tracking systems, the Flashpoint (Image Guided Technologies, Boulder, CO), which provided about 10 frames per second, and the Polaris (Northern Digital, Ontario, Canada), which provided at least 30 frames per second, both with a time delay of one frame.

  17. Space Shuttle Projects

    NASA Image and Video Library

    2002-03-05

    Astronaut James H. Newman, mission specialist, floats about in the Space Shuttle Columbia's cargo bay while working in tandem with astronaut Michael J. Massimino (out of frame), mission specialist, during the STS-109 mission's second day of extravehicular activity (EVA). Inside Columbia's cabin, astronaut Nancy J. Currie, mission specialist, controlled the Remote Manipulator System (RMS) to assist the two in their work on the Hubble Space Telescope (HST). The RMS was used to capture the telescope and secure it into Columbia's cargo bay. Part of the giant telescope's base, latched down in the payload bay, can be seen behind Newman. The Space Shuttle Columbia STS-109 mission lifted off March 1, 2002 with goals of repairing and upgrading the HST. The Marshall Space Flight Center in Huntsville, Alabama had responsibility for the design, development, and construction of the HST, which is the most powerful and sophisticated telescope ever built. STS-109 upgrades to the HST included: replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. Lasting 10 days, 22 hours, and 11 minutes, the STS-109 mission was the 108th flight overall in NASA's Space Shuttle Program.

  18. RM-10A robotic manipulator system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, J.R.; Coughlan, J.B.; Harvey, H.W.

    1988-01-01

    The REMOTE RM-10A is a man-replacement manipulator system that has been developed specifically for use in radioactive and other hazardous environments. It can be teleoperated, with man-in-the-loop, for unstructured tasks or programmed to perform routine tasks automatically, much like robots in the automated manufacturing industry. The RM-10A is a servomanipulator utilizing a closed-loop, microprocessor-based control system. The system consists of a slave assembly, master control station, and interconnecting cabling. The slave assembly is the part of the system that enters the hostile environment. It is man-like in size and configuration, with two identical arms attached to a torso structure. Each arm attaches to the torso using two captive screws and two guide pins. The guide pins position and stabilize an arm during removal and reinstallation and also align the two electrical connectors located in the arm support plate and torso. These features allow easy remote replacement of an arm, and commonality of the arms allows interchangeability. The water-resistant slave assembly is equipped with gaskets and O-ring seals in the torso and arm and camera assemblies. In addition, each slave arm's elbow, wrist, and tong are protected by replaceable polyurethane boots. An upper camera assembly, consisting of a color television (TV) camera, 6:1 zoom lens, and a pan/tilt unit, mounts to the torso to provide remote viewing capability.

  19. Electro-optical system for gunshot detection: analysis, concept, and performance

    NASA Astrophysics Data System (ADS)

    Kastek, M.; Dulski, R.; Madura, H.; Trzaskawka, P.; Bieszczad, G.; Sosnowski, T.

    2011-08-01

    The paper discusses technical possibilities to build an effective electro-optical sensor unit for sniper detection using infrared cameras. This unit, comprising thermal and daylight cameras, can operate as a standalone device, but its primary application is in a multi-sensor sniper and shot detection system. First, an analysis was presented of three distinct phases of sniper activity: before, during, and after the shot. On the basis of experimental data, the parameters defining the relevant sniper signatures were determined, which are essential in assessing the capability of an infrared camera to detect sniper activity. A sniper body and muzzle flash were analyzed as targets, and descriptions of the phenomena which make it possible to detect sniper activities in infrared spectra, as well as an analysis of the physical limitations, were performed. The analyzed infrared systems were simulated using NVTherm software. The calculations were performed for several cameras equipped with different lenses and detector types. The simulation of detection ranges was performed for selected scenarios of sniper detection tasks. After the analysis of the simulation results, the technical specifications required for an infrared sniper detection system to provide the assumed detection range were discussed. Finally, an infrared camera setup was proposed which can detect a sniper from a range of 1000 meters.

  20. Geocam Space: Enhancing Handheld Digital Camera Imagery from the International Space Station for Research and Applications

    NASA Technical Reports Server (NTRS)

    Stefanov, William L.; Lee, Yeon Jin; Dille, Michael

    2016-01-01

    Handheld astronaut photography of the Earth has been collected from the International Space Station (ISS) since 2000, making it the most temporally extensive remotely sensed dataset from this unique Low Earth orbital platform. Exclusive use of digital handheld cameras to perform Earth observations from the ISS began in 2004. Nadir viewing imagery is constrained by the inclined equatorial orbit of the ISS to between 51.6 degrees North and South latitude; however, numerous oblique images of land surfaces above these latitudes are included in the dataset. While unmodified commercial off-the-shelf digital cameras provide only visible-wavelength, three-band spectral information of limited quality, current cameras used with long (400+ mm) lenses can obtain high quality spatial information approaching 2 meters/ground pixel resolution. The dataset is freely available online at the Gateway to Astronaut Photography of Earth site (http://eol.jsc.nasa.gov), and now comprises over 2 million images. Despite this extensive image catalog, use of the data for scientific research, disaster response, commercial applications and visualizations is minimal in comparison to other data collected from free-flying satellite platforms such as Landsat, Worldview, etc. This is due primarily to the lack of fully-georeferenced data products - while current digital cameras typically have integrated GPS, this does not function in the Low Earth Orbit environment. The Earth Science and Remote Sensing (ESRS) Unit at NASA Johnson Space Center provides training in Earth Science topics to ISS crews, performs daily operations and Earth observation target delivery to crews through the Crew Earth Observations (CEO) Facility on board ISS, and also catalogs digital handheld imagery acquired from orbit by manually adding descriptive metadata and determining an image geographic centerpoint using visual feature matching with other georeferenced data, e.g. Landsat, Google Earth, etc. 
The lack of full geolocation information native to the data makes it difficult to integrate astronaut photographs with other georeferenced data to facilitate quantitative analysis such as urban land cover/land use classification, change detection, or geologic mapping. The manual determination of image centerpoints is both time and labor-intensive, leading to delays in releasing geolocated and cataloged data to the public, such as the timely use of data for disaster response. The GeoCam Space project was funded by the ISS Program in 2015 to develop an on-orbit hardware and ground-based software system for increasing the efficiency of geolocating astronaut photographs from the ISS (Fig. 1). The Intelligent Robotics Group at NASA Ames Research Center leads the development of both the ground and on-orbit systems in collaboration with the ESRS Unit. The hardware component consists of modified smartphone elements including cameras, central processing unit, wireless Ethernet, and an inertial measurement unit (gyroscopes/accelerometers/magnetometers) reconfigured into a compact unit that attaches to the base of the current Nikon D4 camera - and its replacement, the Nikon D5 - and connects using the standard Nikon peripheral connector or USB port. This provides secondary, side and downward facing cameras perpendicular to the primary camera pointing direction. The secondary cameras observe calibration targets with known internal X, Y, and Z position affixed to the interior of the ISS to determine the camera pose corresponding to each image frame. This information is recorded by the GeoCam Space unit and indexed for correlation to the camera time recorded for each image frame. Data - image, EXIF header, and camera pose information - is transmitted to the ground software system (GeoRef) using the established Ku-band USOS downlink system. Following integration on the ground, the camera pose information provides an initial geolocation estimate for the individual film frame. 
This new capability represents a significant advance in geolocation from the manual feature-matching approach for both nadir and off-nadir viewing imagery. With the initial geolocation estimate, full georeferencing of an image is completed using the rapid tie-pointing interface in GeoRef, and the resulting data is added to the Gateway to Astronaut Photography of Earth online database in both Geotiff and Keyhole Markup Language (kml) formats. The integration of the GeoRef software component of Geocam Space into the CEO image cataloging workflow is complete, and disaster response imagery acquired by the ISS crew is now fully georeferenced as a standard data product. The on-orbit hardware component (GeoSens) is in final prototyping phase, and is on-schedule for launch to the ISS in late 2016. Installation and routine use of the Geocam Space system for handheld digital camera photography from the ISS is expected to significantly improve the usefulness of this unique dataset for a variety of public- and private-sector applications.
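    A core step in the workflow above is correlating each image frame's camera time with the pose record logged by the GeoCam Space unit. A minimal version of that correlation is a nearest-timestamp lookup over a sorted pose log; the structure and field names below are hypothetical, not the GeoRef implementation.

    ```python
    # Sketch of indexing recorded camera-pose samples by image timestamp:
    # given a pose log sorted by time, return the sample nearest to the
    # camera time stamped into an image frame. Data layout is hypothetical.
    import bisect

    def nearest_pose(pose_log, image_time):
        """pose_log: list of (timestamp, pose) tuples sorted by timestamp."""
        times = [t for t, _ in pose_log]
        i = bisect.bisect_left(times, image_time)
        candidates = pose_log[max(0, i - 1):i + 1] or [pose_log[-1]]
        return min(candidates, key=lambda tp: abs(tp[0] - image_time))[1]

    # Illustrative log: three pose samples one second apart.
    log = [(0.0, "poseA"), (1.0, "poseB"), (2.0, "poseC")]
    ```

    The matched pose then seeds the tie-pointing step that completes the georeferencing.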

  1. Image quality prediction - An aid to the Viking lander imaging investigation on Mars

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Wall, S. D.

    1976-01-01

    Image quality criteria and image quality predictions are formulated for the multispectral panoramic cameras carried by the Viking Mars landers. Image quality predictions are based on expected camera performance, Mars surface radiance, and lighting and viewing geometry (fields of view, Mars lander shadows, solar day-night alternation), and are needed in diagnosis of camera performance, in arriving at a preflight imaging strategy, and in revising that strategy should the need arise. Landing considerations, camera control instructions, camera control logic, aspects of the imaging process (spectral response, spatial response, sensitivity), and likely problems are discussed. Major concerns include: degradation of camera response by isotope radiation, uncertainties in lighting and viewing geometry and in landing site local topography, contamination of the camera window by dust abrasion, and initial errors in assigning camera dynamic ranges (gains and offsets).

  2. A state observer for using a slow camera as a sensor for fast control applications

    NASA Astrophysics Data System (ADS)

    Gahleitner, Reinhard; Schagerl, Martin

    2013-03-01

    This contribution concerns a problem that often arises in vision-based control, when a camera is used as a sensor for fast control applications, or more precisely, when the sample rate of the control loop is higher than the frame rate of the camera. In control applications for mechanical axes, e.g. in robotics or automated production, a camera and some image processing can be used as a sensor to detect positions or angles. The sample time in these applications is typically in the range of a few milliseconds or less, and this demands the use of a camera with a high frame rate up to 1000 fps. The presented solution is a special state observer that can work with a slower and therefore cheaper camera to estimate the state variables at the higher sample rate of the control loop. To simplify the image processing for the determination of positions or angles and make it more robust, LED markers are applied to the plant. Simulation and experimental results show that the concept can be used even if the plant is unstable, like the inverted pendulum.
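    The multirate observer idea above, predicting at the fast control rate and correcting only when a camera frame arrives, can be sketched with a simple constant-velocity model. The plant model and gains below are hypothetical placeholders, not the paper's observer design.

    ```python
    # Minimal multirate observer: propagate a constant-velocity model at the
    # control rate (e.g. 1 kHz) and correct the estimate only when a camera
    # measurement arrives (e.g. 50 fps). Model and gains are illustrative.

    class SlowSensorObserver:
        def __init__(self, dt, gain_pos=0.4, gain_vel=2.0):
            self.dt = dt
            self.pos, self.vel = 0.0, 0.0
            self.gp, self.gv = gain_pos, gain_vel

        def predict(self):
            """Fast-rate step: propagate the constant-velocity model."""
            self.pos += self.vel * self.dt

        def correct(self, measured_pos):
            """Frame-rate step: blend the camera measurement into the estimate."""
            innovation = measured_pos - self.pos
            self.pos += self.gp * innovation
            self.vel += self.gv * innovation * self.dt

    # Control loop at 1 kHz with a 50 fps camera: predict every step,
    # correct on every 20th step when a new frame is available.
    obs = SlowSensorObserver(dt=0.001)
    for k in range(1000):
        obs.predict()
        if k % 20 == 0:
            obs.correct(measured_pos=0.5)  # placeholder constant measurement
    ```

    Between frames the controller consumes the predicted state at the full loop rate, which is what lets the cheaper, slower camera close a fast loop.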

  3. Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system.

    PubMed

    Dixon, W E; Dawson, D M; Zergeroglu, E; Behal, A

    2001-01-01

    This paper considers the problem of position/orientation tracking control of wheeled mobile robots via visual servoing in the presence of parametric uncertainty associated with the mechanical dynamics and the camera system. Specifically, we design an adaptive controller that compensates for uncertain camera and mechanical parameters and ensures global asymptotic position/orientation tracking. Simulation and experimental results are included to illustrate the performance of the control law.

  4. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    NASA Technical Reports Server (NTRS)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.

  5. 19 CFR 210.39 - In camera treatment of confidential information.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false In camera treatment of confidential information. 210.39 Section 210.39 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE ADJUDICATION AND ENFORCEMENT Prehearing Conferences and Hearings § 210...

  6. Remote presence proctoring by using a wireless remote-control videoconferencing system.

    PubMed

    Smith, C Daniel; Skandalakis, John E

    2005-06-01

    Remote presence in an operating room, allowing an experienced surgeon to proctor another surgeon, has been promised through robotics and telesurgery solutions. Although several such systems have been developed and commercialized, little progress has been made using telesurgery for anything more than live demonstrations of surgery. This pilot project explored the use of a new videoconferencing capability to determine if it offers advantages over existing systems. The videoconferencing system used is a PC-based system with a flat screen monitor and an attached camera that is mounted on a remotely controlled platform. This device is controlled from a remotely placed PC-based videoconferencing computer outfitted with a joystick. Using the public Internet and a wireless router at the client site, a surgeon at the control station can manipulate the videoconferencing system. Controls include navigating the unit around the room and moving the flat screen/camera portion like a head looking up/down and right/left. This system (InTouch Medical, Santa Barbara, CA) was used to proctor medical students during an anatomy class cadaver dissection. The ability of the remote surgeon to effectively monitor the students' dissections and direct their activities was assessed subjectively by students and surgeon. This device was very effective at providing a controllable and interactive presence in the anatomy lab. Students felt they were interacting with a person rather than a video screen and quickly forgot that the surgeon was not in the room. The ability to move the device within the environment, rather than just observe the environment from multiple fixed camera angles, gave the surgeon a similar feel of true presence. A remote-controlled videoconferencing system provides a more real experience for both student and proctor. Future development of such a device could greatly facilitate progress in implementation of remote presence proctoring.

  7. Keyboard before Head Tracking Depresses User Success in Remote Camera Control

    NASA Astrophysics Data System (ADS)

    Zhu, Dingyun; Gedeon, Tom; Taylor, Ken

    In remote mining, operators of complex machinery have more tasks or devices to control than they have hands. For example, operating a rock breaker requires two-handed joystick control to position and fire the jackhammer, leaving the camera control to either automatic control or requiring the operator to switch between controls. We modelled such a teleoperated setting by performing experiments using a simple physical game analogue, being a half-size table soccer game with two handles. The complex camera angles of the mining application were modelled by obscuring the direct view of the play area and the use of a Pan-Tilt-Zoom (PTZ) camera. The camera control was via either a keyboard or via head tracking using two different sets of head gestures called “head motion” and “head flicking” for turning camera motion on/off. Our results show that the head motion control was able to provide a comparable performance to using a keyboard, while head flicking was significantly worse. In addition, the sequence of use of the three control methods is highly significant. It appears that use of the keyboard first depresses successful use of the head tracking methods, with significantly better results when one of the head tracking methods was used first. Analysis of the qualitative survey data collected supports the finding that the worst (by performance) method was disliked by participants. Surprisingly, use of that worst method as the first control method significantly enhanced performance using the other two control methods.

  8. Astronaut Alan Bean flies the Astronaut Maneuvering Equipment

    NASA Image and Video Library

    1973-08-27

    SL3-107-1215 (27 Aug. 1973) --- Astronaut Alan L. Bean, Skylab 3 commander, flies the M509 Astronaut Maneuvering Equipment in the forward dome area of the Orbital Workshop (OWS) on the space station cluster in Earth orbit. One of his fellow crewmen took this photograph with a 35mm Nikon camera. Bean is strapped into the back mounted, hand-controlled Automatically Stabilized Maneuvering Unit (ASMU). The dome area is about 22 feet in diameter and 19 feet from top to bottom. Photo credit: NASA

  9. KSC-02pd1129

    NASA Image and Video Library

    2002-07-10

    KENNEDY SPACE CENTER, FLA. -- With the engines removed, the inside of Endeavour is exposed. At left center, Scott Minnick, with United Space Alliance, operates a fiber-optic camera inside the flow line. Other USA team members, right, watching the progress on a screen in front, are Gerry Kathka (with controls), Mike Fore and Peggy Ritchie. The inspection is the result of small cracks being discovered on the LH2 Main Propulsion System (MPS) flow liners in other orbiters. Endeavour is next scheduled to fly on mission STS-113.

  10. Compensation for positioning error of industrial robot for flexible vision measuring system

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    Positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system that consists of a universal industrial robot and a visual sensor. Present compensation methods for positioning error based on the kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for the positioning error of the robot based on vision measuring techniques is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor, with two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show the RMS of spatial positioning is 3.422 mm with a single camera and 0.031 mm with dual cameras. In conclusion, the algorithm of the single-camera method needs to be improved for higher accuracy, while the accuracy of the dual-camera method is suitable for the application.
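    Both approaches above reduce to recovering a rigid transformation between coordinate frames from matched control points. A minimal sketch of that step using the standard Kabsch/Procrustes algorithm (a common textbook technique, not necessarily the authors' exact implementation) is:

    ```python
    import numpy as np

    def rigid_transform(P, Q):
        """Least-squares rotation R and translation t such that Q ~ R @ P + t.
        P, Q: 3xN arrays of matched control-point coordinates in two frames."""
        cp = P.mean(axis=1, keepdims=True)
        cq = Q.mean(axis=1, keepdims=True)
        H = (P - cp) @ (Q - cq).T                 # cross-covariance of centred sets
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cq - R @ cp
        return R, t

    # Synthetic check: recover a known pose from noiseless control points.
    rng = np.random.default_rng(0)
    P = rng.standard_normal((3, 6))
    a = 0.3
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([[0.5], [-1.0], [2.0]])
    Q = R_true @ P + t_true
    R, t = rigid_transform(P, Q)
    print(np.allclose(R, R_true), np.allclose(t, t_true))
    ```

    With noisy measurements the same least-squares fit averages out per-point error, which is one reason the dual-camera variant, observing several control points at once, reaches sub-millimetre RMS.
    
    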

  11. Simultaneous tracking and regulation visual servoing of wheeled mobile robots with uncalibrated extrinsic parameters

    NASA Astrophysics Data System (ADS)

    Lu, Qun; Yu, Li; Zhang, Dan; Zhang, Xuebo

    2018-01-01

    This paper presents a global adaptive controller that simultaneously solves tracking and regulation for wheeled mobile robots with unknown depth and uncalibrated camera-to-robot extrinsic parameters. The rotational angle and the scaled translation between the current camera frame and the reference camera frame, as well as those between the desired camera frame and the reference camera frame, can be calculated in real time by using pose estimation techniques. A transformed system is first obtained, for which an adaptive controller is then designed to accomplish both tracking and regulation tasks; the controller synthesis is based on Lyapunov's direct method. Finally, the effectiveness of the proposed method is illustrated by a simulation study.

  12. Configuration of electro-optic fire source detection system

    NASA Astrophysics Data System (ADS)

    Fabian, Ram Z.; Steiner, Zeev; Hofman, Nir

    2007-04-01

    The recent fighting activities in various parts of the world have highlighted the need for accurate fire source detection on one hand and fast "sensor to shooter cycle" capabilities on the other. Both needs can be met by the SPOTLITE system, which dramatically enhances the capability to rapidly engage a hostile fire source with a minimum of casualties to friendly forces and to innocent bystanders. The modular system design makes it possible to meet each customer's specific requirements and provides excellent future growth and upgrade potential. The design and build of a fire source detection system is governed by sets of requirements issued by the operators. These can be translated into the following design criteria: I) Long-range, fast and accurate fire source detection capability. II) Detection and classification capability for different threats. III) Threat investigation capability. IV) Fire source data distribution capability (location, direction, video image, voice). V) Man portability. In order to meet these design criteria, an optimized concept was presented and exercised for the SPOTLITE system. Three major modular components were defined: I) Electro-Optical Unit - including FLIR camera, CCD camera, laser range finder and marker. II) Electronic Unit - including the system computer and electronics. III) Controller Station Unit - including the HMI of the system. This article discusses the definition and optimization processes for the system's components, and also shows how the SPOTLITE designers successfully managed to introduce excellent solutions for other system parameters.

  13. Albumin testing in urine using a smart-phone

    PubMed Central

    Coskun, Ahmet F.; Nagi, Richie; Sadeghi, Kayvon; Phillips, Stephen; Ozcan, Aydogan

    2013-01-01

    We demonstrate a digital sensing platform, termed Albumin Tester, running on a smart-phone that images and automatically analyses fluorescent assays confined within disposable test tubes for sensitive and specific detection of albumin in urine. This light-weight and compact Albumin Tester attachment, weighing approximately 148 grams, is mechanically installed on the existing camera unit of a smart-phone, where test and control tubes are inserted from the side and are excited by a battery-powered laser diode. This excitation beam, after probing the sample of interest located within the test tube, interacts with the control tube, and the resulting fluorescent emission is collected perpendicular to the direction of the excitation, where the cellphone camera captures the images of the fluorescent tubes through the use of an external plastic lens that is inserted between the sample and the camera lens. The acquired fluorescent images of the sample and control tubes are digitally processed within one second through an Android application running on the same cellphone for quantification of albumin concentration in the urine specimen of interest. Using a simple sample preparation approach which takes ~ 5 minutes per test (including the incubation time), we experimentally confirmed the detection limit of our sensing platform as 5–10 μg/mL (which is more than 3 times lower than the clinically accepted normal range) in buffer as well as urine samples. This automated albumin testing tool running on a smart-phone could be useful for early diagnosis of kidney disease or for monitoring of chronic patients, especially those suffering from diabetes, hypertension, and/or cardiovascular diseases. PMID:23995895
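    The quantification step described above (normalise the sample tube's fluorescence by the control tube and map the result through a calibration curve) can be sketched as follows. The calibration pairs and image values are made up for illustration; the paper does not publish this routine:

    ```python
    import numpy as np

    def albumin_concentration(sample_img, control_img, calib):
        """Hypothetical quantification: mean fluorescence of the sample tube,
        normalised by the control tube, read off a monotonic calibration curve.
        calib: list of (intensity_ratio, ug_per_ml) pairs."""
        ratio = sample_img.mean() / control_img.mean()
        ratios, concs = zip(*calib)
        return float(np.interp(ratio, ratios, concs))

    # Made-up calibration curve and synthetic 8x8 tube images:
    calib = [(1.0, 0.0), (1.5, 10.0), (2.0, 25.0), (3.0, 50.0)]
    sample = np.full((8, 8), 180.0)    # mean fluorescence of the test tube
    control = np.full((8, 8), 120.0)   # mean fluorescence of the control tube
    print(albumin_concentration(sample, control, calib))  # 10.0
    ```

    Normalising by the control tube in the same frame is what makes the readout robust to variations in laser power and exposure between phones.
    
    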

  14. [Environmental Education Units.] Photography for Kids. Vacant Lot Studies. Contour Mapping.

    ERIC Educational Resources Information Center

    Minneapolis Independent School District 275, Minn.

    Techniques suitable for use with elementary school students when studying field environment are described in these four booklets. Techniques for photography (construction of simple cameras, printing on blueprint and photographic paper, use of simple commercial cameras, development of exposed film); for measuring microclimatic factors (temperature,…

  15. Space Shuttle Projects

    NASA Image and Video Library

    2002-03-07

    STS-109 Astronaut Michael J. Massimino, mission specialist, perched on the Shuttle's robotic arm, is preparing to install the Electronic Support Module (ESM) in the aft shroud of the Hubble Space Telescope (HST), with the assistance of astronaut James H. Newman (out of frame). The module will support a new experimental cooling system to be installed during the next day's fifth and final space walk of the mission. That cooling system is designed to bring back to life the telescope's Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. The Space Shuttle Columbia STS-109 mission lifted off March 1, 2002 with goals of repairing and upgrading the Hubble Space Telescope (HST). The Marshall Space Flight Center in Huntsville, Alabama had the responsibility for the design, development, and construction of the HST, which is the most powerful and sophisticated telescope ever built. In addition to the installation of the experimental cooling system for NICMOS, STS-109 upgrades to the HST included replacement of the solar array panels, replacement of the power control unit (PCU), and replacement of the Faint Object Camera (FOC) with a new Advanced Camera for Surveys (ACS). Lasting 10 days, 22 hours, and 11 minutes, the STS-109 mission was the 108th flight overall in NASA's Space Shuttle Program.

  16. Eye gaze tracking for endoscopic camera positioning: an application of a hardware/software interface developed to automate Aesop.

    PubMed

    Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K

    2008-01-01

    A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
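    The gaze-to-camera loop described above (move the camera so the user's gaze point stays at the centre of the video monitor) can be sketched as a simple proportional controller with a deadband. The function, gains, and sign conventions below are illustrative assumptions, not the actual Aesop interface:

    ```python
    def gaze_to_camera_command(gaze_px, frame_size, deadband=0.1, gain=0.5):
        """Map a gaze point (pixels, origin top-left) to normalised pan/tilt
        velocities that recentre the gaze region; the deadband keeps the camera
        from chasing small fixations near the centre."""
        w, h = frame_size
        dx = (gaze_px[0] - w / 2) / (w / 2)   # -1 .. 1, positive = gaze right of centre
        dy = (gaze_px[1] - h / 2) / (h / 2)   # -1 .. 1, positive = gaze below centre
        pan = gain * dx if abs(dx) > deadband else 0.0
        tilt = gain * dy if abs(dy) > deadband else 0.0
        return pan, tilt

    # Gaze to the right of centre on a 640x480 monitor -> pan right, no tilt:
    print(gaze_to_camera_command((480, 240), (640, 480)))  # (0.25, 0.0)
    # Gaze at the centre -> no motion:
    print(gaze_to_camera_command((320, 240), (640, 480)))  # (0.0, 0.0)
    ```

    Running such a loop on each gaze sample yields hands-free camera positioning that works around the surgeon rather than interrupting the procedure.
    
    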

  17. The effect of infection-control barriers on the light intensity of light-cure units and depth of cure of composite.

    PubMed

    Hodson, Nicholas A; Dunne, Stephen M; Pankhurst, Caroline L

    2005-04-01

    Dental curing lights are vulnerable to contamination with oral fluids during routine intra-oral use. This controlled study aimed to evaluate whether or not disposable transparent barriers placed over the light-guide tip would affect light output intensity or the subsequent depth of cure of a composite restoration. The impact on light intensity emitted from high-, medium- and low-output light-cure units in the presence of two commercially available disposable infection-control barriers was evaluated against a no-barrier control. Power density measurements from the three intensity light-cure units were recorded with a radiometer, then converted to a digital image using an intra-oral camera and values determined using a commercial computer program. For each curing unit, the measurements were repeated on ten separate occasions with each barrier and the control. Depth of cure was evaluated using a scrape test in a natural tooth model. At each level of light output, the two disposable barriers produced a significant reduction in the mean power density readings compared to the no-barrier control (P<0.005). The cure sleeve inhibited light output to a greater extent than either the cling film or the control (P<0.005). Only composite restorations light-activated by the high level unit demonstrated a small but significant decrease in the depth of cure compared to the control (P<0.05). Placing disposable barriers over the light-guide tip reduced the light intensity from all three curing lights. There was no impact on depth of cure except for the high-output light, where a small decrease in cure depth was noted but this was not considered clinically significant. Disposable barriers can be recommended for use with light-cure lights.

  18. Accuracy of Wearable Cameras to Track Social Interactions in Stroke Survivors.

    PubMed

    Dhand, Amar; Dalton, Alexandra E; Luke, Douglas A; Gage, Brian F; Lee, Jin-Moo

    2016-12-01

    Social isolation after a stroke is related to poor outcomes. However, a full study of social networks on stroke outcomes is limited by the current metrics available. Typical measures of social networks rely on self-report, which is vulnerable to response bias and measurement error. We aimed to test the accuracy of an objective measure-wearable cameras-to capture face-to-face social interactions in stroke survivors. If accurate and usable in real-world settings, this technology would allow improved examination of social factors on stroke outcomes. In this prospective study, 10 stroke survivors each wore 2 wearable cameras: Autographer (OMG Life Limited, Oxford, United Kingdom) and Narrative Clip (Narrative, Linköping, Sweden). Each camera automatically took a picture every 20-30 seconds. Patients mingled with healthy controls for 5 minutes of 1-on-1 interactions followed by 5 minutes of no interaction for 2 hours. After the event, 2 blinded judges assessed whether photograph sequences identified interactions or noninteractions. Diagnostic accuracy statistics were calculated. A total of 8776 photographs were taken and adjudicated. In distinguishing interactions, the Autographer's sensitivity was 1.00 and specificity was .98. The Narrative Clip's sensitivity was .58 and specificity was 1.00. The receiver operating characteristic curves of the 2 devices were statistically different (Z = 8.26, P < .001). Wearable cameras can accurately detect social interactions of stroke survivors. Likely because of its large field of view, the Autographer was more sensitive than the Narrative Clip for this purpose. Copyright © 2016 National Stroke Association. Published by Elsevier Inc. All rights reserved.
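    The reported accuracy figures come from standard diagnostic statistics over the adjudicated photograph sequences. As a worked example, sensitivity and specificity are computed from a confusion matrix like this (the raw cell counts below are hypothetical; the paper reports only the resulting rates):

    ```python
    def diagnostic_accuracy(tp, fp, tn, fn):
        """Sensitivity (true positive rate) and specificity (true negative rate)
        from adjudicated counts of interactions vs. noninteractions."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return sensitivity, specificity

    # Illustrative counts only, chosen to reproduce the Narrative Clip's rates:
    sens, spec = diagnostic_accuracy(tp=58, fp=0, tn=100, fn=42)
    print(round(sens, 2), round(spec, 2))  # 0.58 1.0
    ```

    A sensitivity of 0.58 with specificity 1.00 means the Narrative Clip missed many true interactions but never flagged a noninteraction, consistent with its narrower field of view.
    
    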

  19. STS-109 Crew Interviews: Michael J. Massimino

    NASA Technical Reports Server (NTRS)

    2002-01-01

    STS-109 Mission Specialist Michael J. Massimino is seen during a prelaunch interview. He answers questions about his inspiration to become an astronaut, his career path, and his most memorable experiences. He gives details on the mission's goals and objectives, which focus on the refurbishing of the Hubble Space Telescope, and his role in the mission. He explains the plans for the rendezvous of the Columbia Orbiter with the Hubble Space Telescope. He provides details and timelines for each of the planned Extravehicular Activities (EVAs), which include replacing the solar arrays, changing the Power Control Unit, installing the Advanced Camera for Surveys (ACS), and installing a new Cryocooler for the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). He also describes the break-out plan in place for these spacewalks. The interview ends with Massimino explaining the details of a late addition to the mission's tasks, which is to replace a reaction wheel on the Hubble Space Telescope.

  20. STS-109 Crew Interviews - Currie

    NASA Technical Reports Server (NTRS)

    2002-01-01

    STS-109 Mission Specialist 2 Nancy Jane Currie is seen during a prelaunch interview. She answers questions about her inspiration to become an astronaut and her career path. She gives details on the Columbia Orbiter mission which has as its main tasks the maintenance and augmentation of the Hubble Space Telescope (HST). While she will do many things during the mission, the most important will be her role as the primary operator of the robotic arm, which is responsible for grappling the HST, bringing it to the Orbiter bay, and providing support for the astronauts during their EVAs (Extravehicular Activities). Additionally, the robotic arm will be responsible for transferring new and replacement equipment from the Orbiter to the HST. This equipment includes: two solar arrays, a Power Control Unit (PCU), the Advanced Camera for Surveys, and a replacement cooling system for NICMOS (Near Infrared Camera and Multi-Object Spectrometer).

  1. Ingestible wireless capsules for enhanced diagnostic inspection of gastrointestinal tract

    NASA Astrophysics Data System (ADS)

    Rasouli, Mahdi; Kencana, Andy Prima; Huynh, Van An; Ting, Eng Kiat; Lai, Joshua Chong Yue; Wong, Kai Juan; Tan, Su Lim; Phee, Soo Jay

    2011-03-01

    Wireless capsule endoscopy has become a common procedure for diagnostic inspection of the gastrointestinal tract. This method offers a less-invasive alternative to traditional endoscopy by eliminating its uncomfortable procedures. Moreover, it provides the opportunity to explore inaccessible areas of the small intestine. Current capsule endoscopes, however, move by peristalsis and are not capable of detailed and on-demand inspection of desired locations. Here, we propose and develop two wireless endoscopes with maneuverable vision systems to enhance diagnosis of gastrointestinal disorders. The vision systems in these capsules are equipped with mechanical actuators to adjust the position of the camera. This may help to cover larger areas of the digestive tract and investigate desired locations. The preliminary experimental results showed that the developed platform could successfully communicate with the external control unit via the human body and adjust the position of the camera to a limited degree.

  2. Lessons Learned from the Wide Field Camera 3 TV1 Test Campaign and Correlation Effort

    NASA Technical Reports Server (NTRS)

    Peabody, Hume; Stavley, Richard; Bast, William

    2007-01-01

    In January 2004, shortly after the Columbia accident, future servicing missions to the Hubble Space Telescope (HST) were cancelled. In response to this, further work on the Wide Field Camera 3 instrument was ceased. Given the maturity level of the design, a characterization thermal test (TV1) was completed in case the mission was re-instated or an alternate mission found on which to fly the instrument. This thermal test yielded some valuable lessons learned with respect to testing configurations and modeling/correlation practices, including: 1. Ensure that the thermal design can be tested; 2. Ensure that the model has sufficient detail for accurate predictions; 3. Ensure that the power associated with all active control devices is predicted; 4. Avoid unit changes for existing models. This paper documents the difficulties presented when these recommendations were not followed.

  3. KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA) (above) threads a camera under the tiles of the orbiter Endeavour, Peggy Ritchie, USA, (behind the stand) and NASA’s Richard Parker (seated) watch the images on a monitor to inspect for corrosion.

    NASA Image and Video Library

    2003-09-04

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA) (above) threads a camera under the tiles of the orbiter Endeavour, Peggy Ritchie, USA, (behind the stand) and NASA’s Richard Parker (seated) watch the images on a monitor to inspect for corrosion.

  4. KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA), (above) threads a camera under the tiles of the orbiter Endeavour, NASA’s Richard Parker (below left) and Peggy Ritchie, with USA, (at right) watch the images on a monitor to inspect for corrosion.

    NASA Image and Video Library

    2003-09-04

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA), (above) threads a camera under the tiles of the orbiter Endeavour, NASA’s Richard Parker (below left) and Peggy Ritchie, with USA, (at right) watch the images on a monitor to inspect for corrosion.

  5. KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA), (above) threads a camera under the tiles of the orbiter Endeavour, Peggy Ritchie, with USA, (behind the stand) and NASA’s Richard Parker watch the images on a monitor to inspect for corrosion.

    NASA Image and Video Library

    2003-09-04

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA), (above) threads a camera under the tiles of the orbiter Endeavour, Peggy Ritchie, with USA, (behind the stand) and NASA’s Richard Parker watch the images on a monitor to inspect for corrosion.

  6. People detection method using graphics processing units for a mobile robot with an omnidirectional camera

    NASA Astrophysics Data System (ADS)

    Kang, Sungil; Roh, Annah; Nam, Bodam; Hong, Hyunki

    2011-12-01

    This paper presents a novel vision system for people detection using an omnidirectional camera mounted on a mobile robot. In order to determine regions of interest (ROI), we compute a dense optical flow map using graphics processing units, which enable us to examine compliance with the ego-motion of the robot in a dynamic environment. Shape-based classification algorithms are employed to sort ROIs into human beings and nonhumans. The experimental results show that the proposed system detects people more precisely than previous methods.

  7. Layers and Dark Dunes

    NASA Image and Video Library

    2015-04-08

    The target of this observation, as seen by NASA's Mars Reconnaissance Orbiter, is a circular depression in a dark-toned unit associated with a field of cones to the northeast. At the scale of a Context Camera image, the depression appears to expose layers, especially on its sides or walls, which are overlain by dark sands presumably associated with the dark-toned unit. HiRISE resolution, which is far higher than that of the Context Camera with its larger footprint, can help identify possible layers. http://photojournal.jpl.nasa.gov/catalog/PIA19358

  8. Two bright fireballs over Great Britain

    NASA Astrophysics Data System (ADS)

    Koukal, Jakub; Káčerek, Richard

    2018-02-01

    On November 24, 2017 shortly before midnight and on November 25, 2017 shortly before sunrise, two very bright fireballs lit up the sky over the United Kingdom. The UKMON (United Kingdom Meteor Observation Network) cameras and onboard cameras in automobiles recorded their flight. The fireballs' paths in the Earth's atmosphere were calculated, as well as the orbits of the bodies in the Solar System. The flight of both bodies, whose absolute magnitude approached the brightness of the full Moon, was also observed by numerous chance observers from the public in Great Britain, Ireland and France.

  9. Concept of electro-optical sensor module for sniper detection system

    NASA Astrophysics Data System (ADS)

    Trzaskawka, Piotr; Dulski, Rafal; Kastek, Mariusz

    2010-10-01

    The paper presents an initial concept of an electro-optical sensor unit for sniper detection purposes. This unit, comprising thermal and daylight cameras, can operate as a standalone device, but its primary application is a multi-sensor sniper and shot detection system. As part of a larger system it should contribute to greater overall system efficiency and a lower false alarm rate thanks to data and sensor fusion techniques. Additionally, it is expected to provide some pre-shot detection capabilities. Acoustic (or radar) systems used for shot detection generally offer only "after-the-shot" information and cannot prevent an enemy attack, which in the case of a skilled sniper opponent usually means trouble. The passive imaging sensors presented in this paper, together with active systems detecting pointed optics, are capable of detecting specific shooter signatures or at least the presence of suspicious objects in the vicinity. The proposed sensor unit uses a thermal camera as the primary sniper and shot detection tool. The basic camera parameters such as focal plane array size and type, focal length and aperture were chosen on the basis of the assumed tactical characteristics of the system (mainly detection range) and the current technology level. In order to provide a cost-effective solution, commercially available daylight camera modules and infrared focal plane arrays were tested, including fast cooled infrared array modules capable of a 1000 fps image acquisition rate. The daylight camera operates as a support, providing a corresponding visual image that is easier for a human operator to comprehend. The initial assumptions concerning sensor operation were verified during laboratory and field tests, and some example shot recording sequences are presented.

  10. Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.

    2013-01-01

    This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies an orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie point data is required.

  11. Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing

    NASA Astrophysics Data System (ADS)

    Ou, Meiying; Li, Shihua; Wang, Chaoli

    2013-12-01

    This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling, and the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interactions. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and by using the finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. Rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.

  12. Development of a camera casing suited for cryogenic and vacuum applications

    NASA Astrophysics Data System (ADS)

    Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.

    2013-12-01

    We report on the design, construction, and operation of a PID temperature-controlled and vacuum-tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components is discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera provides a live view inside cryogenic set-ups and allows video to be recorded.
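
    The PID temperature control mentioned above can be illustrated with a minimal discrete-time sketch. This is not the authors' implementation; the gains, the heater clamp, and the toy first-order thermal model are illustrative assumptions.

```python
class PID:
    """Textbook discrete PID controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Toy first-order plant: heater power warms the casing, the cold bath cools it.
pid = PID(kp=8.0, ki=0.5, kd=1.0, dt=0.1)
temp, bath = 15.0, -196.0   # degrees C; bath at liquid-nitrogen temperature
for _ in range(2000):       # simulate 200 s
    power = max(0.0, min(50.0, pid.update(20.0, temp)))  # heater clamped to 0-50 W
    temp += (0.02 * power + 0.001 * (bath - temp)) * pid.dt

print(round(temp, 1))  # holds the 20 degree C setpoint
```

The integral term is what rejects the constant heat leak to the bath; a pure P controller would settle below the setpoint.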

  13. Comparison of the temperature accuracy between smart phone based and high-end thermal cameras using a temperature gradient phantom

    NASA Astrophysics Data System (ADS)

    Klaessens, John H.; van der Veen, Albert; Verdaasdonk, Rudolf M.

    2017-03-01

    Recently, low-cost smartphone-based thermal cameras have been considered for use in a clinical setting for monitoring physiological temperature responses such as body temperature change, local inflammation, perfusion changes, or (burn) wound healing. These thermal cameras contain uncooled micro-bolometers with an internal calibration check and have a temperature resolution of 0.1 degree. For clinical applications a fast quality measurement before use is required (absolute temperature check), and quality control (stability, repeatability, absolute temperature, absolute temperature differences) should be performed regularly. Therefore, a calibrated temperature phantom has been developed based on thermistor heating at both ends of a black-coated metal strip to create a controllable temperature gradient from room temperature (26 °C) up to 100 °C. The absolute temperatures on the strip are determined with five software-controlled PT-1000 sensors using lookup tables. In this study three FLIR ONE cameras and one high-end camera were checked with this temperature phantom. The results show relatively good agreement between both the low-cost and high-end cameras and the phantom temperature gradient, with temperature differences of 1 degree up to 6 degrees between the cameras and the phantom. The measurements were repeated to assess absolute temperature and temperature stability over the sensor area. Both low-cost and high-end thermal cameras measured relative temperature changes with high accuracy and absolute temperatures with constant deviations. Low-cost smartphone-based thermal cameras can be a good alternative to high-end thermal cameras for routine clinical measurements, appropriate to the research question, provided regular calibration checks are performed for quality control.
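
    The lookup-table temperature conversion used for the phantom's PT-1000 sensors can be sketched as simple linear interpolation. The table values follow the standard platinum RTD curve (IEC 60751) scaled to PT-1000; the function name and table granularity are illustrative.

```python
# (temperature degC, resistance ohm) pairs for a PT-1000 element (IEC 60751 values)
LOOKUP = [(0, 1000.0), (10, 1039.0), (20, 1077.9), (30, 1116.7),
          (40, 1155.4), (50, 1194.0), (100, 1385.1)]

def pt1000_temperature(resistance):
    """Linearly interpolate temperature between the bracketing table entries."""
    for (t_lo, r_lo), (t_hi, r_hi) in zip(LOOKUP, LOOKUP[1:]):
        if r_lo <= resistance <= r_hi:
            frac = (resistance - r_lo) / (r_hi - r_lo)
            return t_lo + frac * (t_hi - t_lo)
    raise ValueError("resistance outside table range")

print(pt1000_temperature(1077.9))            # 20.0
print(round(pt1000_temperature(1097.3), 2))  # 25.0, midway between 20 and 30 degC
```

A finer table (or the Callendar-Van Dusen polynomial) would reduce interpolation error over the 26-100 °C span of the phantom.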

  14. Space Shuttle Projects

    NASA Image and Video Library

    2002-03-07

    STS-109 Astronaut Michael J. Massimino, mission specialist, perched on the Shuttle's robotic arm, is working at the stowage area for the Hubble Space Telescope's port-side solar array. Working in tandem with James H. Newman, Massimino removed the old port solar array and stored it in Columbia's payload bay for return to Earth. The two went on to install a third-generation solar array and its associated electrical components. Two crewmates had accomplished the same feat with the starboard array on the previous day. In addition to the replacement of the solar arrays, the STS-109 crew also installed the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), replaced the power control unit (PCU), and replaced the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS). The 108th flight overall in NASA's Space Shuttle Program, the Space Shuttle Columbia STS-109 mission lifted off on March 1, 2002 and lasted 10 days, 22 hours, and 11 minutes. Five space walks were conducted to complete the HST upgrades. The Marshall Space Flight Center in Huntsville, Alabama had the responsibility for the design, development, and construction of the HST, which is the most powerful and sophisticated telescope ever built.

  15. Space Shuttle Projects

    NASA Image and Video Library

    2002-03-03

    The Hubble Space Telescope (HST), with its normal routine temporarily interrupted, is about to be captured by the Space Shuttle Columbia prior to a week of servicing and upgrading by the STS-109 crew. The telescope was captured by the shuttle's Remote Manipulator System (RMS) robotic arm and secured on a work stand in Columbia's payload bay, where 4 of the 7-member crew performed 5 space walks completing system upgrades to the HST. Included in those upgrades were: the replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. The Marshall Space Flight Center had the responsibility for the design, development, and construction of the HST, which is the most complex and sensitive optical telescope ever made, to study the cosmos from a low-Earth orbit. Launched March 1, 2002, the STS-109 HST servicing mission lasted 10 days, 22 hours, and 11 minutes. It was the 108th flight overall in NASA's Space Shuttle Program.

  16. Space Shuttle Projects

    NASA Image and Video Library

    2002-03-05

    STS-109 Astronauts Michael J. Massimino and James H. Newman were making their second extravehicular activity (EVA) of their mission when astronaut Massimino, mission specialist, peered into Columbia's crew cabin during a brief break from work on the Hubble Space Telescope (HST). The HST is latched down just a few feet behind him in Columbia's cargo bay. The Space Shuttle Columbia STS-109 mission lifted off March 1, 2002 with goals of repairing and upgrading the Hubble Space Telescope (HST). STS-109 upgrades to the HST included: replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. The Marshall Space Flight Center in Huntsville, Alabama had the responsibility for the design, development, and construction of the HST, which is the most powerful and sophisticated telescope ever built. Lasting 10 days, 22 hours, and 11 minutes, the STS-109 mission was the 108th flight overall in NASA's Space Shuttle Program.

  17. STS-109 Astronaut Michael J. Massimino Peers Into Window of Shuttle During EVA

    NASA Technical Reports Server (NTRS)

    2002-01-01

    STS-109 Astronauts Michael J. Massimino and James H. Newman were making their second extravehicular activity (EVA) of their mission when astronaut Massimino, mission specialist, peered into Columbia's crew cabin during a brief break from work on the Hubble Space Telescope (HST). The HST is latched down just a few feet behind him in Columbia's cargo bay. The Space Shuttle Columbia STS-109 mission lifted off March 1, 2002 with goals of repairing and upgrading the Hubble Space Telescope (HST). STS-109 upgrades to the HST included: replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. The Marshall Space Flight Center in Huntsville, Alabama had the responsibility for the design, development, and construction of the HST, which is the most powerful and sophisticated telescope ever built. Lasting 10 days, 22 hours, and 11 minutes, the STS-109 mission was the 108th flight overall in NASA's Space Shuttle Program.

  18. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design, which combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light-staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high-dynamic-range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable-gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance, and multispectral military systems.
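
    The dynamic-range figures quoted above follow the usual 20·log10 convention for image sensors; a quick sketch makes the contrast ratios concrete (the convention itself is an assumption here, as the paper may define dynamic range slightly differently).

```python
import math

def dynamic_range_db(brightest, dimmest):
    """Optical dynamic range in dB from a brightest/dimmest signal ratio."""
    return 20.0 * math.log10(brightest / dimmest)

def contrast_ratio(db):
    """Inverse: brightness ratio corresponding to a dB figure."""
    return 10 ** (db / 20.0)

print(round(contrast_ratio(51.3)))         # a 51.3 dB sensor spans about 367:1
print(round(contrast_ratio(82.06)))        # 82.06 dB is roughly 12,700:1
print(round(dynamic_range_db(367, 1), 1))  # back to about 51.3 dB
```

The roughly 30 dB gap between the two figures is why the three-target scene saturates or buries detail on the CMOS sensor alone.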

  19. Testing and Performance Validation of a Sensitive Gamma Ray Camera Designed for Radiation Detection and Decommissioning Measurements in Nuclear Facilities-13044

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mason, John A.; Looman, Marc R.; Poundall, Adam J.

    2013-07-01

    This paper describes the measurements, testing, and performance validation of a sensitive gamma ray camera designed for radiation detection and quantification in the environment and for decommissioning and hold-up measurements in nuclear facilities. The instrument, known as RadSearch, combines a sensitive and highly collimated LaBr3 scintillation detector with an optical (video) camera with controllable zoom and focus and a laser range finder in one detector head. The LaBr3 detector has a typical energy resolution of between 2.5% and 3% at the 662 keV energy of Cs-137, compared to typically 7% to 8% for NaI detectors at the same energy. At this energy the tungsten shielding of the detector provides a shielding ratio of greater than 900:1 in the forward direction and 100:1 on the sides and from the rear. The detector head is mounted on a pan/tilt mechanism with a range of motion of ±180 degrees (pan) and ±90 degrees (tilt), equivalent to 4π steradians. The detector head with pan/tilt is normally mounted on a tripod or wheeled cart. It can also be mounted on vehicles or a mobile robot for access to high dose-rate areas and areas with high levels of contamination. Ethernet connects RadSearch to a ruggedized notebook computer from which it is operated and controlled. Power can be supplied either as 24 volts DC from a battery or as 50 volts DC from a small mains (110 or 230 VAC) power supply unit co-located with the controlling notebook computer. In the latter case both power and Ethernet are supplied through a single cable that can be up to 80 metres in length. If a local battery supplies power, the unit can be controlled through wireless Ethernet. Both manual operation and automatic scanning of surfaces and objects are available through the software interface on the notebook computer. For each scan element making up part of an overall scanned area, the unit measures a gamma ray spectrum. Multiple radionuclides may be selected by the operator and will be identified if present. In scanning operation the unit scans a designated region and superimposes the distribution of measured radioactivity over a video image. For the total scanned area or object, RadSearch determines the total activity of operator-selected radionuclides present and the gamma dose-rate measured at the detector head. Results of hold-up measurements made in a nuclear facility are presented, as are test measurements of point sources distributed arbitrarily on surfaces. These latter results are compared with the results of benchmarked MCNP Monte Carlo calculations. The use of the device for hold-up and decommissioning measurements is validated. (authors)

  20. Television Cameras in Congress. Freedom of Information Center Report No. 483.

    ERIC Educational Resources Information Center

    Watt, Phyllis

    While the United States Senate debates the merits of televising its proceedings, it might consider as a model the House of Representatives, which has televised floor activities since 1979 with no dramatic changes in those activities or in members' behavior. The House system consists of inconspicuously placed cameras and microphones operated by…

  1. Geomorphologic mapping of the lunar crater Tycho and its impact melt deposits

    NASA Astrophysics Data System (ADS)

    Krüger, T.; van der Bogert, C. H.; Hiesinger, H.

    2016-07-01

    Using SELENE/Kaguya Terrain Camera and Lunar Reconnaissance Orbiter Camera (LROC) data, we produced a new, high-resolution (10 m/pixel), geomorphological and impact melt distribution map for the lunar crater Tycho. The distal ejecta blanket and crater rays were investigated using LROC wide-angle camera (WAC) data (100 m/pixel), while the fine-scale morphologies of individual units were documented using high resolution (∼0.5 m/pixel) LROC narrow-angle camera (NAC) frames. In particular, Tycho shows a large coherent melt sheet on the crater floor, melt pools and flows along the terraced walls, and melt pools on the continuous ejecta blanket. The crater floor of Tycho exhibits three distinct units, distinguishable by their elevation and hummocky surface morphology. The distribution of impact melt pools and ejecta, as well as topographic asymmetries, support the formation of Tycho as an oblique impact from the W-SW. The asymmetric ejecta blanket, significantly reduced melt emplacement uprange, and the depressed uprange crater rim at Tycho suggest an impact angle of ∼25-45°.

  2. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    NASA Technical Reports Server (NTRS)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.
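
    Of the techniques listed, the automatic gain control can be sketched as a simple feedback step that pulls the scaled peak video signal toward a target level. The update rule and constants below are illustrative assumptions, not the paper's circuit.

```python
def agc_step(gain, peak, target=1.0, rate=0.2):
    """Nudge the gain so that gain*peak moves toward the target level."""
    error = target - gain * peak
    return gain + rate * error / peak

gain = 1.0
raw_peak = 0.25            # dim scene: raw peak video at 25% of full scale
for _ in range(40):        # iterate the feedback step, e.g. once per scan line
    gain = agc_step(gain, raw_peak)

print(round(gain * raw_peak, 3))  # scaled peak has converged to the 1.0 target
```

Each step closes 20% of the remaining error, so the loop converges geometrically; a real video AGC would also limit the gain range and track the peak over a window of samples.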

  3. Multi-camera synchronization core implemented on USB3 based FPGA platform

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame periods and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of 3D stereo vision equipment smaller than 3 mm in diameter for medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
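
    The frequency-regulation idea described above can be sketched as a simple proportional loop: measure the line period, compare it with the target, and nudge the supply voltage. The linear voltage-to-period model, the target period, and the gain are illustrative assumptions, not Awaiba specifications.

```python
TARGET_PERIOD_US = 32.0    # desired line period in microseconds (assumed value)

def line_period(voltage):
    """Toy sensor model: a higher supply voltage gives a shorter line period."""
    return 48.0 - 8.0 * voltage     # microseconds; plausible only near 2 V

def regulate(voltage, gain=0.02, steps=200):
    """Proportional loop: period too long -> raise the voltage, and vice versa."""
    for _ in range(steps):
        error = line_period(voltage) - TARGET_PERIOD_US
        voltage += gain * error
    return voltage

v = regulate(1.8)
print(round(v, 3), round(line_period(v), 3))  # 2.0 V gives the 32 us target
```

In the actual system each Slave would run such a loop against the Master's measured line period rather than a fixed constant, which is what yields both frequency and phase lock.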

  4. Image synchronization for 3D application using the NanEye sensor

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Based on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a novel technique to perfectly synchronize up to 8 individual self-timed cameras. Minimal-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not generally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera to synchronize their frame rate and frame phase. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames of multiple cameras, a Master-Slave interface was implemented. A single camera is defined as the Master entity, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame periods and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the realization of 3D stereo vision equipment smaller than 3 mm in diameter for medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.

  5. General-Purpose Serial Interface For Remote Control

    NASA Technical Reports Server (NTRS)

    Busquets, Anthony M.; Gupton, Lawrence E.

    1990-01-01

    Computer controls remote television camera. General-purpose controller developed to serve as interface between host computer and pan/tilt/zoom/focus functions on series of automated video cameras. Interface port based on 8251 programmable communications-interface circuit configured for tristated outputs, and connects controller system to any host computer with RS-232 input/output (I/O) port. Accepts byte-coded data from host, compares them with prestored codes in read-only memory (ROM), and closes or opens appropriate switches. Six output ports control opening and closing of as many as 48 switches. Operator controls remote television camera by speaking commands, in system including general-purpose controller.
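
    The controller's compare-against-ROM behaviour can be sketched as a code-table dispatch: each host byte is looked up and mapped to opening or closing one of the 48 switches. The byte codes and switch names below are invented for illustration.

```python
# Invented code table standing in for the controller's ROM: each host byte
# maps to (switch name, action). A real unit would cover all 48 switches.
CODE_TABLE = {
    0x10: ("pan_left", "close"),
    0x11: ("pan_left", "open"),
    0x20: ("zoom_in", "close"),
    0x21: ("zoom_in", "open"),
}

switches = {name: False for name, _ in CODE_TABLE.values()}

def handle_byte(code):
    """Look up a host byte and actuate the corresponding switch."""
    if code not in CODE_TABLE:
        return False                 # unknown code: ignore, as a ROM miss would
    name, action = CODE_TABLE[code]
    switches[name] = (action == "close")
    return True

for byte in (0x10, 0x20, 0x11, 0xFF):   # host command stream; 0xFF is invalid
    handle_byte(byte)
print(switches)  # {'pan_left': False, 'zoom_in': True}
```

Pairing a distinct "open" code with every "close" code mirrors the momentary-switch style of pan/tilt/zoom control, where motion continues until the release code arrives.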

  6. Murine fundus fluorescein angiography: An alternative approach using a handheld camera.

    PubMed

    Ehrenberg, Moshe; Ehrenberg, Scott; Schwob, Ouri; Benny, Ofra

    2016-07-01

    In the modern pharmacologic approach to treating sight-threatening retinal vascular disorders, there is an increasing demand for a compact, mobile, lightweight, and cost-effective fluorescein fundus camera to document the effects of antiangiogenic drugs on laser-induced choroidal neovascularization (CNV) in mice and other experimental animals. We have adapted the Kowa Genesis Df Camera to perform Fundus Fluorescein Angiography (FFA) in mice. The 1 kg, 28 cm high camera has built-in barrier and exciter filters to allow digital FFA recording to a Compact Flash memory card. Furthermore, this handheld unit has a steady Indirect Lens Holder that attaches firmly to the main unit and holds a 90-diopter lens securely in position, in order to facilitate appropriate focus and stability when photographing the delicate central murine fundus. This easily portable fundus fluorescein camera can effectively record exceptional central retinal vascular detail in murine laser-induced CNV, while readily allowing the investigator to adjust the camera's position according to the variable head and eye movements that can occur randomly while the mouse is optimally anesthetized. This movable image recording device, with its economies of space, time, cost, energy, and personnel, has enabled us to accurately document the alterations in the central choroidal and retinal vasculature following induction of CNV, implemented by argon-green laser photocoagulation and disruption of Bruch's Membrane, in the experimental murine model of exudative macular degeneration. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. The TESS camera: modeling and measurements with deep depletion devices

    NASA Astrophysics Data System (ADS)

    Woods, Deborah F.; Vanderspek, Roland; MacDonald, Robert; Morgan, Edward; Villasenor, Joel; Thayer, Carolyn; Burke, Barry; Chesbrough, Christian; Chrisp, Michael; Clark, Kristin; Furesz, Gabor; Gonzales, Alexandria; Nguyen, Tam; Prigozhin, Gregory; Primeau, Brian; Ricker, George; Sauerwein, Timothy; Suntharalingam, Vyshnavi

    2016-07-01

    The Transiting Exoplanet Survey Satellite, a NASA Explorer-class mission in development, will discover planets around nearby stars, most notably Earth-like planets with potential for follow up characterization. The all-sky survey requires a suite of four wide field-of-view cameras with sensitivity across a broad spectrum. Deep depletion CCDs with a silicon layer of 100 μm thickness serve as the camera detectors, providing enhanced performance in the red wavelengths for sensitivity to cooler stars. The performance of the camera is critical for the mission objectives, with both the optical system and the CCD detectors contributing to the realized image quality. Expectations for image quality are studied using a combination of optical ray tracing in Zemax and simulations in Matlab to account for the interaction of the incoming photons with the 100 μm silicon layer. The simulations include a probabilistic model to determine the depth of travel in the silicon before the photons are converted to photo-electrons, and a Monte Carlo approach to charge diffusion. The charge diffusion model varies with the remaining depth for the photo-electron to traverse and the strength of the intermediate electric field. The simulations are compared with laboratory measurements acquired by an engineering unit camera with the TESS optical design and deep depletion CCDs. In this paper we describe the performance simulations and the corresponding measurements taken with the engineering unit camera, and discuss where the models agree well in predicted trends and where there are differences compared to observations.
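
    The two simulation ingredients named above — a probabilistic conversion depth and charge diffusion that grows with the remaining drift distance — can be sketched as follows. The absorption length, the square-root diffusion law, and the constant k are illustrative assumptions, not TESS model parameters.

```python
import math
import random

THICKNESS_UM = 100.0      # deep-depletion silicon layer from the abstract

def conversion_depth(absorption_length_um, rng):
    """Depth at which the photon converts; redraw if it would pass through."""
    while True:
        depth = rng.expovariate(1.0 / absorption_length_um)
        if depth < THICKNESS_UM:
            return depth

def diffused_position(x0, depth, rng, k=0.3):
    """Lateral landing position: sigma grows with the remaining drift distance."""
    remaining = THICKNESS_UM - depth
    sigma = k * math.sqrt(remaining)
    return rng.gauss(x0, sigma)

rng = random.Random(42)
positions = [diffused_position(0.0, conversion_depth(20.0, rng), rng)
             for _ in range(20000)]
spread = (sum(p * p for p in positions) / len(positions)) ** 0.5
print(round(spread, 2))  # RMS lateral charge spread in microns
```

Because red photons convert deeper in the silicon, their remaining drift distance is shorter, so this kind of model predicts tighter charge clouds at longer wavelengths — one of the trends the engineering-unit measurements can confirm or refute.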

  8. The imaging system design of three-line LMCCD mapping camera

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da

    2011-08-01

    In this paper, the authors first introduce the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. Next, several pivotal designs of the imaging system are presented, including the focal plane module, video signal processing, the imaging system controller, and synchronous photography between the forward, nadir, and backward cameras and the line-matrix CCD of the nadir camera. Finally, test results for the LMCCD mapping camera imaging system are reported. The results are as follows: the precision of synchronous photography between the forward, nadir, and backward cameras is better than 4 ns, as is that of the line-matrix CCD of the nadir camera; the photography interval of the line-matrix CCD of the nadir camera satisfies the buffer requirements of the LMCCD focal plane module; the SNR tested in the laboratory is better than 95 for each CCD image under typical working conditions (solar incidence angle of 30°, surface reflectivity of 0.3); and the temperature of the focal plane module is controlled below 30 °C over a 15-minute working period. These results satisfy the requirements for synchronous photography, focal plane module temperature control, and SNR, guaranteeing the precision needed for satellite photogrammetry.

  9. New method for obtaining position and time structure of source in HDR remote afterloading brachytherapy unit utilizing light emission from scintillator

    PubMed Central

    Hanada, Takashi; Katsuta, Shoichi; Yorozu, Atsunori; Maruyama, Koichi

    2009-01-01

    When using an HDR remote afterloading brachytherapy unit, results of treatment can be greatly influenced by both source position and treatment time. The purpose of this study is to obtain information on the source of the HDR remote afterloading unit, such as its position and time structure, with the use of a simple system consisting of a plastic scintillator block and a charge-coupled device (CCD) camera. The CCD camera was used for recording images of scintillation luminescence at a fixed rate of 30 frames per second in real time. The source position and time structure were obtained by analyzing the recorded images. For a preset source-step interval of 5 mm, the measured value of the source position was 5.0 ± 1.0 mm, with a pixel resolution of 0.07 mm in the recorded images. For a preset transit time of 30 s, the measured value was 30.0 ± 0.6 s, where the time resolution of the CCD camera was 1/30 s. This system enabled us to obtain the source dwell time and movement time. Therefore, parameters such as the Ir-192 source position, transit time, dwell time, and movement time at each dwell position can be determined quantitatively using this plastic scintillator-CCD camera system. PACS number: 87.53.Jw
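
    At 30 frames per second, dwell and movement times follow from counting consecutive frames in which the source centroid stays within a position tolerance. A sketch with synthetic frame positions (the tolerance value and the trace itself are assumptions for illustration):

```python
FRAME_RATE = 30.0  # frames per second, as in the paper

def dwell_times(positions_mm, tolerance_mm=0.5):
    """Group consecutive frames by source position and time each group."""
    groups = []
    for pos in positions_mm:
        if groups and abs(groups[-1][0] - pos) <= tolerance_mm:
            groups[-1][1] += 1      # same dwell position: extend the group
        else:
            groups.append([pos, 1]) # source moved: start a new group
    return [(pos, count / FRAME_RATE) for pos, count in groups]

# Synthetic trace: 90 frames dwelling at 0 mm, 4 frames in transit, 90 at 5 mm
trace = [0.0] * 90 + [1.0, 2.0, 3.0, 4.0] + [5.0] * 90
for pos, seconds in dwell_times(trace):
    print(pos, round(seconds, 2))
```

The 90-frame groups come out as 3.00 s dwells, while the four transit frames appear as short single-frame groups whose total is the movement time; the 1/30 s frame period sets the timing resolution, matching the ±0.6 s-class uncertainties quoted above only after averaging over many frames.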

  10. LED characterization for development of on-board calibration unit of CCD-based advanced wide-field sensor camera of Resourcesat-2A

    NASA Astrophysics Data System (ADS)

    Chatterjee, Abhijit; Verma, Anurag

    2016-05-01

    The Advanced Wide Field Sensor (AWiFS) camera caters to the high temporal resolution requirement of the Resourcesat-2A mission, with a revisit of 5 days. The AWiFS camera consists of four spectral bands, three in the visible and near IR and one in the short-wave infrared. The imaging concept in the VNIR bands is based on push-broom scanning using a linear-array silicon charge-coupled device (CCD) based focal plane array (FPA). The on-board calibration unit for these CCD-based FPAs is used to monitor any degradation in the FPA during the entire mission life. Four LEDs are operated in constant-current mode, and 16 different light intensity levels are generated by electronically changing the exposure of the CCD throughout the calibration cycle. This paper describes the experimental setup and characterization results of various flight-model visible LEDs (λp = 650 nm) for development of the on-board calibration unit of the Advanced Wide Field Sensor (AWiFS) camera of Resourcesat-2A. Various LED configurations have been studied to cover the dynamic range of the 6000-pixel silicon CCD focal plane array from 20% to 60% of saturation during the night pass of the satellite, in order to identify degradation of detector elements. The paper also compares simulated and experimental CCD output profiles for different LED combinations in constant-current mode.
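
    With the LEDs held at constant current, the CCD signal scales with exposure time, so the 16 calibration levels between 20% and 60% of saturation can be generated as an evenly spaced exposure ladder. The 12-bit full-well value used below is an illustrative assumption, not a Resourcesat-2A parameter.

```python
LEVELS = 16  # intensity levels per calibration cycle, as in the abstract

def exposure_ladder(min_fill=0.20, max_fill=0.60, full_well=4096.0):
    """Evenly spaced signal levels between min_fill and max_fill of saturation."""
    step = (max_fill - min_fill) / (LEVELS - 1)
    return [full_well * (min_fill + i * step) for i in range(LEVELS)]

signals = exposure_ladder()
print(round(signals[0]), round(signals[-1]), len(signals))  # 819 2458 16
```

Stepping exposure electronically rather than varying LED current keeps the source radiometrically stable, so any drift seen across the ladder is attributable to the detector.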

  11. Simulation-based camera navigation training in laparoscopy-a randomized trial.

    PubMed

    Nilsson, Cecilia; Sorensen, Jette Led; Konge, Lars; Westen, Mikkel; Stadeager, Morten; Ottesen, Bent; Bjerrum, Flemming

    2017-05-01

    Inexperienced operating assistants are often tasked with the important role of handling camera navigation during laparoscopic surgery. Incorrect handling can lead to poor visualization, increased operating time, and frustration for the operating surgeon, all of which can compromise patient safety. The objectives of this trial were to examine how to train laparoscopic camera navigation and to explore the transfer of skills to the operating room. A randomized, single-center superiority trial with three groups: the first group practiced simulation-based camera navigation tasks (camera group), the second group practiced performing a simulation-based cholecystectomy (procedure group), and the third group received no training (control group). Participants were surgical novices without prior laparoscopic experience. The primary outcome was assessment of camera navigation skills during a laparoscopic cholecystectomy. The secondary outcome was technical skills after training, using a previously developed model for testing camera navigational skills. The exploratory outcome measured participants' motivation toward the task as an operating assistant. Thirty-six participants were randomized. No significant difference was found in the primary outcome between the three groups (p = 0.279). The secondary outcome showed no significant difference between the intervention groups, with total times of 167 s (95% CI, 118-217) and 194 s (95% CI, 152-236) for the camera group and the procedure group, respectively (p = 0.369). Both intervention groups were significantly faster than the control group, 307 s (95% CI, 202-412), p = 0.018 and p = 0.045, respectively. On the exploratory outcome, the control group scored higher on two dimensions: interest/enjoyment (p = 0.030) and perceived choice (p = 0.033). Simulation-based training improves the technical skills required for camera navigation, regardless of whether camera navigation or the procedure itself is practiced. Transfer to the clinical setting, however, could not be demonstrated. The control group demonstrated higher interest/enjoyment and perceived choice than the camera group.

  12. RealityFlythrough: Enhancing Situational Awareness for Medical Response to Disasters Using Ubiquitous Video

    PubMed Central

    McCurdy, Neil J.; Griswold, William G.; Lenert, Leslie A.

    2005-01-01

    The first moments at a disaster scene are chaotic. The command center initially operates with little knowledge of hazards, geography, and casualties, building up knowledge of the event slowly as information trickles in over voice radio channels. RealityFlythrough is a tele-presence system that stitches together live video feeds in real time, using the principle of visual closure, to give command center personnel the illusion of being able to explore the scene interactively by moving smoothly between the video feeds. Using RealityFlythrough, medical, fire, law enforcement, hazardous materials, and engineering experts may be able to achieve situational awareness earlier and better manage scarce resources. The RealityFlythrough system is composed of camera units with off-the-shelf GPS and orientation systems and a server/viewing station that offers access to images collected by the camera units in real time, indexed by position and orientation. In initial field testing using an experimental mesh 802.11 wireless network, two camera unit operators were able to create an interactive image of a simulated disaster scene in about five minutes. PMID:16779092

  13. International testing of a Mars rover prototype

    NASA Astrophysics Data System (ADS)

    Kemurjian, Alexsandr Leonovich; Linkin, V.; Friedman, L.

    1993-03-01

    Tests on a prototype engineering model of the Russian Mars 96 Rover were conducted by an international team in and near Death Valley in the United States in late May, 1992. These tests were part of a comprehensive design and testing program initiated by the three Russian groups responsible for the rover development. The specific objectives of the May tests were: (1) evaluate rover performance over different Mars-like terrains; (2) evaluate state-of-the-art teleoperation and autonomy development for Mars rover command, control and navigation; and (3) organize an international team to contribute expertise and capability on the rover development for the flight project. The range and performance that can be planned for the Mars mission is dependent on the degree of autonomy that will be possible to implement on the mission. Current plans are for limited autonomy, with Earth-based teleoperation for the nominal navigation system. Several types of television systems are being investigated for inclusion in the navigation system including panoramic camera, stereo, and framing cameras. The tests used each of these in teleoperation experiments. Experiments were included to consider use of such TV data in autonomy algorithms. Image processing and some aspects of closed-loop control software were also tested. A micro-rover was tested to help consider the value of such a device as a payload supplement to the main rover. The concept is for the micro-rover to serve like a mobile hand, with its own sensors including a television camera.

  14. Flexible mobile robot system for smart optical pipe inspection

    NASA Astrophysics Data System (ADS)

    Kampfer, Wolfram; Bartzke, Ralf; Ziehl, Wolfgang

    1998-03-01

    Damage to pipes can be inspected and graded with TV technology available on the market: remotely controlled vehicles carry a TV camera through the pipes. However, diagnostic failures cannot be avoided, since results depend on the experience and capability of the operator. The classification of damage requires knowledge of the exact geometrical dimensions of defects, such as the width and depth of cracks, fractures, and defective connections. Within the framework of a joint R&D project, a sensor-based pipe inspection system named RODIAS has been developed with two partners from industry and a research institute. It consists of a remotely controlled mobile robot that carries intelligent sensors for on-line sewer inspection, combining a 3D optical sensor with a laser distance sensor. The laser distance sensor is integrated into the optical system of the camera and can measure the distance between camera and object. The angle of view can be determined from the position of the pan-and-tilt unit. With coordinate transformations it is possible to calculate the spatial coordinates of every point in the video image, so the geometry of an object can be described exactly. The company Optimess has developed TriScan32, a special software package for pipe condition classification. The user can start complex measurements of profiles, pipe displacements, or crack widths simply by pressing a push-button. The measuring results are stored in a database together with other data, such as verbal damage descriptions and digitized images.
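    The coordinate transformation described above, turning a laser range reading plus pan/tilt angles into spatial coordinates, is essentially a spherical-to-Cartesian conversion. A minimal sketch (the axis convention is an assumption, not taken from RODIAS):

```python
import math

def camera_point_to_xyz(distance, pan_deg, tilt_deg):
    """Convert a range reading plus pan/tilt angles into Cartesian coordinates
    relative to the pan-and-tilt unit.

    Assumed convention: pan rotates about the vertical z-axis, tilt is the
    elevation above the horizontal plane.
    """
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = distance * math.cos(tilt) * math.cos(pan)
    y = distance * math.cos(tilt) * math.sin(pan)
    z = distance * math.sin(tilt)
    return x, y, z

def crack_width(p1, p2):
    """Euclidean distance between two measured surface points."""
    return math.dist(p1, p2)

a = camera_point_to_xyz(0.50, 0.0, 0.0)   # straight ahead at 0.5 m
b = camera_point_to_xyz(0.50, 2.0, 0.0)   # panned 2 degrees at same range
print(round(crack_width(a, b) * 1000, 1), "mm")
```

    Measuring two edge points of a crack this way yields its width directly in metric units, independent of image scale.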

  15. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  16. An arc control and protection system for the JET lower hybrid antenna based on an imaging system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueiredo, J., E-mail: joao.figueiredo@jet.efda.org; Mailloux, J.; Kirov, K.

    Arcs are the potentially most dangerous events related to Lower Hybrid (LH) antenna operation. If left uncontrolled they can produce damage and cause plasma disruption by impurity influx. To address this issue an arc real time control and protection imaging system for the Joint European Torus (JET) LH antenna has been implemented. The LH system is one of the additional heating systems at JET. It comprises 24 microwave generators (klystrons, operating at 3.7 GHz) providing up to 5 MW of heating and current drive to the JET plasma. This is done through an antenna composed of an array of waveguides facing the plasma. The protection system presented here is based primarily on an imaging arc detection and real time control system. It has adapted the ITER like wall hotspot protection system using an identical CCD camera and real time image processing unit. A filter has been installed to avoid saturation and spurious system triggers caused by ionization light. The antenna is divided in 24 Regions Of Interest (ROIs) each one corresponding to one klystron. If an arc precursor is detected in a ROI, power is reduced locally with subsequent potential damage and plasma disruption avoided. The power is subsequently reinstated if, during a defined interval of time, arcing is confirmed not to be present by image analysis. This system was successfully commissioned during the restart phase and beginning of the 2013 scientific campaign. Since its installation and commissioning, arcs and related phenomena have been prevented. In this contribution we briefly describe the camera, image processing, and real time control systems. Most importantly, we demonstrate that an LH antenna arc protection system based on CCD camera imaging systems works. Examples of both controlled and uncontrolled LH arc events and their consequences are shown.

  17. ChemCam Mast Unit Being Prepared for Laser Firing

    NASA Image and Video Library

    2010-12-23

    Researchers prepare for a test of the Chemistry and Camera (ChemCam) instrument that will fly on NASA's Mars Science Laboratory mission; researchers are preparing the instrument mast unit for a laser firing test.

  18. Stereo and photometric image sequence interpretation for detecting negative obstacles using active gaze control and performing an autonomous jink

    NASA Astrophysics Data System (ADS)

    Hofmann, Ulrich; Siedersberger, Karl-Heinz

    2003-09-01

    Driving cross-country, the detection and state estimation of negative obstacles like ditches and creeks is mandatory for safe operation. Very often, ditches can be detected both by different photometric properties (soil vs. vegetation) and by range (disparity) discontinuities. Therefore, algorithms should make use of both the photometric and geometric properties to reliably detect obstacles. This has been achieved in UBM's EMS-Vision System (Expectation-based, Multifocal, Saccadic) for autonomous vehicles. The perception system uses Sarnoff's image processing hardware for real-time stereo vision. This sensor provides both gray value and disparity information for each pixel at high resolution and frame rates. In order to perform an autonomous jink, the boundaries of an obstacle have to be measured accurately so that a safe driving trajectory can be calculated. Ditches in particular are often very extended, so given the restricted field of view of the cameras, active gaze control is necessary to explore the boundaries of an obstacle. For successful measurement of image features the system has to satisfy conditions defined by the perception expert. It has to deal with the time constraints of the active camera platform while performing saccades and to keep the geometric conditions defined by the locomotion expert for performing a jink. Therefore, the experts have to cooperate. This cooperation is controlled by a central decision unit (CD), which has knowledge about the mission, the capabilities available in the system, and their limitations. The central decision unit reacts depending on the result of situation assessment by starting, parameterizing, or stopping actions (instances of capabilities). The approach has been tested with the 5-ton van VaMoRs. Experimental results are shown for driving in a typical off-road scenario.

  19. Inventory of terrestrial mammals in the Rincon Mountains using camera traps

    Treesearch

    Don E. Swann; Nic Perkins

    2013-01-01

    The Sky Island region of the southwestern United States and northwestern Mexico is well-known for its diversity of mammals, including endemic species and species representing several different biogeographic provinces. Camera trap studies have provided important insight into mammalian distribution and diversity in the Sky Islands in recent years, but few studies have...

  20. LOFT. Interior, control room in control building (TAN630). Camera facing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LOFT. Interior, control room in control building (TAN-630). Camera facing north. Sign says "This control console is partially active. Do not operate any switch handle without authorization." Date: May 2004. INEEL negative no. HD-39-14-3 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  1. HERCULES/MSI: a multispectral imager with geolocation for STS-70

    NASA Astrophysics Data System (ADS)

    Simi, Christopher G.; Kindsfather, Randy; Pickard, Henry; Howard, William, III; Norton, Mark C.; Dixon, Roberta

    1995-11-01

    A multispectral intensified CCD imager combined with a ring laser gyroscope based inertial measurement unit was flown on the Space Shuttle Discovery from July 13-22, 1995 (Space Transport System Flight No. 70, STS-70). The camera includes a six position filter wheel, a third generation image intensifier, and a CCD camera. The camera is integrated with a laser gyroscope system that determines the ground position of the imagery to an accuracy of better than three nautical miles. The camera has two modes of operation: a panchromatic mode for high-magnification imaging [ground sample distance (GSD) of 4 m], or a multispectral mode consisting of six different user-selectable spectral ranges at reduced magnification (12 m GSD). This paper discusses the system hardware and technical trade-offs involved with camera optimization, and presents imagery observed during the shuttle mission.

  2. Feasibility evaluation and study of adapting the attitude reference system to the Orbiter camera payload system's large format camera

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A design concept that will implement a mapping capability for the Orbital Camera Payload System (OCPS) when ground control points are not available is discussed. Through the use of stellar imagery collected by a pair of cameras whose optical axes are structurally related to the large format camera optical axis, such pointing information is made available.

  3. So you want to conduct a randomised trial? Learnings from a 'failed' feasibility study of a Crisis Resource Management prompt during simulated paediatric resuscitation.

    PubMed

    Teis, Rachel; Allen, Jyai; Lee, Nigel; Kildea, Sue

    2017-02-01

    No study has tested a Crisis Resource Management prompt on resuscitation performance. We conducted a feasibility, unblinded, parallel-group, randomised controlled trial at one Australian paediatric hospital (June-September 2014). Eligible participants were any doctor, nurse, or nurse manager who would normally be involved in a Medical Emergency Team simulation. The unit of block randomisation was one of six scenarios (3 control:3 intervention) with or without a verbal prompt. The primary outcomes tested the feasibility and utility of the intervention and data collection tools. The secondary outcomes measured resuscitation quality and team performance. Data were analysed from six resuscitation scenarios (n=49 participants); three control groups (n=25) and three intervention groups (n=24). The ability to measure all data items on the data collection tools was hindered by problems with the recording devices both in the mannequins and the video camera. For a pilot study, greater training for the prompt role and pre-briefing participants about assessment of their cardio-pulmonary resuscitation quality should be undertaken. Data could be analysed in real time with independent video analysis to validate findings. Two cameras would strengthen reliability of the methods. Copyright © 2016 College of Emergency Nursing Australasia. Published by Elsevier Ltd. All rights reserved.

  4. In-flight photogrammetric camera calibration and validation via complementary lidar

    NASA Astrophysics Data System (ADS)

    Gneeniss, A. S.; Mills, J. P.; Miller, P. E.

    2015-02-01

    This research assumes lidar as a reference dataset against which in-flight camera system calibration and validation can be performed. The methodology utilises a robust least squares surface matching algorithm to align a dense network of photogrammetric points to the lidar reference surface, allowing for the automatic extraction of so-called lidar control points (LCPs). Adjustment of the photogrammetric data is then repeated using the extracted LCPs in a self-calibrating bundle adjustment with additional parameters. This methodology was tested using two different photogrammetric datasets, a Microsoft UltraCamX large format camera and an Applanix DSS322 medium format camera. Systematic sensitivity testing explored the influence of the number and weighting of LCPs. For both camera blocks it was found that as the number of control points increases, the accuracy improves regardless of point weighting. The calibration results were compared with those obtained using ground control points, with good agreement found between the two.
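    The role of control-point weighting in a least-squares adjustment can be illustrated with a toy weighted line fit. This is a stand-in for the paper's self-calibrating bundle adjustment, not its actual formulation:

```python
import numpy as np

def weighted_line_fit(x, y, w):
    """Fit y = a*x + b by minimizing sum(w_i * (a*x_i + b - y_i)^2).

    Higher weights on selected observations (analogous to lidar control
    points) pull the solution toward those points.
    """
    W = np.diag(w)
    A = np.column_stack([x, np.ones_like(x)])
    # Normal equations of the weighted adjustment: (A^T W A) p = A^T W y
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.1])      # last observation slightly off the line
equal = weighted_line_fit(x, y, np.ones(4))
heavy_last = weighted_line_fit(x, y, np.array([1.0, 1.0, 1.0, 100.0]))
```

    With equal weights the residual is spread over all points; up-weighting the last observation makes the fitted line pass nearly through it, mirroring how heavily weighted LCPs anchor the photogrammetric block to the lidar surface.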

  5. Astronaut Alan Bean flies the Astronaut Maneuvering Equipment in the OWS

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Astronaut Alan L. Bean, Skylab 3 commander, flies the M509 Astronaut Maneuvering Equipment, as seen in this photographic reproduction taken from a television transmission made by a color television camera in the Orbital Workshop (OWS) of the Skylab space station in Earth orbit. Bean is strapped into the back-mounted, hand-controlled Automatically Stabilized Maneuvering Unit (ASMU). The M509 exercise was in the forward dome area of the OWS. The dome area is about 22 feet in diameter and 19 feet from top to bottom.

  6. Video stroke assessment (VSA) project: design and production of a prototype system for the remote diagnosis of stroke

    NASA Astrophysics Data System (ADS)

    Urias, Adrian R.; Draghic, Nicole; Lui, Janet; Cho, Angie; Curtis, Calvin; Espinosa, Joseluis; Wottawa, Christopher; Wiesmann, William P.; Schwamm, Lee H.

    2005-04-01

    Stroke remains the third most frequent cause of death in the United States and the leading cause of disability in adults. Long-term effects of ischemic stroke can be mitigated by the opportune administration of Tissue Plasminogen Activator (t-PA); however, the decision regarding the appropriate use of this therapy is dependent on timely, effective neurological assessment by a trained specialist. The lack of available stroke expertise is a key barrier preventing frequent use of t-PA. We report here on the development of a prototype research system capable of performing a semi-automated neurological examination from an offsite location via the Internet and a Computed Tomography (CT) scanner to facilitate the diagnosis and treatment of acute stroke. The Video Stroke Assessment (VSA) System consists of a video camera, a camera mounting frame, and a computer with software and algorithms to collect, interpret, and store patient neurological responses to stimuli. The video camera is mounted on a mobility track in front of the patient; camera direction and zoom are remotely controlled on a graphical user interface (GUI) by the specialist. The VSA System also performs a partially autonomous examination based on the NIH Stroke Scale (NIHSS). Various response data indicative of stroke are recorded, analyzed, and transmitted in real time to the specialist. The VSA provides unbiased, quantitative results for most categories of the NIHSS along with video and audio playback to assist in accurate diagnosis. The system archives the complete exam and results.

  7. Impact of autofluorescence-based identification of parathyroids during total thyroidectomy on postoperative hypocalcemia: a before and after controlled study.

    PubMed

    Benmiloud, Fares; Rebaudet, Stanislas; Varoquaux, Arthur; Penaranda, Guillaume; Bannier, Marie; Denizot, Anne

    2018-01-01

    The clinical impact of intraoperative autofluorescence-based identification of parathyroids using a near-infrared camera remains unknown. In a before and after controlled study, we compared all patients who underwent total thyroidectomy by the same surgeon during Period 1 (January 2015 to January 2016) without near-infrared (near-infrared- group) and those operated on during Period 2 (February 2016 to September 2016) using a near-infrared camera (near-infrared+ group). In parallel, we also compared all patients who underwent surgery without near-infrared during those same periods by another surgeon in the same unit (control groups). Main outcomes included postoperative hypocalcemia, parathyroid identification, autotransplantation, and inadvertent resection. The near-infrared+ group displayed significantly lower postoperative hypocalcemia rates (5.2%) than the near-infrared- group (20.9%; P < .001). Compared with the near-infrared- patients, the near-infrared+ group exhibited an increased mean number of identified parathyroids and reduced parathyroid autotransplantation rates, although no difference was observed in inadvertent resection rates. Parathyroids were identified via near-infrared before they were visualized by the surgeon in 68% of patients. In the control groups, parathyroid identification improved significantly from Period 1 to Period 2, although autotransplantation, inadvertent resection, and postoperative hypocalcemia rates did not differ. Near-infrared use during total thyroidectomy significantly reduced postoperative hypocalcemia, improved parathyroid identification, and reduced the autotransplantation rate. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Design and Implementation of Multifunctional Automatic Drilling End Effector

    NASA Astrophysics Data System (ADS)

    Wang, Zhanxi; Qin, Xiansheng; Bai, Jing; Tan, Xiaoqun; Li, Jing

    2017-03-01

    In order to realize automatic drilling in aircraft assembly, a drilling end effector is designed by integrating a pressure unit, drilling unit, measurement unit, control system, and frame structure. To reduce hole deviation, this paper proposes a vertical normal adjustment scheme based on 4 laser distance sensors. The actual normal direction of the workpiece surface is calculated from the sensor measurements, and the robot posture is then adjusted to correct the hole deviation. A detection method based on the camera and a reference hole is proposed to detect and locate holes automatically. The experimental results show that the position accuracy of the system is better than 0.3 mm, and the normal precision is better than 0.5°. The drilling end effector and robot can greatly improve drilling efficiency and assembly quality for aircraft parts and reduce the product development cycle.
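    The normal-adjustment idea, recovering the workpiece surface normal from 4 laser distance readings, can be sketched as a least-squares plane fit. The sensor layout below is assumed for illustration, not taken from the paper:

```python
import numpy as np

def surface_normal(sensor_xy, distances):
    """Estimate the workpiece surface normal from 4 laser distance sensors.

    sensor_xy: (4, 2) in-plane sensor positions on the end effector (an
    assumed layout); distances: 4 range readings along the tool axis.
    A plane z = a*x + b*y + c is fit by least squares; the unit normal is
    (-a, -b, 1) normalized.
    """
    A = np.column_stack([sensor_xy[:, 0], sensor_xy[:, 1], np.ones(4)])
    (a, b, c), *_ = np.linalg.lstsq(A, distances, rcond=None)
    n = np.array([-a, -b, 1.0])
    return n / np.linalg.norm(n)

# Sensors on a 100 mm square; equal readings mean the tool axis is already
# perpendicular to the surface, so the normal is (0, 0, 1).
xy = np.array([[50, 0], [0, 50], [-50, 0], [0, -50]], dtype=float)
print(surface_normal(xy, np.array([200.0, 200.0, 200.0, 200.0])))
```

    The angle between this estimated normal and the tool axis gives the posture correction the robot must apply before drilling.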

  9. High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live video color stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears as one of the most appropriate and cheapest solutions to enhance the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple captures, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) Multiple Exposure Control (MEC) dedicated to smart image capture, alternating three exposure times that are dynamically evaluated from frame to frame, (2) Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
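    The exposure merging at the heart of HDR creation can be sketched as follows, assuming a linear sensor response for simplicity (the camera itself uses a hardware variant of Debevec's method, which also recovers the response curve):

```python
import numpy as np

def merge_hdr(frames, exposure_times):
    """Merge LDR frames of the same scene into a relative radiance estimate.

    frames: list of arrays of 8-bit pixel values; exposure_times: matching
    exposure durations. Each pixel is divided by its exposure time and
    combined with a triangle ("hat") weight that trusts mid-range values
    most and discounts under- and over-exposed pixels.
    """
    num = np.zeros(frames[0].shape, dtype=float)
    den = np.zeros_like(num)
    for img, t in zip(frames, exposure_times):
        z = img.astype(float)
        w = np.minimum(z, 255.0 - z)  # hat weighting over [0, 255]
        num += w * (z / t)
        den += w
    return num / np.maximum(den, 1e-9)
```

    A pixel reading 100 at 1 ms and 200 at 2 ms yields the same radiance estimate from both frames, so the merge simply averages them with their hat weights; saturated pixels contribute almost nothing.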

  10. Microprocessor-controlled wide-range streak camera

    NASA Astrophysics Data System (ADS)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OS X) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.
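    Automation access to the XML-based configuration mentioned above might look like the following; the document layout and element names are invented for illustration, as the camera's actual schema is not published:

```python
import xml.etree.ElementTree as ET

# Hypothetical configuration document of the kind an automation client
# might fetch over HTTP before parsing it as below.
SAMPLE_CONFIG = """<camera>
  <sweep><full_sweep_time unit="ns">15</full_sweep_time></sweep>
</camera>"""

def full_sweep(xml_text):
    """Extract the full-sweep time and its unit from a config document."""
    node = ET.fromstring(xml_text).find("sweep/full_sweep_time")
    return float(node.text), node.get("unit")

print(full_sweep(SAMPLE_CONFIG))  # (15.0, 'ns')
```

    Because configuration is plain XML over HTTP, any scripting language with an HTTP client and XML parser can drive the camera without vendor software, which is the point of the browser-based design.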

  11. Detecting method of subjects' 3D positions and experimental advanced camera control system

    NASA Astrophysics Data System (ADS)

    Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi

    1997-04-01

    Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality or tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcast cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcast cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements in studios and during sports broadcasts. We have now developed a method for detecting subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates; (2) the system is capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results are also reported.
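    Feature (1), recovering a subject's 3D coordinates from two sensor cameras, can be sketched with standard parallel-axis stereo triangulation; the geometry and parameter values here are assumptions, as the abstract does not specify them:

```python
def triangulate(xl, xr, y, focal, baseline):
    """Recover a subject's 3D position from two parallel sensor cameras.

    xl, xr: horizontal image coordinates of the same color blob in the left
    and right cameras (pixels); y: vertical coordinate (pixels); focal:
    focal length in pixels; baseline: camera separation in meters.
    """
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("subject must be in front of both cameras")
    Z = focal * baseline / disparity   # depth from disparity
    X = xl * Z / focal                 # lateral offset (left-camera frame)
    Y = y * Z / focal                  # vertical offset
    return X, Y, Z

print(triangulate(xl=120.0, xr=100.0, y=50.0, focal=800.0, baseline=0.5))
# (3.0, 1.25, 20.0)
```

    The resulting trajectory of (X, Y, Z) samples over time is what would drive the robot camera's pan, tilt, and zoom during a 'moving shoot'.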

  12. Virtual Vision

    NASA Astrophysics Data System (ADS)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  13. Watching elderly and disabled person's physical condition by remotely controlled monorail robot

    NASA Astrophysics Data System (ADS)

    Nagasaka, Yasunori; Matsumoto, Yoshinori; Fukaya, Yasutoshi; Takahashi, Tomoichi; Takeshita, Toru

    2001-10-01

    We are developing a nursing support system using robots and cameras. The cameras are mounted on a remotely controlled monorail robot that moves inside a room and watches the elderly. Elderly people at home or in nursing homes require attention at all times, which places a constant demand on care staff; the purpose of our system is to assist them. A host computer controls the monorail robot to move in front of the elderly person using images taken by cameras on the ceiling. A CCD camera mounted on the monorail robot takes pictures of the person's facial expression and movements, and the robot sends the images to the host computer, which checks whether something unusual has happened. We propose a simple calibration method for positioning the monorail robot to track the movements of the elderly, keeping their faces at the center of the camera view. We built a small experimental system and evaluated our camera calibration method and image processing algorithm.

  14. STS-61 Space Shuttle mission report

    NASA Technical Reports Server (NTRS)

    Fricke, Robert W., Jr.

    1994-01-01

    The STS-61 Space Shuttle Program Mission Report summarizes the Hubble Space Telescope (HST) servicing mission as well as the Orbiter, External Tank (ET), Solid Rocket Booster (SRB), Redesigned Solid Rocket Motor (RSRM), and the Space Shuttle main engine (SSME) systems performance during the fifty-ninth flight of the Space Shuttle Program and fifth flight of the Orbiter vehicle Endeavour (OV-105). In addition to the Orbiter, the flight vehicle consisted of an ET designated as ET-60; three SSME's which were designated as serial numbers 2019, 2033, and 2017 in positions 1, 2, and 3, respectively; and two SRB's which were designated BI-063. The RSRM's that were installed in each SRB were designated as 360L023A (lightweight) for the left SRB, and 360L023B (lightweight) for the right SRB. This STS-61 Space Shuttle Program Mission Report fulfills the Space Shuttle Program requirement as documented in NSTS 07700, Volume 8, Appendix E. That document requires that each major organizational element supporting the Program report the results of its hardware evaluation and mission performance plus identify all related in-flight anomalies. The primary objective of the STS-61 mission was to perform the first on-orbit servicing of the Hubble Space Telescope. The servicing tasks included the installation of new solar arrays, replacement of the Wide Field/Planetary Camera I (WF/PC I) with WF/PC II, replacement of the High Speed Photometer (HSP) with the Corrective Optics Space Telescope Axial Replacement (COSTAR), replacement of rate sensing units (RSU's) and electronic control units (ECU's), installation of new magnetic sensing systems and fuse plugs, and the repair of the Goddard High Resolution Spectrometer (GHRS). Secondary objectives were to perform the requirements of the IMAX Cargo Bay Camera (ICBC), the IMAX Camera, and the Air Force Maui Optical Site (AMOS) Calibration Test.

  15. Marias Pass, Contact Zone of Two Martian Rock Units

    NASA Image and Video Library

    2015-12-17

    This view from the Mast Camera (Mastcam) in NASA's Curiosity Mars rover shows the "Marias Pass" area where a lower and older geological unit of mudstone -- the pale zone in the center of the image -- lies in contact with an overlying geological unit of sandstone. Just before Curiosity reached Marias Pass, the rover's laser-firing Chemistry and Camera (ChemCam) instrument examined a rock found to be rich in silica, a mineral-forming chemical. This scene combines several images taken on May 22, 2015, during the 992nd Martian day, or sol, of Curiosity's work on Mars. The scene is presented with a color adjustment that approximates white balancing, to resemble how the rocks and sand would appear under daytime lighting conditions on Earth. http://photojournal.jpl.nasa.gov/catalog/?IDNumber=pia20174

  16. KSC-02pd1131

    NASA Image and Video Library

    2002-07-10

    KENNEDY SPACE CENTER, FLA. -- Scott Minnick, with United Space Alliance, places a fiber-optic camera inside the flow line on Endeavour. Minnick wears a special viewing apparatus that sees where the camera is going. The inspection is the result of small cracks being discovered on the LH2 Main Propulsion System (MPS) flow liners in other orbiters. Endeavour is next scheduled to fly on mission STS-113.

  17. KSC-02pd1128

    NASA Image and Video Library

    2002-07-10

    KENNEDY SPACE CENTER, FLA. -- Scott Minnick, with United Space Alliance, places a fiber-optic camera inside the flow line on Endeavour. Minnick wears a special viewing apparatus that shows where the camera is going. The inspection is the result of small cracks discovered on the LH2 Main Propulsion System (MPS) flow liners in other orbiters. Endeavour is next scheduled to fly on mission STS-113.

  18. Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera.

    PubMed

    Wilkes, Thomas C; McGonigle, Andrew J S; Pering, Tom D; Taggart, Angus J; White, Benjamin S; Bryant, Robert G; Willmott, Jon R

    2016-10-06

    Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements.

  19. Web Camera Use of Mothers and Fathers When Viewing Their Hospitalized Neonate.

    PubMed

    Rhoads, Sarah J; Green, Angela; Gauss, C Heath; Mitchell, Anita; Pate, Barbara

    2015-12-01

    Mothers and fathers of neonates hospitalized in a neonatal intensive care unit (NICU) differ in their experiences related to NICU visitation. The objective of this study was to describe the frequency and length of maternal and paternal viewing of their hospitalized neonates via a Web camera. A total of 219 mothers and 101 fathers, including 40 mother-father dyads, used the Web camera that allows 24/7 NICU viewing between September 1, 2010, and December 31, 2012. We conducted a review of the Web camera's Web site log-on records in this nonexperimental, descriptive study. Mothers and fathers differed significantly in the mean number of log-ons to the Web camera system (P = .0293). Fathers virtually visited the NICU less often than mothers, but there was no statistically significant difference between mothers and fathers in the mean total number of minutes viewing the neonate (P = .0834) or in the maximum number of minutes of viewing in one session (P = .6924). Patterns of visitation over time were not measured. Web camera technology could be a potential intervention to aid fathers in visiting their neonates. Both parents should be offered virtual visits using the Web camera and oriented to its use. These findings are important to consider when installing Web cameras in a NICU. Future research should continue to explore Web camera use in NICUs.

  20. CMOS Camera Array With Onboard Memory

    NASA Technical Reports Server (NTRS)

    Gat, Nahum

    2009-01-01

    A compact CMOS (complementary metal oxide semiconductor) camera system has been developed with high resolution (1.3 megapixels), a USB (universal serial bus) 2.0 interface, and onboard memory. Exposure times and other operating parameters are sent from a control PC via the USB port, and image data are returned over the same link, allowing simple control and data capture from a laptop computer.

  1. X-ray topography as a process control tool in semiconductor and microcircuit manufacture

    NASA Technical Reports Server (NTRS)

    Parker, D. L.; Porter, W. A.

    1977-01-01

    A bent wafer camera, designed to identify crystal lattice defects in semiconductor materials, was investigated. The camera makes use of conventional X-ray topographs and an innovative slightly bent wafer which allows rays from the point source to strike all portions of the wafer simultaneously. In addition to being utilized in solving production process control problems, this camera design substantially reduces the cost per topograph.

  2. Design and realization of an AEC&AGC system for the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun

    2015-08-01

    An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. Conventional AEC and AGC algorithms are not suitable for an aerial camera, since the camera must take high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic gamma correction is applied before the image is output, so that the image is better suited to viewing and analysis by human observers. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment, high adaptability, and high reliability in severe, complex environments.
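
    The exposure/gain trade-off described above can be viewed as a simple proportional controller: shutter time tracks a brightness target but is capped by a motion-blur limit derived from the aircraft's speed, and gain absorbs whatever correction the shutter cannot supply. The sketch below is illustrative only; the names, gains, and limits are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of one AEC/AGC step.
# Shutter time tracks a target mean brightness but is capped by a
# motion-blur limit from the aircraft's ground speed; gain absorbs
# whatever correction the shutter cannot supply. All names, gains,
# and limits are assumptions.

def auto_exposure(mean_brightness, shutter_s, gain,
                  ground_speed_mps, gsd_m,
                  target=128.0, max_blur_px=0.5,
                  min_shutter=1e-5, max_shutter=1e-2, max_gain=8.0):
    """Return updated (shutter_s, gain) for one control step."""
    # Longest shutter that keeps image motion under max_blur_px
    # pixels, given the ground sample distance gsd_m (metres/pixel).
    blur_limit = max_blur_px * gsd_m / max(ground_speed_mps, 1e-6)

    # Total exposure correction needed to reach the target brightness.
    correction = target / max(mean_brightness, 1.0)

    # Apply as much of the correction as the shutter allows...
    new_shutter = min(max(shutter_s * correction, min_shutter),
                      min(max_shutter, blur_limit))
    # ...and let the gain absorb the remainder.
    remainder = correction * shutter_s / new_shutter
    new_gain = min(max(gain * remainder, 1.0), max_gain)
    return new_shutter, new_gain
```

    A real implementation would also smooth the brightness statistic over several frames and apply the gamma correction mentioned in the abstract after readout.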

  3. IET. Weather instrumentation tower, located south of control building. Camera ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    IET. Weather instrumentation tower, located south of control building. Camera facing west. Date: August 17, 1955. INEEL negative no. 55-2414 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  4. Separating the Laparoscopic Camera Cord From the Monopolar "Bovie" Cord Reduces Unintended Thermal Injury From Antenna Coupling: A Randomized Controlled Trial.

    PubMed

    Robinson, Thomas N; Jones, Edward L; Dunn, Christina L; Dunne, Bruce; Johnson, Elizabeth; Townsend, Nicole T; Paniccia, Alessandro; Stiegmann, Greg V

    2015-06-01

    The monopolar "Bovie" is used in virtually every laparoscopic operation. The active electrode and its cord emit radiofrequency energy that couples (or transfers) to nearby conductive material without direct contact. This phenomenon is increased when the active electrode cord is oriented parallel to another wire/cord. The parallel orientation of the "Bovie" and laparoscopic camera cords causes transfer of energy to the camera cord, resulting in cutaneous burns at the camera trocar incision. We hypothesized that separating the active electrode and camera cords would reduce thermal injury at the camera trocar incision in comparison to parallel-oriented cords. In this prospective, blinded, randomized controlled trial, patients undergoing standardized laparoscopic cholecystectomy were randomized to separated or parallel-oriented active electrode/camera cords. The primary outcome variable was thermal injury determined by histology of skin biopsied at the camera trocar incision. Eighty-four patients participated. Baseline demographics were similar in the groups for age, sex, preoperative diagnosis, operative time, and blood loss. Thermal injury at the camera trocar incision was lower in the separated versus parallel group (31% vs 57%; P = 0.027). Separation of the laparoscopic camera cord from the active electrode cord decreases thermal injury from antenna coupling at the camera trocar incision in comparison to the parallel orientation of these cords. Therefore, parallel orientation of these cords (an arrangement promoted by integrated operating rooms) should be abandoned. The findings of this study should influence the operating room setup for all laparoscopic cases.

  5. NAOMI instrument: a product line of compact and versatile cameras designed for high resolution missions in Earth observation

    NASA Astrophysics Data System (ADS)

    Luquet, Ph.; Chikouche, A.; Benbouzid, A. B.; Arnoux, J. J.; Chinal, E.; Massol, C.; Rouchit, P.; De Zotti, S.

    2017-11-01

    EADS Astrium is currently developing a new product line of compact and versatile instruments for high-resolution Earth-observation missions. The first version was developed in the framework of the ALSAT-2 contract awarded to EADS Astrium by the Algerian Space Agency (ASAL). The silicon carbide Korsch-type telescope, coupled with a multi-line detector array, offers a 2.5 m GSD in the PAN band at nadir from a 680 km altitude (10 m GSD in the four multispectral bands) with a 17.5 km swath width. This compact camera - 340 (W) x 460 (L) x 510 (H) mm3, 13 kg - is carried on a Myriade-type small platform. The electronics unit accommodates the video, housekeeping, and thermal control functions, as well as a 64 Gbit mass memory. Two satellites are being developed; the first is planned for launch in mid-2009. Several other versions of the instrument have already been defined, with enhanced resolution and/or a larger field of view.

  6. Fishery research in the Great Lakes using a low-cost remotely operated vehicle

    USGS Publications Warehouse

    Kennedy, Gregory W.; Brown, Charles L.; Argyle, Ray L.

    1988-01-01

    We used a MiniROVER MK II remotely operated vehicle (ROV) to collect ground-truth information on fish and their habitat in the Great Lakes, information that has traditionally been collected by divers, static cameras, or submersibles. The ROV, powered by 4 thrusters and controlled by a pilot at the surface, was portable and efficient to operate throughout the Great Lakes in 1987, and collected a total of 30 h of video data recorded for later analysis. We collected 50% more substrate information per unit of effort with the ROV than with static cameras. Fish behavior ranged from no avoidance reaction in ambient light to erratic responses in the vehicle lights. The ROV's field of view depended on the time of day, light levels, and density of zooplankton. Quantification of the data collected with the ROV (either physical samples or video image data) will serve to enhance the use of the ROV as a tool for conducting fishery research on the Great Lakes.

  7. Photothermal camera port accessory for microscopic thermal diffusivity imaging

    NASA Astrophysics Data System (ADS)

    Escola, Facundo Zaldívar; Kunik, Darío; Mingolo, Nelly; Martínez, Oscar Eduardo

    2016-06-01

    The design of a scanning photothermal accessory is presented, which can be attached to the camera port of commercial microscopes to measure thermal diffusivity maps with micrometer resolution. The device is based on the thermal expansion recovery technique, which measures the defocusing of a probe beam due to the curvature induced by the local heat delivered by a focused pump beam. The beam delivery and collecting optics are built using optical fiber technology, resulting in a robust optical system that provides collinear pump and probe beams without any alignment adjustment necessary. The quasiconfocal configuration for the signal collection using the same optical fiber sets very restrictive conditions on the positioning and alignment of the optical components of the scanning unit, and a detailed discussion of the design equations is presented. The alignment procedure is carefully described, resulting in a system so robust and stable that no further alignment is necessary for the day-to-day use, becoming a tool that can be used for routine quality control, operated by a trained technician.

  8. Image acquisition system for traffic monitoring applications

    NASA Astrophysics Data System (ADS)

    Auty, Glen; Corke, Peter I.; Dunn, Paul; Jensen, Murray; Macintyre, Ian B.; Mills, Dennis C.; Nguyen, Hao; Simons, Ben

    1995-03-01

    An imaging system for monitoring traffic on multilane highways is discussed. The system, named Safe-T-Cam, is capable of operating 24 hours per day in all but extreme weather conditions and can capture still images of vehicles traveling at up to 160 km/h. Systems operating at different remote locations are networked to allow transmission of images and data to a control center. A remote site facility comprises a vehicle detection and classification module (VCDM), an image acquisition module (IAM), and a license plate recognition module (LPRM). The remote site is connected to the central site by an ISDN communications network; this paper discusses the remote site system. The VCDM consists of a video camera, a specialized exposure control unit to maintain consistent image characteristics, and a 'real-time' image processing system that processes 50 images per second. The VCDM can detect and classify vehicles (e.g., cars versus trucks), and the vehicle class is used to determine what data should be recorded. The VCDM uses a vehicle tracking technique to allow optimum triggering of the high-resolution camera of the IAM. The IAM camera combines the features necessary to operate consistently in the harsh environment encountered when imaging a vehicle head-on in both day and night conditions. The image clarity obtained is ideally suited to automatic location and recognition of the vehicle license plate. This paper discusses the camera geometry, sensor characteristics, and the image processing methods that permit consistent vehicle segmentation from a cluttered background, allowing object-oriented pattern recognition to be used for vehicle classification. The capture of high-resolution images and the image characteristics required for the LPRM's automatic reading of vehicle license plates are also discussed. The results of field tests presented demonstrate that the vision-based Safe-T-Cam system, currently installed on open highways, is capable of automatic classification of vehicle class and recording of vehicle number plates with a success rate of around 90 percent over a 24-hour period.

  9. 4. INTERIOR VIEW OF CLUB HOUSE REFRIGERATION UNIT, SHOWING COOLING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. INTERIOR VIEW OF CLUB HOUSE REFRIGERATION UNIT, SHOWING COOLING COILS AND CORK-LINED ROOM. CAMERA IS BETWEEN SEVEN AND EIGHT FEET ABOVE FLOOR LEVEL, FACING SOUTHEAST. - Swan Falls Village, Clubhouse 011, Snake River, Kuna, Ada County, ID

  10. A Search-and-Rescue Robot System for Remotely Sensing the Underground Coal Mine Environment

    PubMed Central

    Gao, Junyao; Zhao, Fangzhou; Liu, Yi

    2017-01-01

    This paper introduces a search-and-rescue robot system used for remote sensing of the underground coal mine environment, which is composed of an operating control unit and two mobile robots with explosion-proof and waterproof functions. This robot system is designed to observe and collect information on the coal mine environment through remote control. Thus, the system can be regarded as a multifunction sensor that realizes remote sensing. When the robot system detects danger, it sends out signals to warn rescuers to keep away. Each robot carries two gas sensors, two cameras, a two-way audio link, a 1 km-long fiber-optic communication cable, and a mechanical explosion-proof manipulator. Notably, the manipulator is a novel explosion-proof design for clearing obstacles: it has three degrees of freedom but is driven by only two motors. Furthermore, the two robots can communicate in series over 2 km with the operating control unit. The development of this robot system may serve as a reference for future search-and-rescue systems. PMID:29065560

  11. ARNICA, the NICMOS 3 imaging camera of TIRGO.

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Hunt, L.; Stanga, R.

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 μm that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1″ per pixel, with sky coverage of more than 4 min×4 min on the NICMOS 3 (256×256 pixels, 40 μm side) detector array. The camera is remotely controlled by a PC 486, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the PC 486, acquires and stores the frames, and controls the timing of the array. The camera is intended for imaging of large extra-galactic and Galactic fields; a large effort has been dedicated to explore the possibility of achieving precise photometric measurements in the J, H, K astronomical bands, with very promising results.

  12. Multiplane and Spectrally-Resolved Single Molecule Localization Microscopy with Industrial Grade CMOS cameras.

    PubMed

    Babcock, Hazen P

    2018-01-29

    This work explores the use of industrial grade CMOS cameras for single molecule localization microscopy (SMLM). We show that industrial grade CMOS cameras approach the performance of scientific grade CMOS cameras at a fraction of the cost. This makes it more economically feasible to construct high-performance imaging systems with multiple cameras that are capable of a diversity of applications. In particular we demonstrate the use of industrial CMOS cameras for biplane, multiplane and spectrally resolved SMLM. We also provide open-source software for simultaneous control of multiple CMOS cameras and for the reduction of the movies that are acquired to super-resolution images.

  13. An autonomous sensor module based on a legacy CCTV camera

    NASA Astrophysics Data System (ADS)

    Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.

    2016-10-01

    A UK MoD funded programme into autonomous sensor arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. This paper reports on the development of a SAPIENT-compliant sensor module using a legacy closed-circuit television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with the zoom level automatically optimized for human detection at the appropriate range. Open-source algorithms (using OpenCV) are used to automatically detect pedestrians; their real-world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation, a "follow" mode is implemented in which the camera maintains the detected person within its field-of-view without requiring an end-user to directly control the camera with a joystick.
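
    The abstract notes that detected pedestrians' real-world positions are estimated from the imagery. One common flat-ground approach back-projects the foot point of a detection box onto the ground plane; the sketch below assumes pinhole optics, a linear pixel-to-angle mapping, and the stated frame conventions, since the SAPIENT module's actual geometry is not published in the abstract.

```python
import math

# Sketch (assumptions, not the deployed SAPIENT code): back-project
# the foot point of a pedestrian detection onto a flat ground plane,
# for a camera at known height looking down at a known tilt.

def ground_position(u, v, img_w, img_h, hfov_deg, vfov_deg,
                    cam_height_m, tilt_deg, pan_deg=0.0):
    """Project pixel (u, v) onto the ground plane; returns (x, y) in
    metres in camera-centred ground coordinates, or None if the pixel
    ray never meets the ground."""
    # Ray angles relative to the optical axis (linear pixel-to-angle map).
    az = (u / img_w - 0.5) * math.radians(hfov_deg)
    el = (0.5 - v / img_h) * math.radians(vfov_deg)
    # Depression of the ray below horizontal, given the camera tilt.
    depression = math.radians(tilt_deg) - el
    if depression <= 0:
        return None  # ray points at or above the horizon
    ground_range = cam_height_m / math.tan(depression)
    bearing = math.radians(pan_deg) + az
    return (ground_range * math.sin(bearing),
            ground_range * math.cos(bearing))
```

    For example, a detection whose foot point sits at the image centre of a camera 5 m up, tilted 45° down, resolves to a point 5 m out along the camera's bearing.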

  14. Microprocessor-controlled, wide-range streak camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amy E. Lewis, Craig Hollabaugh

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OS X) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-ins. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple-module access with a standard browser. The entire user interface can be customized.

  15. Visual control of robots using range images.

    PubMed

    Pomares, Jorge; Gil, Pablo; Torres, Fernando

    2010-01-01

    In recent years, 3D vision systems based on the time-of-flight (ToF) principle have gained importance as a means of obtaining 3D information about the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method obtains the integration time the range camera should use in order to precisely determine the depth information.

  16. Development of Automated Tracking System with Active Cameras for Figure Skating

    NASA Astrophysics Data System (ADS)

    Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi

    This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. In the video images of figure skating, irregular trajectories, various postures, rapid movements, and various costume colors are included. Therefore, it is difficult to determine some features useful for image tracking. On the other hand, an ice rink has a limited area and uniform high intensity, and skating is always performed on ice. In the proposed system, an ice rink region is first extracted from a video image by the region growing method, and then, a skater region is extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed for 10 training scenes show that the skater extraction rate is approximately 98%. Thus, it was concluded that tracking with camera control was successful for almost all the cases considered in the study.
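
    The pan/tilt/zoom behaviour described, centring the skater region and holding it at an appropriate scale, is in essence proportional control on the tracked region's centroid and size. A minimal sketch follows; the gains and sign conventions are illustrative assumptions, not values from the paper.

```python
# Illustrative proportional PTZ step (assumed gains, not the paper's):
# pan/tilt drive the tracked region toward the image centre, zoom
# holds its height near a target fraction of the frame.

def ptz_step(bbox, img_w, img_h, target_frac=0.3,
             kp_pan=0.05, kp_tilt=0.05, kp_zoom=0.5):
    """bbox = (x, y, w, h) of the skater region in pixels; returns
    (d_pan, d_tilt, d_zoom) commands in arbitrary units."""
    x, y, w, h = bbox
    cx, cy = x + w / 2, y + h / 2
    d_pan = kp_pan * (cx - img_w / 2)             # >0: slew view right
    d_tilt = kp_tilt * (cy - img_h / 2)           # >0: slew view down
    d_zoom = kp_zoom * (target_frac - h / img_h)  # >0: zoom in
    return d_pan, d_tilt, d_zoom
```

    A centred region of the target size yields zero commands, so the controller is quiescent once the skater is framed correctly.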

  17. Camera trap placement and the potential for bias due to trails and other features

    PubMed Central

    Forrester, Tavis D.

    2017-01-01

    Camera trapping has become an increasingly widespread tool for wildlife ecologists, with large numbers of studies relying on photo capture rates or presence/absence information. It is increasingly clear that camera placement can directly impact this kind of data, yet these biases are poorly understood. We used a paired camera design to investigate the effect of small-scale habitat features on species richness estimates, and capture rate and detection probability of several mammal species in the Shenandoah Valley of Virginia, USA. Cameras were deployed at either log features or on game trails with a paired camera at a nearby random location. Overall capture rates were significantly higher at trail and log cameras compared to their paired random cameras, and some species showed capture rates as much as 9.7 times greater at feature-based cameras. We recorded more species at both log (17) and trail features (15) than at their paired control cameras (13 and 12 species, respectively), yet richness estimates were indistinguishable after 659 and 385 camera nights of survey effort, respectively. We detected significant increases (ranging from 11–33%) in detection probability for five species resulting from the presence of game trails. For six species detection probability was also influenced by the presence of a log feature. This bias was most pronounced for the three rodents investigated, where in all cases detection probability was substantially higher (24.9–38.2%) at log cameras. Our results indicate that small-scale factors, including the presence of game trails and other features, can have significant impacts on species detection when camera traps are employed. Significant biases may result if the presence and quality of these features are not documented and either incorporated into analytical procedures, or controlled for in study design. PMID:29045478

  18. Camera trap placement and the potential for bias due to trails and other features.

    PubMed

    Kolowski, Joseph M; Forrester, Tavis D

    2017-01-01

    Camera trapping has become an increasingly widespread tool for wildlife ecologists, with large numbers of studies relying on photo capture rates or presence/absence information. It is increasingly clear that camera placement can directly impact this kind of data, yet these biases are poorly understood. We used a paired camera design to investigate the effect of small-scale habitat features on species richness estimates, and capture rate and detection probability of several mammal species in the Shenandoah Valley of Virginia, USA. Cameras were deployed at either log features or on game trails with a paired camera at a nearby random location. Overall capture rates were significantly higher at trail and log cameras compared to their paired random cameras, and some species showed capture rates as much as 9.7 times greater at feature-based cameras. We recorded more species at both log (17) and trail features (15) than at their paired control cameras (13 and 12 species, respectively), yet richness estimates were indistinguishable after 659 and 385 camera nights of survey effort, respectively. We detected significant increases (ranging from 11-33%) in detection probability for five species resulting from the presence of game trails. For six species detection probability was also influenced by the presence of a log feature. This bias was most pronounced for the three rodents investigated, where in all cases detection probability was substantially higher (24.9-38.2%) at log cameras. Our results indicate that small-scale factors, including the presence of game trails and other features, can have significant impacts on species detection when camera traps are employed. Significant biases may result if the presence and quality of these features are not documented and either incorporated into analytical procedures, or controlled for in study design.

  19. Adaptive-Repetitive Visual-Servo Control of Low-Flying Aerial Robots via Uncalibrated High-Flying Cameras

    NASA Astrophysics Data System (ADS)

    Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.

    2017-08-01

    This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower-altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulates above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the average RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.

  20. HDEV Flight Assembly

    NASA Image and Video Library

    2014-05-07

    View of the High Definition Earth Viewing (HDEV) flight assembly installed on the exterior of the Columbus European Laboratory module. The image was released by an astronaut on Twitter. The High Definition Earth Viewing (HDEV) experiment places four commercially available HD cameras on the exterior of the space station and uses them to stream live video of Earth for viewing online. The cameras are enclosed in a temperature-specific housing and are exposed to the harsh radiation of space. Analysis of the effect of space on the video quality over the time HDEV is operational may help engineers decide which cameras are the best types to use on future missions. High school students helped design some of the cameras' components through the High Schools United with NASA to Create Hardware (HUNCH) program, and student teams operate the experiment.

  1. Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera

    PubMed Central

    Wilkes, Thomas C.; McGonigle, Andrew J. S.; Pering, Tom D.; Taggart, Angus J.; White, Benjamin S.; Bryant, Robert G.; Willmott, Jon R.

    2016-01-01

    Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements. PMID:27782054

  2. LPT. Low power test control building (TAN641) interior. Camera facing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LPT. Low power test control building (TAN-641) interior. Camera facing northeast at what remains of control room console. Cut in wall at right of view shows west wall of northern test cell. INEEL negative no. HD-40-4-4 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  3. Electronic camera-management system for 35-mm and 70-mm film cameras

    NASA Astrophysics Data System (ADS)

    Nielsen, Allan

    1993-01-01

    Military and commercial test facilities have been tasked with increasingly sophisticated data collection and data reduction. A state-of-the-art electronic control system for high-speed 35 mm and 70 mm film cameras designed to meet these tasks is described. Data collection in today's test range environment is difficult at best. The need for a completely integrated image and data collection system is mandated by the increasingly complex test environment. Instrumentation film cameras have been used on test ranges to capture images for decades. Their high frame rates coupled with exceptionally high resolution make them an essential part of any test system. In addition to documenting test events, today's camera system is required to perform many additional tasks. Data reduction to establish TSPI (time-space-position information) may be performed after a mission and is subject to all of the variables present in documenting the mission. A typical scenario would consist of multiple cameras located on tracking mounts capturing the event along with azimuth and elevation position data. Corrected data can then be reduced using each camera's time and position deltas and calculating the TSPI of the object using triangulation. An electronic camera control system designed to meet these requirements has been developed by Photo-Sonics, Inc. The feedback received from test technicians at range facilities throughout the world led Photo-Sonics to design the features of this control system. These prominent new features include: a comprehensive safety management system, full local or remote operation, frame rate accuracy of less than 0.005 percent, and phase-locking capability to IRIG-B. In fact, IRIG-B phase-lock operation of multiple cameras can reduce the time-distance delta of a test object traveling at Mach 1 to less than one inch during data reduction.
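
    The triangulation step described, recovering an object's position from the azimuth and elevation reported by two tracking mounts, reduces to finding the closest point of approach between two rays. The sketch below assumes an east-north-up frame with azimuth measured from north; it illustrates the geometry only and is not Photo-Sonics' reduction software.

```python
import numpy as np

# Sketch of az/el triangulation: each tracking mount defines a ray
# from its surveyed position; the object position is taken as the
# midpoint of the shortest segment between the two rays. The
# east-north-up frame (azimuth from north, elevation above
# horizontal) is an assumption for illustration.

def ray(az_deg, el_deg):
    """Unit direction vector for an azimuth/elevation pointing."""
    az, el = np.radians([az_deg, el_deg])
    return np.array([np.cos(el) * np.sin(az),
                     np.cos(el) * np.cos(az),
                     np.sin(el)])

def triangulate(p1, az1, el1, p2, az2, el2):
    """Closest-approach midpoint of rays from mount positions p1, p2."""
    d1, d2 = ray(az1, el1), ray(az2, el2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b            # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return (p1 + t1 * d1 + p2 + t2 * d2) / 2
```

    With perfect pointings the two rays intersect and the midpoint is exact; with real measurement error the midpoint splits the residual miss distance between the two mounts.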

  4. Testbed for remote telepresence research

    NASA Astrophysics Data System (ADS)

    Adnan, Sarmad; Cheatham, John B., Jr.

    1992-11-01

    Teleoperated robots offer solutions to problems associated with operations in remote and unknown environments, such as space. Teleoperated robots can perform tasks related to inspection, maintenance, and retrieval. A video camera can be used to provide some assistance in teleoperations, but for fine manipulation and control, a telepresence system that gives the operator a sense of actually being at the remote location is more desirable. A telepresence system comprised of a head-tracking stereo camera system, a kinematically redundant arm, and an omnidirectional mobile robot has been developed at the mechanical engineering department at Rice University. This paper describes the design and implementation of this system, its control hardware, and software. The mobile omnidirectional robot has three independent degrees of freedom that permit independent control of translation and rotation, thereby simulating a free flying robot in a plane. The kinematically redundant robot arm has eight degrees of freedom that assist in obstacle and singularity avoidance. The on-board control computers permit control of the robot from the dual hand controllers via a radio modem system. A head-mounted display system provides the user with a stereo view from a pair of cameras attached to the mobile robotics system. The head tracking camera system moves stereo cameras mounted on a three degree of freedom platform to coordinate with the operator's head movements. This telepresence system provides a framework for research in remote telepresence, and teleoperations for space.

  5. Integrated calibration between digital camera and laser scanner from mobile mapping system for land vehicles

    NASA Astrophysics Data System (ADS)

    Zhao, Guihua; Chen, Hong; Li, Xingquan; Zou, Xiaoliang

    The paper presents the concepts of lever arm and boresight angle, the design requirements of calibration sites, and an integrated method for calibrating the boresight angles of a digital camera and a laser scanner. Taking test data collected by Applanix's LandMark system as an example, the camera calibration method combines piling three consecutive stereo images with an on-the-fly (OTF) calibration using ground control points. The laser boresight calibration uses a combined manual and automatic method with ground control points. Integrated calibration between the digital camera and the laser scanner is introduced to improve the systemic precision of the two sensors. Analysis of the measurements between ground control points and their corresponding image points in the sequence images shows that object positions derived from the camera agree to within about 15 cm in relative error and 20 cm in absolute error. Comparing ground control points with their corresponding laser point clouds, the errors are less than 20 cm. These experimental results show that the mobile mapping system is an efficient and reliable system for generating high-accuracy, high-density road spatial data rapidly.
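    As a rough illustration of how the lever arm and boresight angles enter the georeferencing chain, the sketch below maps a point measured in the sensor frame into the mapping frame. The function names, the roll/pitch/yaw rotation convention, and the plain-list vector types are our assumptions for illustration, not the LandMark processing chain.

```python
import math

def rot_zyx(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw (radians), R = Rz(yaw) Ry(pitch) Rx(roll)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def matvec(R, v):
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def sensor_to_mapping(p_sensor, boresight_rpy, lever_arm, R_imu, p_imu):
    """Transform a point measured in the sensor frame into the mapping frame.

    boresight_rpy : boresight misalignment angles, sensor frame -> IMU body frame
    lever_arm     : sensor origin offset in the IMU body frame (meters)
    R_imu, p_imu  : GNSS/INS-derived body attitude and position in the mapping frame
    """
    Rb = rot_zyx(*boresight_rpy)
    p_body = [a + b for a, b in zip(matvec(Rb, p_sensor), lever_arm)]
    return [a + b for a, b in zip(matvec(R_imu, p_body), p_imu)]
```

    Calibration then amounts to adjusting `boresight_rpy` (and checking the lever arm) so that points projected through this chain agree with surveyed ground control points.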

  6. Digital Camera Control for Faster Inspection

    NASA Technical Reports Server (NTRS)

    Brown, Katharine; Siekierski, James D.; Mangieri, Mark L.; Dekome, Kent; Cobarruvias, John; Piplani, Perry J.; Busa, Joel

    2009-01-01

    Digital Camera Control Software (DCCS) is a computer program for controlling a boom and a boom-mounted camera used to inspect the external surface of a space shuttle in orbit around the Earth. Running in a laptop computer in the space-shuttle crew cabin, DCCS commands integrated displays and controls. By means of a simple one-button command, a crewmember can view low-resolution images to quickly spot problem areas and can then cause a rapid transition to high-resolution images. The crewmember can command that camera settings apply to a specific small area of interest within the field of view of the camera so as to maximize image quality within that area. DCCS also provides critical high-resolution images to a ground screening team, which analyzes the images to assess damage (if any); in so doing, DCCS enables the team to clear initially suspect areas more quickly than would otherwise be possible and further saves time by minimizing the probability of re-imaging of areas already inspected. On the basis of experience with a previous version (2.0) of the software, the present version (3.0) incorporates a number of advanced imaging features that optimize crewmember capability and efficiency.

  7. Astronaut Alan Bean flies the Astronaut Maneuvering Equipment in the OWS

    NASA Image and Video Library

    1973-08-28

    S73-34207 (28 Aug. 1973) --- Astronaut Alan L. Bean, Skylab 3 commander, flies the M509 astronaut Maneuvering Equipment, as seen in this photographic reproduction taken from a television transmission made by a color television camera in the Orbital Workshop (OWS) of the Skylab space station in Earth orbit. Bean is strapped into the back-mounted, hand-controlled Automatically Stabilized Maneuvering Unit (ASMU). The M509 exercise was in the forward dome area of the OWS. The dome area is about 22 feet in diameter and 19 feet from top to bottom. Photo credit: NASA

  8. Three-dimensional illumination procedure for photodynamic therapy of dermatology

    NASA Astrophysics Data System (ADS)

    Hu, Xiao-ming; Zhang, Feng-juan; Dong, Fei; Zhou, Ya

    2014-09-01

    Light dosimetry is an important parameter that affects the efficacy of photodynamic therapy (PDT). However, the irregular morphologies of lesions complicate lesion segmentation and light irradiance adjustment. Therefore, this study developed an illumination demo system comprising a camera, a digital projector, and a computing unit to solve these problems. A three-dimensional model of a lesion was reconstructed using the developed system. Hierarchical segmentation was achieved with the superpixel algorithm. The expected light dosimetry on the targeted lesion was achieved with the proposed illumination procedure. Accurate control and optimization of light delivery can improve the efficacy of PDT.

  9. Optical design of the SuMIRe/PFS spectrograph

    NASA Astrophysics Data System (ADS)

    Pascal, Sandrine; Vives, Sébastien; Barkhouser, Robert; Gunn, James E.

    2014-07-01

    The SuMIRe Prime Focus Spectrograph (PFS), developed for the 8-m class SUBARU telescope, will consist of four identical spectrographs, each receiving 600 fibers from a 2394-fiber robotic positioner at the telescope prime focus. Each spectrograph includes three spectral channels to cover the wavelength range [0.38-1.26] um with a resolving power ranging between 2000 and 4000. A medium-resolution mode is also implemented to reach a resolving power of 5000 at 0.8 um. Each spectrograph is made of four optical units: the entrance unit, which produces three corrected collimated beams, and three camera units (one per spectral channel: "blue", "red", and "NIR"). The beam is split using two large dichroics, and in each arm the light is dispersed by large VPH gratings (about 280 x 280 mm). The proposed optical design was optimized to achieve the requested image quality while simplifying the manufacture of the whole optical system. The camera design consists of an innovative Schmidt camera observing a large field of view (10 degrees) with a very fast beam (F/1.09). To achieve such performance, the classical spherical mirror is replaced by a catadioptric mirror (i.e., a meniscus lens with a reflective surface on the rear side of the glass, like a Mangin mirror). This article focuses on the optical architecture of the PFS spectrograph and the performance achieved. We first describe the global optical design of the spectrograph, then focus on the Mangin-Schmidt camera design. The analysis of the optical performance and the results obtained are presented in the last section.

  10. Overview of a Hybrid Underwater Camera System

    DTIC Science & Technology

    2014-07-01

    meters), in increments of 200 ps. The camera is also equipped with a 6:1 motorized zoom lens and a precision miniature attitude and heading reference system (AHRS). [Figure residue from the source PDF (Proc. of SPIE Vol. 9111): a diagram of the LUCIE sub-systems showing the control and power distribution system, AHRS, pulsed laser, gated camera, and sonar transducer.]

  11. Collaborative real-time scheduling of multiple PTZ cameras for multiple object tracking in video surveillance

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Che; Huang, Chung-Lin

    2013-03-01

    This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human subjects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) imagery of the subject's face for biometric purposes, (2) optimal video quality of the subjects, and (3) minimum hand-off time. We define an objective function based on expected capture conditions such as camera-subject distance, pan-tilt angles of capture, face visibility, and others. This objective function serves to effectively balance the number of captures per subject and the quality of captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
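    A toy version of such an objective function might weight the capture conditions listed above and discount subjects already captured so that coverage stays balanced. The weights, thresholds, and greedy assignment below are illustrative assumptions on our part, not the authors' actual formulation.

```python
def capture_score(dist_m, pan_deg, tilt_deg, face_visibility, n_prev_captures,
                  w=(0.4, 0.2, 0.1, 0.3), d_opt=8.0):
    """Score the expected quality of assigning one PTZ camera to one subject.

    face_visibility lies in [0, 1]; n_prev_captures penalizes subjects that
    were already captured, balancing captures per subject. The weights and
    the optimal distance d_opt are illustrative, not from the paper.
    """
    w_d, w_p, w_t, w_f = w
    s_dist = max(0.0, 1.0 - abs(dist_m - d_opt) / d_opt)   # best near d_opt
    s_pan  = max(0.0, 1.0 - abs(pan_deg) / 90.0)           # prefer frontal pan
    s_tilt = max(0.0, 1.0 - abs(tilt_deg) / 60.0)          # prefer level tilt
    score = w_d * s_dist + w_p * s_pan + w_t * s_tilt + w_f * face_visibility
    return score / (1 + n_prev_captures)

def assign(cameras, subjects):
    """Greedy scheduling: each camera takes its best-scoring unassigned subject.

    cameras maps camera name -> {subject: capture-condition kwargs}.
    """
    taken, plan = set(), {}
    for cam, candidates in cameras.items():
        best = max(((s, capture_score(**candidates[s])) for s in subjects
                    if s not in taken), key=lambda x: x[1], default=None)
        if best:
            plan[cam] = best[0]
            taken.add(best[0])
    return plan
```

    A real scheduler would also fold in the hand-off time between pan-tilt positions, which this sketch omits.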

  12. The High Energy Detector of Simbol-X

    NASA Astrophysics Data System (ADS)

    Meuris, A.; Limousin, O.; Lugiez, F.; Gevin, O.; Blondel, C.; Le Mer, I.; Pinsard, F.; Cara, C.; Goetschy, A.; Martignac, J.; Tauzin, G.; Hervé, S.; Laurent, P.; Chipaux, R.; Rio, Y.; Fontignie, J.; Horeau, B.; Authier, M.; Ferrando, P.

    2009-05-01

    The High Energy Detector (HED) is one of the three detection units on board the Simbol-X detector spacecraft. It is placed below the Low Energy Detector so as to collect focused photons in the energy range from 8 to 80 keV. It consists of a mosaic of 64 independent cameras, divided into 8 sectors. Each elementary detection unit, called Caliste, is the hybridization of a 256-pixel Cadmium Telluride (CdTe) detector with full-custom front-end electronics into a single component. The status of the HED design will be reported. The promising results obtained from the first micro-camera prototypes, called Caliste 64 and Caliste 256, will be presented to illustrate the expected performance of the instrument.

  13. LED-based endoscopic light source for spectral imaging

    NASA Astrophysics Data System (ADS)

    Browning, Craig M.; Mayes, Samuel; Favreau, Peter; Rich, Thomas C.; Leavesley, Silas J.

    2016-03-01

    Colorectal cancer has the third-highest cancer death rate in the United States.1 The current screening method is an endoscopic procedure using white light endoscopy (WLE). Multiple new methods, such as narrow band imaging and autofluorescence imaging, are being tested to replace WLE,2 but they do not meet the need for higher specificity or sensitivity. The goal of this project is to modify the presently used endoscope light source to house 16 narrow-wavelength LEDs for real-time spectral imaging while increasing sensitivity and specificity. An Olympus CLK-4 light source was modified by replacing the lamp and electronics with 16 LEDs and new circuitry, allowing control of the power and intensity of the LEDs. This required a larger enclosure to house a bracket system for the solid light guide (lightpipe), three new circuit boards, a power source, and National Instruments hardware/software for computer control. The result was a successfully designed retrofit with all the new features. LED testing demonstrated the ability to control each wavelength's intensity, and the measured intensity over the voltage range provides the information needed to couple the camera for imaging. Overall the project was successful: the modifications to the light source added the controllable LEDs. This brings the research one step closer to the main goal of spectral imaging for early detection of colorectal cancer. Future goals are to connect the camera and test the imaging process.

  14. 67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    67. DETAIL OF VIDEO CAMERA CONTROL PANEL LOCATED IMMEDIATELY WEST OF ASSISTANT LAUNCH CONDUCTOR PANEL SHOWN IN CA-133-1-A-66 - Vandenberg Air Force Base, Space Launch Complex 3, Launch Operations Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  15. Astronaut Susan J. Helms Mounts a Video Camera in Zarya

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Astronaut Susan J. Helms, Expedition Two flight engineer, mounts a video camera onto a bracket in the Russian Zarya, or Functional Cargo Block (FGB), of the International Space Station (ISS). Launched by a Russian Proton rocket from the Baikonur Cosmodrome on November 20, 1998, the United States-funded and Russian-built Zarya was the first element of the ISS, followed by the U.S. Unity node.

  16. The Status of the NASA All Sky Fireball Network

    NASA Technical Reports Server (NTRS)

    Cooke, William J.; Moser, Danielle E.

    2011-01-01

    Established by the NASA Meteoroid Environment Office, the NASA All Sky Fireball Network consists of 6 meteor video cameras in the southern United States, with plans to expand to 15 cameras by 2013. As of mid-2011, the network had detected 1796 multi-station meteors, including meteors from 43 different meteor showers. The current status of the NASA All Sky Fireball Network is described, alongside preliminary results.

  17. Intelligent viewing control for robotic and automation systems

    NASA Astrophysics Data System (ADS)

    Schenker, Paul S.; Peters, Stephen F.; Paljug, Eric D.; Kim, Won S.

    1994-10-01

    We present a new system for supervisory automated control of multiple remote cameras. Our primary purpose in developing this system has been to provide a capability for knowledge-based, `hands-off' viewing during execution of teleoperation/telerobotic tasks. The reported technology has broader applicability to remote surveillance, telescience observation, automated manufacturing workcells, etc. We refer to this new capability as `Intelligent Viewing Control (IVC),' distinguishing it from simple programmed camera motion control. In the IVC system, camera viewing assignment, sequencing, positioning, panning, and parameter adjustment (zoom, focus, aperture, etc.) are invoked and interactively executed in real time by a knowledge-based controller, drawing on a priori known task models and constraints, including operator preferences. This multi-camera control is integrated with a real-time, high-fidelity 3D graphics simulation, which is correctly calibrated in perspective to the actual cameras and their platform kinematics (translation/pan-tilt). Such a merged graphics-with-video design allows the system user to preview and modify the planned (`choreographed') viewing sequences. Further, during actual task execution, the system operator has available both the resulting optimized video sequence and supplementary graphics views from arbitrary perspectives. IVC, including operator-interactive designation of robot task actions, is presented to the user as a well-integrated video-graphic single-screen user interface allowing easy access to all relevant telerobot communication/command/control resources. We describe and show pictorial results of a preliminary IVC system implementation for telerobotic servicing of a satellite.

  18. Infrared Imaging Camera Final Report CRADA No. TC02061.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roos, E. V.; Nebeker, S.

    This was a collaborative effort between the University of California, Lawrence Livermore National Laboratory (LLNL) and Cordin Company (Cordin) to enhance the U.S. ability to develop a commercial infrared camera capable of capturing high-resolution images in a 100 nanosecond (ns) time frame. The Department of Energy (DOE), under an Initiative for Proliferation Prevention (IPP) project, funded the Russian Federation Nuclear Center All-Russian Scientific Institute of Experimental Physics (RFNC-VNIIEF) in Sarov. VNIIEF was funded to develop a prototype commercial infrared (IR) framing camera and to deliver it to LLNL; LLNL and Cordin were partners with VNIIEF on this project. A prototype IR camera was delivered by VNIIEF to LLNL in December 2006. In June of 2007, LLNL and Cordin evaluated the camera, and the test results revealed that it exceeded the performance of presently available commercial IR cameras. Cordin believes that the camera can be sold on the international market. The camera is currently being used as a scientific tool within Russian nuclear centers. This was originally designated as a two-year project. It was not started on time due to changes in the IPP project funding conditions; the funding was re-directed through the International Science and Technology Center (ISTC), which delayed the start by over one year. The project was not completed on schedule due to changes within the Russian government's export regulations, which control the export of high-technology items that can be used to develop military weapons; the IR camera was on the controlled list. After negotiations, the ISTC and the Russian government allowed the delivery of the camera to LLNL. There were no significant technical or business changes to the original project.

  19. STS-99 Commander Kregel poses with EARTHKAM camera on OV-105's flight deck

    NASA Image and Video Library

    2000-03-30

    STS099-314-035 (11-22 February 2000) --- Astronaut Kevin R. Kregel, mission commander, works with camera equipment used for the EarthKAM project. The camera stayed busy throughout the 11-day mission, taking vertical imagery of Earth at points of opportunity for the project. Students across the United States and in France, Germany, and Japan took photos throughout the STS-99 mission, and they are using these new photos, plus all the images already available in the EarthKAM system, to enhance their classroom learning in Earth and space science, social studies, geography, mathematics, and more.

  20. The upgrade of the H.E.S.S. cameras

    NASA Astrophysics Data System (ADS)

    Giavitto, Gianluca; Ashton, Terry; Balzer, Arnim; Berge, David; Brun, Francois; Chaminade, Thomas; Delagnes, Eric; Fontaine, Gerard; Füßling, Matthias; Giebels, Berrie; Glicenstein, Jean-Francois; Gräber, Tobias; Hinton, Jim; Jahnke, Albert; Klepser, Stefan; Kossatz, Marko; Kretzschmann, Axel; Lefranc, Valentin; Leich, Holger; Lüdecke, Hartmut; Lypova, Iryna; Manigot, Pascal; Marandon, Vincent; Moulin, Emmanuel; Naurois, Mathieu de; Nayman, Patrick; Ohm, Stefan; Penno, Marek; Ross, Duncan; Salek, David; Schade, Markus; Schwab, Thomas; Simoni, Rachel; Stegmann, Christian; Steppa, Constantin; Thornhill, Julian; Toussnel, Francois

    2017-12-01

    The High Energy Stereoscopic System (HESS) is an array of imaging atmospheric Cherenkov telescopes (IACTs) located in the Khomas highland in Namibia. It was built to detect Very High Energy (VHE > 100 GeV) cosmic gamma rays. Since 2003, HESS has discovered the majority of the known astrophysical VHE gamma-ray sources, opening a new observational window on the extreme non-thermal processes at work in our universe. HESS consists of four 12-m diameter Cherenkov telescopes (CT1-4), which started data taking in 2002, and a larger 28-m telescope (CT5), built in 2012, which lowers the energy threshold of the array to 30 GeV. The cameras of CT1-4 are currently undergoing an extensive upgrade, with the goals of reducing their failure rate, reducing their readout dead time, and improving the overall performance of the array. The entire camera electronics has been renewed from the ground up, as have the power, ventilation, and pneumatics systems and the control and data acquisition software. Only the PMTs and their HV supplies have been kept from the original cameras. Novel technical solutions have been introduced, which will find their way into some of the Cherenkov cameras foreseen for the next-generation Cherenkov Telescope Array (CTA) observatory. In particular, the camera readout system is the first large-scale system based on the analog memory chip NECTAr, which was designed for CTA cameras. The camera control subsystems and the control software framework also pursue an innovative design, exploiting cutting-edge hardware and software solutions that excel in performance, robustness, and flexibility. The CT1 camera was upgraded in July 2015 and is currently taking data; CT2-4 were upgraded in fall 2016. Together they will ensure continuous operation of HESS at its full sensitivity until, and possibly beyond, the advent of CTA. This contribution describes the design, the testing, and the in-lab and on-site performance of all components of the newly upgraded HESS cameras.

  1. Electronic Still Camera image of Astronaut Claude Nicollier working with RMS

    NASA Image and Video Library

    1993-12-05

    S61-E-006 (5 Dec 1993) --- The robot arm work of Swiss scientist Claude Nicollier was photographed with an Electronic Still Camera (ESC) and downlinked to ground controllers soon afterward. With the mission specialist's assistance, Endeavour's crew captured the Hubble Space Telescope (HST) on December 4, 1993. Four of the seven crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  2. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle-of-arrival information for light rays originating from across the target, allowing the range to the target and a 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images is generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction.
A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
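    The near-field figures quoted above follow from the usual Fraunhofer-distance rule of thumb, d = 2D^2/lambda. The sketch below reproduces them, assuming a 550 nm visible wavelength and a 3.67 m AEOS aperture; both values are our assumptions, not stated in the record.

```python
def fraunhofer_distance(aperture_m, wavelength_m=550e-9):
    """Near-field extent d = 2 * D**2 / lambda of an aperture of diameter D (meters)."""
    return 2 * aperture_m ** 2 / wavelength_m

# A 1-m telescope at 550 nm: near-field out to roughly 3,600 km,
# consistent with the ~3,500 km figure quoted in the record.
print(round(fraunhofer_distance(1.0) / 1e3), "km")

# Assuming the 3.67-m AEOS aperture: roughly 49,000 km,
# consistent with the "over 46,000 km" figure quoted in the record.
print(round(fraunhofer_distance(3.67) / 1e3), "km")
```

    The quadratic dependence on aperture is why a modest increase in telescope diameter extends the usable near-field from ground-target ranges out to LEO altitudes.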

  3. A New Electrocardiograph Employing the Cathode Ray Oscillograph as the Recording Device

    PubMed Central

    Robertson, Douglas

    1934-01-01

    The advantages of the cathode ray tube as an electrical recording instrument are unique. It has no inherent inertia, so that there is no distortion from this source as there is in every known electro-mechanical recorder. The workings of the cathode ray oscillograph are explained and discussed. Immediate visual observation of the electrocardiogram is obtained by the use of a new fluorescent screen, which is described, and the mechanism of a suitable “time base” circuit for this purpose is explained. Some of the problems associated with the design of an amplifier, distortionless as far as electrocardiography is concerned, are dealt with, including the use of long “time constants” and the employment of a suitable filter circuit. The design of a suitable camera unit (for photographic recording) is discussed. A method of neutralizing interference picked up from alternating current electric light mains is explained and illustrated. The apparatus consists of four easily portable, and mechanically robust, units: the Recorder Unit, the Amplifier Unit, the H.T. (high tension) Supply Unit, and the Camera Unit. PMID:19989971

  4. Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.

    PubMed

    Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua

    2017-05-01

    In this paper, we overcome the limited dynamic range of the conventional digital camera and propose a method for realizing high dynamic range imaging (HDRI) with a novel programmable imaging system built around a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of incident light in the DMD camera can be flexibly modulated, keeping camera pixels at reasonable exposure levels through DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement an optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate different light intensities and recover high dynamic range images. Experiments demonstrate the effectiveness of our method and implement HDRI on different objects.
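    A minimal sketch of the per-pixel coded-exposure idea: lower the DMD duty cycle where pixels saturate, raise it where they are underexposed, and divide each reading by its duty cycle to recover relative radiance. The function names, thresholds, and halving step below are illustrative assumptions, not the authors' adaptive algorithm.

```python
def adapt_modulation(image, mod, lo=0.1, hi=0.9, step=0.5):
    """One iteration of a simple per-pixel light-intensity control loop.

    image : normalized sensor readings in [0, 1], captured under duty cycles `mod`
    mod   : per-pixel DMD duty cycles in (0, 1]
    Saturated pixels get less light on the next frame; underexposed pixels get more.
    """
    out = []
    for v, m in zip(image, mod):
        if v > hi:                          # saturated: attenuate
            out.append(m * step)
        elif v < lo:                        # underexposed: pass more light
            out.append(min(m / step, 1.0))
        else:
            out.append(m)
    return out

def recover_radiance(image, mod):
    """Undo the coded exposure: relative scene radiance = reading / duty cycle."""
    return [v / m for v, m in zip(image, mod)]
```

    Iterating `adapt_modulation` over a few frames drives each pixel into its linear range, after which `recover_radiance` assembles the high dynamic range image.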

  5. The sensory power of cameras and noise meters for protest surveillance in South Korea.

    PubMed

    Kim, Eun-Sung

    2016-06-01

    This article analyzes sensory aspects of material politics in social movements, focusing on two police tools: evidence-collecting cameras and noise meters for protest surveillance. Through interviews with Korean political activists, this article examines the relationship between power and the senses in the material culture of Korean protests and asks why cameras and noise meters appeared in order to control contemporary peaceful protests in the 2000s. The use of cameras and noise meters in contemporary peaceful protests evidences the exercise of what Michel Foucault calls 'micro-power'. Building on material culture studies, this article also compares the visual power of cameras with the sonic power of noise meters, in terms of a wide variety of issues: the control of things versus words, impacts on protest size, differential effects on organizers and participants, and differences in timing regarding surveillance and punishment.

  6. Method and system for providing autonomous control of a platform

    NASA Technical Reports Server (NTRS)

    Seelinger, Michael J. (Inventor); Yoder, John-David (Inventor)

    2012-01-01

    The present application provides a system for enabling instrument placement from distances on the order of five meters, for example, and increases accuracy of the instrument placement relative to visually-specified targets. The system provides precision control of a mobile base of a rover and onboard manipulators (e.g., robotic arms) relative to a visually-specified target using one or more sets of cameras. The system automatically compensates for wheel slippage and kinematic inaccuracy ensuring accurate placement (on the order of 2 mm, for example) of the instrument relative to the target. The system provides the ability for autonomous instrument placement by controlling both the base of the rover and the onboard manipulator using a single set of cameras. To extend the distance from which the placement can be completed to nearly five meters, target information may be transferred from navigation cameras (used for long-range) to front hazard cameras (used for positioning the manipulator).

  7. A novel camera localization system for extending three-dimensional digital image correlation

    NASA Astrophysics Data System (ADS)

    Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher

    2018-03-01

    The monitoring of civil, mechanical, and aerospace structures is important, especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision system's extrinsic and intrinsic parameters. This means that the position of the cameras relative to each other (i.e., separation distance, camera angles, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between them. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitoring large structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability to determine the cameras' positions in space for performing accurate 3D-DIC calibration and measurements.
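    In planar terms, the IMU-plus-radar idea reduces to combining two absolute yaw readings into a relative camera rotation and taking the baseline length from the radar. The sketch below is a simplified assumption of ours, not the authors' method; in particular, the baseline direction still has to come from imaging a calibration target.

```python
import math

def stereo_extrinsics_2d(yaw1, yaw2, radar_distance_m):
    """Planar sketch of stereo extrinsics from IMU yaws plus a radar range.

    yaw1, yaw2 : each camera's yaw (radians) in a shared reference frame,
                 as an IMU would report it
    radar_distance_m : inter-camera distance from the 77 GHz radar
    Returns (R12, baseline): R12 rotates camera-2 axes into camera-1 axes;
    the baseline direction is not recoverable from distance alone.
    """
    d = yaw2 - yaw1
    R12 = [[math.cos(d), -math.sin(d)],
           [math.sin(d),  math.cos(d)]]
    return R12, radar_distance_m
```

    A full 3D version would use the complete IMU attitude (roll, pitch, yaw) of each camera and compose the two rotations, with the radar supplying the baseline length for the stereo triangulation scale.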

  8. PBF Control Building (PER619) south facade. Camera faces north. Note ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    PBF Control Building (PER-619) south facade. Camera faces north. Note buried tanks with bollards protecting their access hatches. Date: July 2004. INEEL negative no. HD-41-10-4 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID

  9. Comparison of parameters of modern cooled and uncooled thermal cameras

    NASA Astrophysics Data System (ADS)

    Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał

    2017-10-01

    When designing a system employing thermal cameras, one always faces the problem of choosing the camera types best suited for the task. In many cases the choice is far from optimal, and there are several reasons for that. System designers often favor tried-and-tested solutions they are used to; they do not follow the latest developments in the field of infrared technology, and sometimes their choices are based on prejudice rather than facts. The paper presents the results of measurements of basic parameters of MWIR and LWIR thermal cameras, carried out in a specialized testing laboratory. The measured parameters are decisive for the image quality generated by thermal cameras. All measurements were conducted according to current procedures and standards. However, the camera settings were not optimized for specific test conditions or parameter measurements; instead, the real settings used in normal camera operation were applied, to obtain realistic performance figures. For example, there were significant differences between measured noise parameters and the catalogue data provided by manufacturers, due to the application of edge-detection filters to increase detection and recognition ranges. The purpose of this paper is to help in choosing the optimal thermal camera for a particular application, answering the question of whether to opt for a cheaper microbolometer device or a slightly better (in terms of specifications) yet more expensive cooled unit. Measurements and analysis were performed by qualified personnel with several dozen years of experience in both designing and testing thermal camera systems with cooled and uncooled focal plane arrays. Cameras of similar array sizes and optics were compared, and for each tested group the best-performing devices were selected.

  10. Bridging FPGA and GPU technologies for AO real-time control

    NASA Astrophysics Data System (ADS)

    Perret, Denis; Lainé, Maxime; Bernard, Julien; Gratadour, Damien; Sevin, Arnaud

    2016-07-01

    Our team has developed a common environment for high-performance simulation and real-time control of AO systems based on the use of Graphics Processing Units, in the context of the COMPASS project. Such a solution, relying on the real-time core of the simulation to provide adequate computing performance, limits the cost of developing AO RTC systems and makes them more scalable. Code developed and validated in simulation may be injected directly into the system and tested on sky. Furthermore, the use of relatively low-cost components also offers significant advantages for the system hardware platform. However, the use of GPUs in an AO loop comes with drawbacks: the traditional way of offloading computation from the CPU to GPUs, involving multiple copies and unacceptable overhead in kernel launching, is not well suited to a real-time context. Real-time operation requires a solution enabling direct memory access (DMA) to GPU memory from a third-party device, bypassing the operating system; this allows the device to communicate directly with the real-time core of the simulation, feeding it the WFS camera pixel stream. We show that DMA between a custom FPGA-based frame grabber and a computation unit (GPU, FPGA, or coprocessor such as the Xeon Phi) across PCIe yields latencies compatible with what will be needed on ELTs. As a fine-grained synchronization mechanism is not yet made available by GPU vendors, we propose the use of memory polling to avoid interrupt handling and CPU involvement. Network and vision protocols are handled by the FPGA-based Network Interface Card (NIC). We present the results obtained on a complete AO loop using camera and deformable-mirror simulators.
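
    The memory-polling synchronization proposed above can be illustrated with a minimal sketch: the consumer spins on a shared "frame ready" flag instead of waiting on an interrupt, trading CPU cycles for low, deterministic latency. Threads stand in for the FPGA frame grabber and the RTC core; all names are illustrative:

```python
import threading

# Sketch of the polling pattern: the consumer busy-waits on a shared
# "frame ready" flag instead of taking an interrupt, which avoids
# interrupt-handling jitter at the cost of occupying one core.
frame_ready = threading.Event()   # stands in for a DMA-written flag word
frame_buffer = []                 # stands in for GPU memory fed by DMA

def frame_grabber():
    # The frame grabber's role: deposit the WFS pixel stream,
    # then flip the flag word.
    frame_buffer.append("wfs_pixels")
    frame_ready.set()

threading.Thread(target=frame_grabber).start()
while not frame_ready.is_set():   # memory polling, no interrupt handling
    pass
print(frame_buffer[0])            # the real-time core now owns the frame
```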

  11. Estimating the Infrared Radiation Wavelength Emitted by a Remote Control Device Using a Digital Camera

    ERIC Educational Resources Information Center

    Catelli, Francisco; Giovannini, Odilon; Bolzan, Vicente Dall Agnol

    2011-01-01

    The interference fringes produced by a diffraction grating illuminated with radiation from a TV remote control and a red laser beam are simultaneously captured by a digital camera. Based on an image with two interference patterns, an estimate of the infrared radiation wavelength emitted by a TV remote control is made. (Contains 4 figures.)
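
    The estimate rests on a simple proportionality: for the same grating and camera geometry, fringe spacing is proportional to wavelength, so the known laser line calibrates the unknown IR line. A sketch with illustrative pixel spacings; the reference wavelength assumes a He-Ne red laser, which the abstract does not specify:

```python
# Ratio-method sketch: fringe spacing is proportional to wavelength for the
# same grating and geometry. The red reference assumes a He-Ne laser; the
# pixel spacings below are illustrative, not measured values from the paper.
LAMBDA_RED_NM = 632.8     # assumed He-Ne reference wavelength
red_fringe_px = 120.0     # illustrative fringe spacing of the laser pattern
ir_fringe_px = 180.0      # illustrative fringe spacing of the remote control

lambda_ir_nm = LAMBDA_RED_NM * ir_fringe_px / red_fringe_px
print(f"estimated IR wavelength: {lambda_ir_nm:.0f} nm")  # ~949 nm
```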

  12. An Integrated System for Wildlife Sensing

    DTIC Science & Technology

    2014-08-14

    design requirement. “Sensor Controller” software. A custom Sensor Controller application was developed for the Android device in order to collect...and log readings from that device’s sensors. “Camera Controller” software. A custom Camera Controller application was developed for the Android device...into 2 separate Android applications (Figure 4). The Sensor Controller logs readings periodically from the Android device’s organic sensors, and

  13. On-ground and in-orbit characterisation plan for the PLATO CCD normal cameras

    NASA Astrophysics Data System (ADS)

    Gow, J. P. D.; Walton, D.; Smith, A.; Hailey, M.; Curry, P.; Kennedy, T.

    2017-11-01

    PLAnetary Transits and Oscillations of stars (PLATO) is the third European Space Agency (ESA) medium-class mission in ESA's Cosmic Vision programme, due for launch in 2026. PLATO will carry out high-precision, uninterrupted photometric monitoring in the visible band of large samples of bright solar-type stars. The primary mission goal is to detect and characterise terrestrial exoplanets and their systems, with emphasis on planets orbiting in the habitable zone; this will be achieved using light curves to detect planetary transits. PLATO uses a novel multi-instrument concept consisting of 26 small wide-field cameras. Each camera is made up of a telescope optical unit and four Teledyne e2v CCD270s mounted on a focal plane array and connected to a set of Front End Electronics (FEE) which provide CCD control and readout. There are two fast cameras with high readout cadence (2.5 s) for magnitude ~4-8 stars, being developed by the German Aerospace Centre, and 24 normal (N) cameras with a cadence of 25 s to monitor stars with a magnitude greater than 8. The N-FEEs are being developed at University College London's Mullard Space Science Laboratory (MSSL) and will be characterised along with the associated CCDs. The CCDs and N-FEEs will undergo rigorous on-ground characterisation, and the performance of the CCDs will continue to be monitored in orbit. This paper discusses the initial development of the experimental arrangement, the test procedures, and the current status of the N-FEE. The parameters explored will include gain, quantum efficiency, pixel response non-uniformity, dark current, and Charge Transfer Inefficiency (CTI). The current in-orbit characterisation plan is also discussed; it will enable the performance of the CCDs and their associated N-FEEs to be monitored during the mission, including measurements of CTI giving an indication of the impact of radiation damage in the CCDs.

  14. Overview of the Multi-Spectral Imager on the NEAR spacecraft

    NASA Astrophysics Data System (ADS)

    Hawkins, S. E., III

    1996-07-01

    The Multi-Spectral Imager on the Near Earth Asteroid Rendezvous (NEAR) spacecraft is a 1 Hz frame rate CCD camera sensitive in the visible and near-infrared bands (~400-1100 nm). MSI is the primary instrument on the spacecraft for determining the morphology and composition of the surface of asteroid 433 Eros. In addition, the camera will be used to assist in navigation to the asteroid. The instrument uses refractive optics and has an eight-position spectral filter wheel to select different wavelength bands. The MSI optical focal length of 168 mm gives a 2.9° × 2.25° field of view. The CCD is passively cooled, and the 537 × 244 pixel array output is digitized to 12 bits. Electronic shuttering increases the effective dynamic range of the instrument by more than a factor of 100. A one-time deployable cover protects the instrument during ground testing operations and launch. A reduced-aperture viewport permits full field-of-view imaging while the cover is in place. A Data Processing Unit (DPU) provides the digital interface between the spacecraft and the Camera Head and uses an RTX2010 processor. The DPU provides an eight-frame image buffer, lossy and lossless data compression routines, and automatic exposure control. An overview of the instrument is presented, and design parameters and trade-offs are discussed.
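
    The quoted field of view follows from the focal length and detector size via FOV = 2·arctan(d / 2f). A quick cross-check, where the implied active detector dimensions are back-calculated assumptions rather than published figures:

```python
import math

def fov_deg(sensor_mm, focal_mm):
    """Full angular field of view of a simple refractive camera."""
    return 2.0 * math.degrees(math.atan(sensor_mm / (2.0 * focal_mm)))

# Cross-check of the quoted 2.9 deg x 2.25 deg FOV at f = 168 mm; the
# detector dimensions used here are back-calculated, not published values.
print(round(fov_deg(8.5, 168.0), 2))   # ~2.9
print(round(fov_deg(6.6, 168.0), 2))   # ~2.25
```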

  15. Approach for Improving the Integrated Sensor Orientation

    NASA Astrophysics Data System (ADS)

    Mitishita, E.; Ercolin Filho, L.; Graça, N.; Centeno, J.

    2016-06-01

    The direct determination of exterior orientation parameters (EOP) of aerial images via integration of an Inertial Measurement Unit (IMU) and GPS is often used in photogrammetric mapping nowadays. The accuracy of the EOP depends on accurate sensor-mounting parameters determined when the job is performed (the offsets of the IMU relative to the projection centre and the boresight misalignment angles between the IMU and the photogrammetric coordinate system). In principle, when the EOP values do not achieve the accuracies required for the photogrammetric application, an approach known as Integrated Sensor Orientation (ISO) is used to refine the direct EOP. The ISO approach requires accurate Interior Orientation Parameters (IOP) and standard deviations of the EOP under flight conditions. This paper investigates the feasibility of using in situ camera calibration to obtain these requirements. The camera calibration uses a small sub-block of images extracted from the entire block. A digital Vexcel UltraCam XP camera connected to an APPLANIX POS AVTM system was used to acquire the two small blocks of images used in this study. The blocks have different flight heights and opposite flight directions. The proposed methodology significantly improved the vertical and horizontal accuracies of the 3D point intersection. Using a minimum set of control points, the horizontal and vertical accuracies achieved nearly one pixel of image resolution on the ground (GSD). The experimental results are shown and discussed.

  16. Quantitative analysis of the improvement in omnidirectional maritime surveillance and tracking due to real-time image enhancement

    NASA Astrophysics Data System (ADS)

    de Villiers, Jason P.; Bachoo, Asheer K.; Nicolls, Fred C.; le Roux, Francois P. J.

    2011-05-01

    Tracking targets in a panoramic image is in many senses the inverse of tracking targets with a narrow-field-of-view camera on a pan-tilt pedestal. For a narrow-field-of-view camera tracking a moving target, the object is constant and the background is changing. A panoramic camera is able to model the entire scene, or background; the areas it cannot model well are the potential targets, which typically subtend far fewer pixels in the panoramic view than in the narrow field of view. The outputs of an outward-staring array of calibrated machine vision cameras are stitched into a single omnidirectional panorama and used to observe False Bay near Simon's Town, South Africa. A ground-truth data set was created by geo-aligning the camera array and placing a differential Global Positioning System receiver on a small target boat, allowing its position in the array's field of view to be determined. Common tracking techniques, including level sets, Kalman filters, and particle filters, were implemented to run on the central processing unit of the tracking computer. Image enhancement techniques, including multi-scale tone mapping, interpolated local histogram equalisation, and several sharpening techniques, were implemented on the graphics processing unit. An objective measurement of each tracking algorithm's robustness in the presence of sea glint, low-contrast visibility, and sea clutter (such as whitecaps) is performed on the raw recorded video data. These results are then compared to those obtained with the enhanced video data.
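
    Of the trackers compared above, the Kalman filter is the most compact to sketch. A minimal one-dimensional constant-velocity filter of the same family; the noise settings and test track are illustrative, not values from the paper:

```python
# Minimal 1-D constant-velocity Kalman filter, sketching the tracker family
# used above; process/measurement noise values are illustrative.
# State is (position, velocity); measurements zs are positions.
def kalman_track(zs, dt=1.0, q=0.01, r=1.0):
    x, v = zs[0], 0.0
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    estimates = [x]
    for z in zs[1:]:
        # Predict with the constant-velocity model.
        x = x + v * dt
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with the new position measurement.
        y = z - x                          # innovation
        S = P[0][0] + r                    # innovation covariance
        K = (P[0][0] / S, P[1][0] / S)     # Kalman gain
        x += K[0] * y
        v += K[1] * y
        P = [[(1.0 - K[0]) * P[0][0], (1.0 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        estimates.append(x)
    return estimates

track = kalman_track([float(i) for i in range(10)])
print(track[-1])  # converges toward the true final position, 9.0
```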

  17. ETR ELECTRICAL BUILDING, TRA648. EMERGENCY STANDBY GENERATOR AND DIESEL UNIT. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    ETR ELECTRICAL BUILDING, TRA-648. EMERGENCY STANDBY GENERATOR AND DIESEL UNIT. METAL ROOF AND PUMICE BLOCK WALLS. CAMERA FACING SOUTHWEST. INL NEGATIVE NO. 56-3708. R.G. Larsen, Photographer, 11/13/1956 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  18. Improving Robotic Operator Performance Using Augmented Reality

    NASA Technical Reports Server (NTRS)

    Maida, James C.; Bowen, Charles K.; Pace, John W.

    2007-01-01

    The Special Purpose Dexterous Manipulator (SPDM) is a two-armed robot that functions as an extension to the end effector of the Space Station Remote Manipulator System (SSRMS), currently in use on the International Space Station (ISS). Crew training for the SPDM is accomplished using a robotic hardware simulator, which performs most SPDM functions under normal static Earth gravitational forces. Both the simulator and the SPDM are controlled from a standard robotic workstation using a laptop for the user interface and three monitors for camera views. Most operations anticipated for the SPDM involve the manipulation, insertion, and removal of any of several types of Orbital Replaceable Unit (ORU), modules which control various ISS functions. Alignment tolerances for insertion of an ORU into its receptacle are 0.25 inch and 0.5 degree from nominal values. The pre-insertion alignment task must be performed within these tolerances using the available video camera views of the intrinsic features of the ORU and receptacle, without special registration markings. Since optimum camera views may not be available, and dynamic orbital lighting conditions may limit periods of viewing, a successful ORU insertion operation may require an extended period of time. This study explored the feasibility of using augmented reality (AR) to assist SPDM operations. Geometric graphical symbols were overlaid on one of the workstation monitors to provide cues that assist the operator in attaining adequate pre-insertion ORU alignment. Twelve skilled subjects performed eight ORU insertion tasks using the simulator, with and without the AR symbols, in a repeated-measures experimental design. Results indicated that using the AR symbols reduced pre-insertion alignment error for all subjects and reduced the time to complete pre-insertion alignment for most subjects.
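
    The pass/fail criterion for pre-insertion alignment can be stated directly from the quoted tolerances. A trivial sketch; the function and parameter names are illustrative, not from the study:

```python
# Direct encoding of the quoted pre-insertion tolerances: 0.25 inch and
# 0.5 degree from nominal. Function and argument names are illustrative.
def alignment_ok(offset_in, angle_deg, max_in=0.25, max_deg=0.5):
    return abs(offset_in) <= max_in and abs(angle_deg) <= max_deg

print(alignment_ok(0.10, 0.3))  # True: within both tolerances
print(alignment_ok(0.30, 0.2))  # False: lateral offset too large
```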

  19. The ISS Fluids Integrated Rack (FIR): a Summary of Capabilities

    NASA Astrophysics Data System (ADS)

    Gati, F.; Hill, M. E.

    2002-01-01

    The Fluids Integrated Rack (FIR) is a modular, multi-user scientific research facility that will fly in the U.S. laboratory module, Destiny, of the International Space Station (ISS). The FIR will be one of the two racks that will make up the Fluids and Combustion Facility (FCF) - the other being the Combustion Integrated Rack (CIR). The ISS will provide the FCF with the necessary resources, such as power and cooling. While the ISS crew will be available for experiment operations, their time will be limited. The FCF is, therefore, being designed for autonomous and remote-control operations. Control of the FCF will be primarily through the Telescience Support Center (TSC) at the Glenn Research Center. The FCF is being designed to accommodate a wide range of combustion and fluids physics experiments within the ISS resources and constraints. The primary mission of the FIR, however, is to accommodate experiments from four major fluids physics disciplines: Complex Fluids; Multiphase Flow and Heat Transfer; Interfacial Phenomena; and Dynamics and Stability. The design of the FIR is flexible enough to accommodate experiments from other science disciplines such as Biotechnology. The FIR's flexibility is a result of the large volume dedicated to experimental hardware, easily reconfigurable diagnostics that allow for unique experiment configurations, and its customizable software. The FIR will utilize six major subsystems to accommodate this broad scope of fluids physics experiments. The major subsystems are: structural, environmental, electrical, gaseous, command and data management, and imagers and illumination. Within the rack, the FIR's structural subsystem provides an optics-bench-type mechanical interface for the precise mounting of experimental hardware, including optical components. The back of the bench is populated with FIR avionics packages and light sources.
    The interior of the rack is isolated from the cabin through two rack doors that are hinged near the top and bottom of the rack. Transmission of micro-gravity disturbances to and from the rack is minimized through the Active Rack Isolation System (ARIS). The environmental subsystem will utilize air and water to remove heat generated by facility and experimental hardware. The air will be circulated throughout the rack and will be cooled by an air-water heat exchanger. Water will be used directly to cool some of the FIR components and will also be available to cool experiment hardware as required. The electrical subsystem includes the Electrical Power Control Unit (EPCU), which provides 28 VDC and 120 VDC power to the facility and the experiment hardware. The EPCU will also provide power management and control functions, as well as fault protection capabilities. The FIR will provide access to the ISS gaseous nitrogen and vacuum systems. These systems are available to support experiment operations such as the purging of experimental cells, creating flows within experimental cells and providing dry conditions where needed. The FIR Command and Data Management subsystem (CDMS) provides command and data handling for both facility and experiment hardware. The Input Output Processor (IOP) provides the overall command and data management functions for the rack including downlinking or writing data to removable drives. The IOP will also monitor the health and status of the rack subsystems. The Image Processing and Storage Units (IPSU) will perform diagnostic control and image data acquisition functions. An IPSU will be able to control a digital camera, receive image data from that camera and process/compress image data as necessary. The Fluids Science and Avionics Package (FSAP) will provide the primary control over an experiment. The FSAP contains various computer boards/cards that will perform data and control functions.
    To support the imaging needs, cameras and illumination sources will be available to the investigator. Both color analog and black-and-white digital cameras with lenses are expected. These cameras will be capable of high resolution and, separately, frame rates up to 32,000 frames per second. Lenses for these cameras will provide both microscopic and macroscopic views. The FIR will provide two illumination sources, a 532 nm Nd:YAG laser and a white light source, both with adjustable power output. The FIR systems are being designed to maximize the amount of science that can be done on-orbit, and experiments will be designed for efficient operation. Each individual experiment must determine the best configuration of facility capabilities and resources, augmented with specific experiment hardware. Efficient operations will be accomplished via a combination of on-orbit physical component change-outs or processing by the crew, and software updates via ground commanding or by the crew. Careful coordination by ground and on-orbit personnel regarding the on-orbit storage and downlinking of image data will also be very important.

  20. A Robust Mechanical Sensing System for Unmanned Sea Surface Vehicles

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric A.; Magnone, Lee J.; Huntsberger, Terrance; Aghazarian, Hrand; Padgett, Curtis W.; Trotz, David C.; Garrett, Michael S.

    2009-01-01

    The need for autonomous navigation and intelligent control of unmanned sea surface vehicles requires a mechanically robust sensing architecture that is watertight, durable, and insensitive to vibration and shock loading. The sensing system developed here comprises four black-and-white cameras and a single color camera. The cameras are rigidly mounted to a camera bar that can be reconfigured for mounting on multiple vehicles, and they act as both navigational cameras and application cameras. The cameras are housed in watertight casings to protect them and their electronics from moisture and wave splashes. Two of the black-and-white cameras are positioned to provide lateral vision: they are angled away from the front of the vehicle at horizontal angles to provide ideal fields of view for mapping and autonomous navigation. The other two black-and-white cameras are positioned at an angle into the color camera's field of view to support vehicle applications; these two cameras provide an overlap, as well as a backup to the front camera. The color camera is positioned directly in the middle of the bar, aimed straight ahead. This system is applicable to any sea-going vehicle, both on Earth and in space.

  1. LEONA: Transient Luminous Event and Thunderstorm High Energy Emission Collaborative Network in Latin America

    NASA Astrophysics Data System (ADS)

    Sao Sabbas, F. T.

    2012-12-01

    This project has the goal of establishing the collaborative network LEONA to study the electrodynamical coupling of the atmospheric layers signaled by Transient Luminous Events (TLEs) and high-energy emissions from thunderstorms. We will develop and install a remotely controlled network of cameras to perform TLE observations at different locations in South America, and one neutron detector in southern Brazil. The camera network will allow building a continuous data set of the phenomena studied in this continent. The first two trial units of the camera network are already installed, in Brazil and Peru, and two more will be installed by December 2012, in Argentina and Brazil. We expect to determine the TLE geographic distribution, occurrence rate, morphology, and possible coupling with other geophysical phenomena in South America, such as the South Atlantic Magnetic Anomaly (SAMA). We also expect to study thunderstorm neutron emissions in a region of intense electrical activity, measuring neutron fluxes with high time resolution simultaneously with TLEs and lightning for the first time in South America. Using an intensified high-speed camera for TLE observation during two campaigns, we expect to determine the duration and spatial-temporal development of the TLEs observed, to study the structure and initiation of sprites, and to measure the velocity of development of sprite structures and the sprite delay. The camera was acquired via the FAPESP project DEELUMINOS (2005-2010), which also nucleated our research group Atmospheric Electrodynamical Coupling (ACATMOS). LEONA will nucleate this research in other institutions in Brazil and other countries in South America, providing continuity for this important research in our region.
The camera network will be a unique tool for performing consistent long-term TLE observation, and in fact it is the only way to accumulate a data set for a climatological study of South America, since satellite instrumentation is turned off in this region to avoid damage from the South Atlantic Magnetic Anomaly (SAMA). Thus this project is not only a potential benchmark in TLE research, creating a collaborative network in Latin America and nucleating this research locally; it is also strategic, since LEONA's camera network will be able to provide extremely valuable information to fill the gap that most satellite measurements leave.

  2. Projection of controlled repeatable real-time moving targets to test and evaluate motion imagery quality

    NASA Astrophysics Data System (ADS)

    Scopatz, Stephen D.; Mendez, Michael; Trent, Randall

    2015-05-01

    The projection of controlled moving targets is key to the quantitative testing of video capture and post-processing for Motion Imagery. This presentation will discuss several implementations of target projectors with moving targets, or apparently moving targets, creating motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable, and programmable way; several short videos will be included in the presentation. Among the technical approaches will be targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS 4-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash, an impulse of broadband photons lasting fewer than 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. A traditional approach of gimbal-mounting the camera in combination with the moving target projector is discussed as an alternative to high-priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed, corresponding to MTF (resolution), SNR, and minimum detectable signal at velocity. Several unique metrics are suggested for Motion Imagery, including Maximum Velocity Resolved (a measure of the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (a measurement of tracking ability when the target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development, as well as in comparing various systems by presenting the exact same scenes to the cameras in a repeatable way.

  3. LPT. Shield test control building (TAN645), north facade. Camera facing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LPT. Shield test control building (TAN-645), north facade. Camera facing south. Obsolete sign dating from post-1970 program says "Energy and Systems Technology Experimental Facility, INEL." INEEL negative no. HD-40-5-4 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  4. Cameras for semiconductor process control

    NASA Technical Reports Server (NTRS)

    Porter, W. A.; Parker, D. L.

    1977-01-01

    The application of X-ray topography to semiconductor process control is described, considering the novel features of the high-speed camera and the difficulties associated with the technique. The most significant results on the effects of material defects on device performance are presented, including results obtained using wafers processed entirely within this institute. Defects were identified using the X-ray camera, and correlations were made with probe data. Temperature-dependent effects of material defects are also included. Recent applications and improvements of X-ray topographs of silicon-on-sapphire and gallium arsenide are presented, with a description of a real-time TV system prototype and of the most recent vacuum-chuck design. The promotion of the camera's use by various semiconductor manufacturers is also discussed.

  5. Optical Meteor Systems Used by the NASA Meteoroid Environment Office

    NASA Technical Reports Server (NTRS)

    Kingery, A. M.; Blaauw, R. C.; Cooke, W. J.; Moser, D. E.

    2015-01-01

    The NASA Meteoroid Environment Office (MEO) uses two main meteor camera networks to characterize the meteoroid environment: an all-sky system and a wide-field system, to study cm- and mm-size meteors respectively. The NASA All Sky Fireball Network consists of fifteen meteor video cameras in the United States, with plans to expand to eighteen cameras by the end of 2015. The camera design and the All-Sky Guided and Real-time Detection (ASGARD) meteor detection software [1, 2] were adopted from the University of Western Ontario's Southern Ontario Meteor Network (SOMN). After seven years of operation, the network has detected over 12,000 multi-station meteors, including meteors from at least 53 different meteor showers. The network is used for speed distribution determination, characterization of meteor showers and sporadic sources, and informing the public about bright meteor events. The NASA Wide Field Meteor Network was established in December 2012 with two cameras and expanded to eight cameras in December 2014. The two-camera configuration saw 5470 meteors over two years of operation, and the eight-camera network detected 3423 meteors in its first five months of operation (Dec 12, 2014 - May 12, 2015). We expect to see over 10,000 meteors per year with the expanded system. The cameras have a 20 degree field of view and an approximate limiting meteor magnitude of +5. The network's primary goal is determining the nightly shower and sporadic meteor fluxes. Both camera networks function almost fully autonomously, with little human interaction required for upkeep and analysis. The cameras send their data to a central server for storage and automatic analysis, and every morning the server automatically generates an e-mail and web page containing an analysis of the previous night's events. The current status of the networks is described, along with preliminary results and future projects, including CCD photometry and a broadband meteor color camera system.

  6. Correction And Use Of Jitter In Television Images

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.

    1989-01-01

    Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. In alternative version, system controls lateral motion of camera to generate stereoscopic views for measuring distances to objects. In another version, motion of camera is controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser," which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera are sent to logic circuits and processed into corrections for motion along and across line of sight.
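
    The jitter correction can be sketched as an inverse shift: the motion sensors give the apparent image displacement, and the frame buffer is resampled to undo it. A minimal version, assuming a pure integer-pixel translation (real jitter also has sub-pixel and rotational components):

```python
# Sketch of the "jitter-miser" idea: undo the apparent image shift (dx, dy)
# reported by the camera's motion sensors by resampling the frame buffer.
def stabilize(frame, dx, dy, fill=0):
    """frame: 2-D list of pixels; (dx, dy): apparent shift in pixels."""
    h, w = len(frame), len(frame[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y + dy, x + dx   # sample from the displaced position
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = frame[sy][sx]
    return out

jittered = [[1, 2], [3, 4]]
print(stabilize(jittered, 1, 0))  # [[2, 0], [4, 0]]
```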

  7. An evaluation of fish behavior upstream of the water temperature control tower at Cougar Dam, Oregon, using acoustic cameras, 2013

    USGS Publications Warehouse

    Adams, Noah S.; Smith, Collin; Plumb, John M.; Hansen, Gabriel S.; Beeman, John W.

    2015-07-06

    This report describes the initial year of a 2-year study to determine the feasibility of using acoustic cameras to monitor fish movements to help inform decisions about fish passage at Cougar Dam near Springfield, Oregon. Specifically, we used acoustic cameras to measure fish presence, travel speed, and direction adjacent to the water temperature control tower in the forebay of Cougar Dam during the spring (May, June, and July) and fall (September, October, and November) of 2013. Cougar Dam is a high-head flood-control dam, and the water temperature control tower enables depth-specific water withdrawals to facilitate adjustment of water temperatures released downstream of the dam. The acoustic cameras were positioned at the upstream entrance of the tower to monitor free-ranging subyearling and yearling-size juvenile Chinook salmon (Oncorhynchus tshawytscha). Because of the large size discrepancy, we could distinguish juvenile Chinook salmon from their predators, which enabled us to measure predators and prey in areas adjacent to the entrance of the tower. We used linear models to quantify and assess operational and environmental factors—such as time of day, discharge, and water temperature—that may influence juvenile Chinook salmon movements within the beam of the acoustic cameras. Although extensive milling behavior of fish near the structure may have masked directed movement of fish and added unpredictability to fish movement models, the acoustic-camera technology enabled us to ascertain the general behavior of discrete size classes of fish. Fish travel speed, direction of travel, and counts of fish moving toward the water temperature control tower primarily were influenced by the amount of water being discharged through the dam.

  8. CICADA -- Configurable Instrument Control and Data Acquisition

    NASA Astrophysics Data System (ADS)

    Young, Peter J.; Roberts, William H.; Sebo, Kim M.

    CICADA (Young et al. 1997) is a multi-process, distributed application for the control of astronomical data acquisition systems. It comprises elements that control the operation of, and the data flow from, CCD camera systems, as well as the operation of telescope instrument control systems. CICADA can be used to dynamically configure support for astronomical instruments made up of multiple cameras and multiple instrument controllers. Each camera is described by a hierarchy of parts that are individually configured and linked together. Most of CICADA is written in C++, and much of its configurability comes from the use of inheritance and polymorphism. An example of a multiple-part instrument configuration -- a wide field imager (WFI) -- is described here. WFI, presently under construction, is made up of eight 2k x 4k CCDs with dual SDSU II controllers and will be used at Siding Spring's ANU 40in and AAO 3.9m telescopes.

  9. New Modular Camera No Ordinary Joe

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Although dubbed 'Little Joe' for its small-format characteristics, a new wavefront sensor camera has proved that it is far from coming up short when paired with high-speed, low-noise applications. SciMeasure Analytical Systems, Inc., a provider of cameras and imaging accessories for use in biomedical research and industrial inspection and quality control, is the eye behind Little Joe's shutter, manufacturing and selling the modular, multi-purpose camera worldwide to advance fields such as astronomy, neurobiology, and cardiology.

  10. STREAK CAMERA MEASUREMENTS OF THE APS PC GUN DRIVE LASER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dooling, J. C.; Lumpkin, A. H.

    We report recent pulse-duration measurements of the APS PC Gun drive laser at both second-harmonic and fourth-harmonic wavelengths. The drive laser is a Nd:Glass-based chirped-pulse amplifier (CPA) operating at an IR wavelength of 1053 nm, twice frequency-doubled to obtain UV output for the gun. A Hamamatsu C5680 streak camera and an M5675 synchroscan unit are used for these measurements; the synchroscan unit is tuned to 119 MHz, the 24th subharmonic of the linac S-band operating frequency. Calibration is accomplished both electronically and optically. Electronic calibration utilizes a programmable delay line in the 119 MHz rf path. The optical delay uses an etalon with known spacing between reflecting surfaces, coated for the visible, SH wavelength. IR pulse duration is monitored with an autocorrelator. Fitting the streak camera image projected profiles with Gaussians, UV rms pulse durations are found to vary from 2.1 ps to 3.5 ps as the IR varies from 2.2 ps to 5.2 ps.
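
    The rms pulse durations quoted above come from Gaussian fits to the projected streak-camera profiles. As an illustrative sketch only (not the authors' analysis code, and with hypothetical sample values), the rms width of a measured temporal profile can also be estimated directly from its intensity-weighted second central moment:

```python
from math import sqrt

def rms_duration(times_ps, intensities):
    """Intensity-weighted rms width of a temporal profile.

    times_ps: sample times (e.g. in picoseconds); intensities: the
    projected streak-camera profile at those times. For a Gaussian
    profile this moment-based estimate agrees with the fitted sigma.
    """
    total = sum(intensities)
    mean_t = sum(t * i for t, i in zip(times_ps, intensities)) / total
    var = sum(i * (t - mean_t) ** 2
              for t, i in zip(times_ps, intensities)) / total
    return sqrt(var)
```

    For a symmetric three-point profile [1, 2, 1] at times [0, 1, 2] ps, the estimate is sqrt(0.5), roughly 0.71 ps.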

  11. Inexpensive Neutron Imaging Cameras Using CCDs for Astronomy

    NASA Astrophysics Data System (ADS)

    Hewat, A. W.

    We have developed inexpensive neutron imaging cameras using CCDs originally designed for amateur astronomical observation. The low-light, high resolution requirements of such CCDs are similar to those for neutron imaging, except that noise as well as cost is reduced by using slower read-out electronics. For example, we use the same 2048x2048 pixel Kodak KAI-4022 CCD as used in the high performance PCO-2000 CCD camera, but our electronics requires ∼5 sec for full-frame read-out, ten times slower than the PCO-2000. Since neutron exposures also require several seconds, this is not seen as a serious disadvantage for many applications. If higher frame rates are needed, the CCD unit on our camera can be easily swapped for a faster readout detector with similar chip size and resolution, such as the PCO-2000 or the sCMOS PCO.edge 4.2.

  12. On The Export Control Of High Speed Imaging For Nuclear Weapons Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, Scott Avery; Altherr, Michael Robert

    Since the Manhattan Project, the use of high-speed photography and its cousins, flash radiography [1] and schlieren photography, has been a technological proliferation concern. Indeed, like the supercomputer, the development of high-speed photography as we now know it essentially grew out of the nuclear weapons program at Los Alamos [2,3,4]. Naturally, during the course of the last 75 years the technology associated with computers and cameras has been export controlled by the United States and others, to prevent both proliferation among non-P5 nations and technological parity among potential adversaries within the P5 nations. Here we revisit these issues as they relate to high-speed photographic technologies and make recommendations about how future restrictions, if any, should be guided.

  13. Parallel-Processing Software for Correlating Stereo Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric

    2007-01-01

    A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.
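
    The partitioning scheme described above can be sketched in a few lines. This is a hypothetical illustration, not the flight software: each subimage (here, a single image row) is an independent correlation task handed to a pool of workers, and each row's disparity is found by minimizing the sum of squared differences over candidate shifts.

```python
from concurrent.futures import ThreadPoolExecutor

def best_disparity(left_row, right_row, max_shift=3):
    """Shift d minimizing the sum of squared differences between
    left_row[x] and right_row[x - d]."""
    best, best_err = 0, float("inf")
    for d in range(max_shift + 1):
        err = sum((left_row[x] - right_row[x - d]) ** 2
                  for x in range(d, len(left_row)))
        if err < best_err:
            best, best_err = d, err
    return best

def correlate_stereo(left, right, workers=4):
    # Each subimage (row) is assigned to one of several workers,
    # mirroring the one-subimage-per-CPU partitioning in the abstract.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(best_disparity, left, right))
```

    For a toy pair in which the right image is the left image shifted by two pixels, every row correlates at disparity 2.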

  14. KSC-02pd1130

    NASA Image and Video Library

    2002-07-10

    KENNEDY SPACE CENTER, FLA. -- With the engines removed from Endeavour, the flow line can be inspected. On the right, Gerry Kathka, with United Space Alliance, hands part of a fiber-optic camera system to Scott Minnick, left. Minnick wears a special viewing apparatus that sees where the camera is going. The inspection is the result of small cracks being discovered on the LH2 Main Propulsion System (MPS) flow liners in other orbiters. Endeavour is next scheduled to fly on mission STS-113.

  15. Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.

    PubMed

    Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K

    2010-09-01

    We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) We present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.
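
    Because each filter type covers only a sparse subset of the mosaic, every channel must be reconstructed from scattered samples. The paper's contribution is an aliasing-minimizing reconstruction; the naive alternative it improves on is plain interpolation, sketched here for a 1-D channel sampled every `step` pixels (illustrative code, not the authors' algorithm):

```python
def fill_channel(samples, step):
    """Linearly interpolate a channel known only at positions
    0, step, 2*step, ... back to a dense 1-D signal."""
    out = []
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        for j in range(step):
            out.append(a + (b - a) * j / step)  # j/step of the way from a to b
    out.append(samples[-1])
    return out
```

    For example, `fill_channel([0, 4], 4)` yields `[0, 1, 2, 3, 4]`; on real mosaics, such naive interpolation is exactly what produces the aliasing that the GAP reconstruction algorithm is designed to avoid.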

  16. STS-109 Crew Interviews: James H. Newman

    NASA Technical Reports Server (NTRS)

    2002-01-01

    STS-109 Mission Specialist James H. Newman is seen during a prelaunch interview. He answers questions about his inspiration to become an astronaut, his career path, and his most memorable experiences. He gives details on the mission's goals and objectives, which focus on the refurbishing of the Hubble Space Telescope, and his role in the mission. He provides a brief background on the Hubble Space Telescope, and explains the plans for the rendezvous of the Columbia Orbiter with the Hubble Space Telescope. He provides details and timelines for each of the planned Extravehicular Activities (EVAs), which include replacing the solar arrays, changing the Power Control Unit, installing the Advanced Camera for Surveys (ACS), and installing a new Cryocooler for the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). He gives further explanation of each of these pieces of equipment. He also describes the break-out plan in place for these spacewalks. The interview ends with Newman explaining the details of a late addition to the mission's tasks, which is to replace a reaction wheel on the Hubble Space Telescope.

  17. Low cost thermal camera for use in preclinical detection of diabetic peripheral neuropathy in primary care setting

    NASA Astrophysics Data System (ADS)

    Joshi, V.; Manivannan, N.; Jarry, Z.; Carmichael, J.; Vahtel, M.; Zamora, G.; Calder, C.; Simon, J.; Burge, M.; Soliz, P.

    2018-02-01

    Diabetic peripheral neuropathy (DPN) accounts for around 73,000 lower-limb amputations annually in the US on patients with diabetes. Early detection of DPN is critical. Current clinical methods for diagnosing DPN are subjective and effective only at later stages. Until recently, thermal cameras used for medical imaging have been expensive and hence prohibitive to install in a primary care setting. The objective of this study is to compare results from a low-cost thermal camera with a high-end thermal camera used in screening for DPN. Thermal imaging has demonstrated changes in microvascular function that correlate with nerve function affected by DPN. The limitations of using low-cost cameras for DPN imaging are lower resolution (active pixels), frame rate, and thermal sensitivity. We integrated two FLIR Lepton sensors (80x60 active pixels, 50° HFOV, thermal sensitivity < 50 mK) as one unit. The right and left cameras record videos of the right and left foot, respectively. A compatible embedded system (Raspberry Pi 3 Model B v1.2) is used to configure the sensors and to capture and stream the video via Ethernet. The resulting video has 160x120 active pixels (8 frames/second). We compared the temperature measurements of feet obtained using the low-cost camera against the gold-standard high-end FLIR SC305. Twelve subjects (aged 35-76) were recruited. The difference in temperature measurements between cameras was calculated for each subject, and the results show that the difference between the temperature measurements of the two cameras (mean difference=0.4, p-value=0.2) is not statistically significant. We conclude that the low-cost thermal camera system shows potential for use in detecting early signs of DPN in under-served and rural clinics.
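
    The reported comparison (mean difference = 0.4, p = 0.2) is a paired test across subjects. A minimal sketch of that kind of analysis, with invented temperature values rather than the study's data, computes the per-subject differences and a paired t statistic (converting t to a p-value requires the t distribution, omitted here):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(low_cost, high_end):
    """Mean per-subject difference and paired t statistic for two
    matched series of temperature readings."""
    diffs = [a - b for a, b in zip(low_cost, high_end)]
    md = mean(diffs)
    t = md / (stdev(diffs) / sqrt(len(diffs)))
    return md, t
```

    For hypothetical readings [1.0, 2.5, 3.0, 4.5] versus [0.5, 2.0, 3.0, 4.0], the mean difference is 0.375 and t = 3.0.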

  18. Operator vision aids for space teleoperation assembly and servicing

    NASA Technical Reports Server (NTRS)

    Brooks, Thurston L.; Ince, Ilhan; Lee, Greg

    1992-01-01

    This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to definitively specify appropriate requirements. This paper presents these visual aids in three general categories: camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions, concepts are discussed for: (1) automatic end effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer assisted camera/lighting placement; and (5) voice control. In the technology area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation and location are discussed.

  19. A Major Upgrade of the H.E.S.S. Cherenkov Cameras

    NASA Astrophysics Data System (ADS)

    Lypova, Iryna; Giavitto, Gianluca; Ashton, Terry; Balzer, Arnim; Berge, David; Brun, Francois; Chaminade, Thomas; Delagnes, Eric; Fontaine, Gerard; Füßling, Matthias; Giebels, Berrie; Glicenstein, Jean-Francois; Gräber, Tobias; Hinton, Jim; Jahnke, Albert; Klepser, Stefan; Kossatz, Marko; Kretzschmann, Axel; Lefranc, Valentin; Leich, Holger; Lüdecke, Hartmut; Manigot, Pascal; Marandon, Vincent; Moulin, Emmanuel; de Naurois, Mathieu; Nayman, Patrick; Ohm, Stefan; Penno, Marek; Ross, Duncan; Salek, David; Schade, Markus; Schwab, Thomas; Simoni, Rachel; Stegmann, Christian; Steppa, Constantin; Thornhill, Julian; Toussnel, Francois

    2017-03-01

    The High Energy Stereoscopic System (H.E.S.S.) is an array of imaging atmospheric Cherenkov telescopes (IACTs) located in Namibia. It was built to detect Very High Energy (VHE, >100 GeV) cosmic gamma rays, and consists of four 12 m diameter Cherenkov telescopes (CT1-4), built in 2003, and a larger 28 m telescope (CT5), built in 2012. The larger mirror surface of CT5 permits lowering the energy threshold of the array down to 30 GeV. The cameras of CT1-4 are currently undergoing an extensive upgrade, with the goals of reducing their failure rate, reducing their readout dead time and improving the overall performance of the array. The entire camera electronics has been renewed from the ground up, as well as the power, ventilation and pneumatics systems, and the control and data acquisition software. Technical solutions foreseen for the next-generation Cherenkov Telescope Array (CTA) observatory have been introduced; most notably, the readout is based on the NECTAr analog memory chip. The camera control subsystems and the control software framework also pursue an innovative design, increasing the camera performance, robustness and flexibility. The CT1 camera was upgraded in July 2015 and is currently taking data; CT2-4 will be upgraded in Fall 2016. Together they will assure continuous operation of H.E.S.S. at its full sensitivity until, and possibly beyond, the advent of CTA. This contribution describes the design, the testing and the in-lab and on-site performance of all components of the newly upgraded H.E.S.S. camera.

  20. EVA 3 - Linnehan portrait

    NASA Image and Video Library

    2002-03-06

    STS109-322-028 (6 March 2002) --- Astronaut Richard M. Linnehan, STS-109 mission specialist, participates in the third of five space walks to perform work on the Hubble Space Telescope (HST). Linnehan's sun shield reflects astronaut John M. Grunsfeld and the blue and white Earth's hemisphere as well as one of the telescope's new solar arrays. The third overall STS-109 extravehicular activity (EVA) marked the second of three for Linnehan and Grunsfeld, payload commander. On this particular walk, the two turned off the telescope in order to replace the power control unit or PCU--the heart of its power system. Grunsfeld took this photo with a 35mm camera.

  1. Space telescope optical telescope assembly/scientific instruments. Phase B: -Preliminary design and program definition study; Volume 2A: Planetary camera report

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Development of the F/48, F/96 Planetary Camera for the Large Space Telescope is discussed. Instrument characteristics, optical design, and CCD camera submodule thermal design are considered along with structural subsystem and thermal control subsystem. Weight, electrical subsystem, and support equipment requirements are also included.

  2. HST Solar Arrays photographed by Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This close-up view of one of two Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  3. The ASTRI SST-2M telescope prototype for the Cherenkov Telescope Array: camera DAQ software architecture

    NASA Astrophysics Data System (ADS)

    Conforti, Vito; Trifoglio, Massimo; Bulgarelli, Andrea; Gianotti, Fulvio; Fioretti, Valentina; Tacchini, Alessandro; Zoli, Andrea; Malaguti, Giuseppe; Capalbi, Milvia; Catalano, Osvaldo

    2014-07-01

    ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a Flagship Project financed by the Italian Ministry of Education, University and Research, and led by INAF, the Italian National Institute of Astrophysics. Within this framework, INAF is currently developing an end-to-end prototype of a Small Size dual-mirror Telescope. In a second phase, the ASTRI project foresees the installation of the first elements of the array at the CTA southern site, a mini-array of 7 telescopes. The ASTRI Camera DAQ Software is aimed at Camera data acquisition, storage and display during Camera development, as well as during commissioning and operations on the ASTRI SST-2M telescope prototype that will operate at the INAF observing station located at Serra La Nave on Mount Etna (Sicily). The Camera DAQ configuration and operations will be sequenced either through local operator commands or through remote commands received from the Instrument Controller System that commands and controls the Camera. The Camera DAQ software will acquire data packets through a direct one-way socket connection with the Camera Back End Electronics. In near real time, the data will be stored in both raw and FITS format. The DAQ Quick Look component will allow the operator to display the Camera data packets in near real time. We are developing the DAQ software adopting the iterative and incremental model in order to maximize software reuse and to implement a system which is easily adaptable to changes. This contribution presents the Camera DAQ Software architecture with particular emphasis on its potential reuse for the ASTRI/CTA mini-array.

  4. Center for Coastline Security Technology, Year 3

    DTIC Science & Technology

    2008-05-01

    This report describes a 3D imaging and projection system that combines a pair of FAU's HDMAX video cameras with a pair of Sony SRX-R105 digital cinema projectors for stereo imaging and projection. Topics include polarization control for 3D imaging with the Sony SRX-R105 digital cinema projectors, the HDMAX camera and SRX-R105 projector configuration for 3D display, and the effect of camera rotation on the projected overlay image.

  5. 70. VIEW OF UNIT 2 THROUGH ACCESS DOOR, LOOKING DOWN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    70. VIEW OF UNIT 2 THROUGH ACCESS DOOR, LOOKING DOWN AT MAIN SHAFT. NOTE WELDER'S SIGNATURE IN SHADOWS IN UPPER LEFT CORNER AND PHOTOGRAPHER'S STROBE POWER CABLE IN LOWER RIGHT CORNER. ORIENTATION OF CAMERA IS FACING LEFT BANK, PERPENDICULAR TO RIVER FLOW - Swan Falls Dam, Snake River, Kuna, Ada County, ID

  6. Solid state television camera (CCD-buried channel)

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The development of an all solid state television camera, which uses a buried channel charge coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array is utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control (i.e., ALC and AGC) techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

  7. Solid state television camera (CCD-buried channel), revision 1

    NASA Technical Reports Server (NTRS)

    1977-01-01

    An all solid state television camera was designed which uses a buried channel charge coupled device (CCD) as the image sensor. A 380 x 488 element CCD array is utilized to ensure compatibility with 525-line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (1) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (2) techniques for the elimination or suppression of CCD blemish effects, and (3) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

  8. Solid state, CCD-buried channel, television camera study and design

    NASA Technical Reports Server (NTRS)

    Hoagland, K. A.; Balopole, H.

    1976-01-01

    An investigation of an all solid state television camera design, which uses a buried channel charge-coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array was utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a design which addresses the program requirements for a deliverable solid state TV camera.

  9. Combined use of a priori data for fast system self-calibration of a non-rigid multi-camera fringe projection system

    NASA Astrophysics Data System (ADS)

    Stavroulakis, Petros I.; Chen, Shuxiao; Sims-Waterhouse, Danny; Piano, Samanta; Southon, Nicholas; Bointon, Patrick; Leach, Richard

    2017-06-01

    In non-rigid fringe projection 3D measurement systems, where either the camera or projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate [1]. In fringe projection systems, it is common to use methods developed initially for photogrammetry to calibrate the camera(s) in the system in terms of extrinsic and intrinsic parameters. To calibrate the projector(s), an extra correspondence between a pre-calibrated camera and an image created by the projector is performed. These recalibration steps are usually time consuming and involve measuring calibrated patterns on planes before the object itself can be measured again after a camera or projector has been moved in the setup; hence they do not facilitate fast 3D measurement of objects when frequent changes to the experimental setup are necessary. By employing and combining a priori information via inverse rendering, on-board sensors, deep learning and a graphics processing unit (GPU), we assess a fine camera pose estimation method which is based on optimising the rendering of a model of the scene and the object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.

  10. Timing generator of scientific grade CCD camera and its implementation based on FPGA technology

    NASA Astrophysics Data System (ADS)

    Si, Guoliang; Li, Yunfei; Guo, Yongfei

    2010-10-01

    The functions of the timing generator of a scientific-grade CCD camera are briefly presented: it generates various kinds of pulse sequences for the TDI-CCD, the video processor and the imaging data output, acting as the synchronous coordinator for timing in the CCD imaging unit. The IL-E2 TDI-CCD sensor produced by DALSA Co. Ltd. is used in the scientific-grade CCD camera. The driving schedules of the IL-E2 TDI-CCD sensor have been examined in detail, and the timing generator has been designed for the camera accordingly. An FPGA is chosen as the hardware design platform, and the schedule generator is described in VHDL. The designed generator has successfully passed functional simulation with EDA software and has been fitted into an XC2VP20-FF1152 (an FPGA product made by Xilinx). The experiments indicate that the new method improves the level of integration of the system. High reliability, stability and low power consumption of the scientific-grade CCD camera system are achieved, and at the same time the design and experiment period is sharply shortened.

  11. Projective rectification of infrared images from air-cooled condenser temperature measurement by using projection profile features and cross-ratio invariability.

    PubMed

    Xu, Lijun; Chen, Lulu; Li, Xiaolu; He, Tao

    2014-10-01

    In this paper, we propose a projective rectification method for infrared images obtained from the measurement of temperature distribution on an air-cooled condenser (ACC) surface by using projection profile features and cross-ratio invariability. In the research, the infrared (IR) images acquired by the four IR cameras utilized are distorted to different degrees. To rectify the distorted IR images, the sizes of the acquired images are first enlarged by means of bicubic interpolation. Then, uniformly distributed control points are extracted in the enlarged images by constructing quadrangles with detected vertical lines and detected or constructed horizontal lines. The corresponding control points in the anticipated undistorted IR images are extracted by using projection profile features and cross-ratio invariability. Finally, a third-order polynomial rectification model is established and the coefficients of the model are computed with the mapping relationship between the control points in the distorted and anticipated undistorted images. Experimental results obtained from an industrial ACC unit show that the proposed method performs much better than any previous method we have adopted. Furthermore, all rectified images are stitched together to obtain a complete image of the whole ACC surface with a much higher spatial resolution than that obtained by using a single camera, which is not only useful but also necessary for more accurate and comprehensive analysis of ACC performance and more reliable optimization of ACC operations.
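
    The cross-ratio invariability the method relies on is the fact that the cross-ratio of four collinear points is unchanged by any projective transformation, which is what allows the corresponding control points to be located in the anticipated undistorted image. A minimal numerical illustration (with hypothetical map coefficients, not the paper's calibration data):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points given by
    scalar coordinates along their common line."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def projective(x, coeffs=(2.0, 1.0, 0.5, 3.0)):
    """A 1-D projective map x -> (p*x + q) / (r*x + s)."""
    p, q, r, s = coeffs
    return (p * x + q) / (r * x + s)
```

    For the points 0, 1, 2, 4 the cross-ratio is 1.5, and it is identical (up to rounding) after all four points are passed through `projective`, as the invariance guarantees.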

  12. Long-term effects of prenatal exposure to low levels of gamma rays on open-field activity in male mice.

    PubMed

    Minamisawa, T; Hirokaga, K

    1995-11-01

    The open-field activity of first-generation (F1) hybrid male C57BL/6 x C3H mice irradiated with gamma rays on day 14 of gestation was studied at the following ages: 6-7 months (young), 12-13 months (adult) and 19-20 months (old). Doses were 0.5 Gy or 1.0 Gy. Open-field activity was recorded with a camera. The camera output signal was recorded every second through an A/D converter to a personal computer. The field was divided into 25 8-cm2 units. All recordings were continuous for 60 min. The walking speed of the 1.0-Gy group recorded at 19-20 months was higher than that for the comparably aged control group. The time which the irradiated group, recorded at 19-20 months, spent in the corner fields was high in comparison with the control group at the same age. Conversely, the time spent by the irradiated group in the middle fields when recorded at 19-20 months was shorter than in the comparably aged control group. No effect of radiation was shown for any of the behaviors observed and recorded at 6-7 and 12-13 months. The results demonstrate that such exposure to gamma rays on day 14 of gestation results in behavioral changes which occur at 19-20 months but not at 6-7 or 12-13 months.

  13. Robotic Vehicle Communications Interoperability

    DTIC Science & Technology

    1988-08-01

    This report tabulates, across vehicle classes, robotic vehicle control functions -- engine starter (cold start), fire suppression, fording control, fuel control, fuel tank selector, garage toggle, gear selector and hazard warning -- and electro-optic sensor options: sensor switch, video, radar, IR thermal imaging system, image intensifier, laser ranger, and a video camera selector (forward, stereo, rear) with sensor control.

  14. Nonholonomic camera-space manipulation using cameras mounted on a mobile base

    NASA Astrophysics Data System (ADS)

    Goodwine, Bill; Seelinger, Michael J.; Skaar, Steven B.; Ma, Qun

    1998-10-01

    The body of work called `Camera Space Manipulation' is an effective and proven method of robotic control. Essentially, this technique identifies and refines the input-output relationship of the plant using estimation methods and drives the plant open-loop to its target state. 3D `success' of the desired motion, i.e., the end effector of the manipulator engages a target at a particular location with a particular orientation, is guaranteed when there is camera space success in two cameras which are adequately separated. Very accurate, sub-pixel positioning of a robotic end effector is possible using this method. To date, however, most efforts in this area have primarily considered holonomic systems. This work addresses the problem of nonholonomic camera space manipulation by considering the problem of a nonholonomic robot with two cameras and a holonomic manipulator on board the nonholonomic platform. While perhaps not as common in robotics, such a combination of holonomic and nonholonomic degrees of freedom is ubiquitous in industry: fork lifts and earth-moving equipment are common examples of a nonholonomic system with an on-board holonomic actuator. The nonholonomic nature of the system makes the automation problem more difficult for a variety of reasons; in particular, the target location is not fixed in the image planes, as it is for holonomic systems (since the cameras are attached to a moving platform), and there is a fundamental `path dependent' nature to nonholonomic kinematics. This work focuses on the sensor space or camera-space-based control laws necessary for effectively implementing an autonomous system of this type.

  15. Backing collisions: a study of drivers' eye and backing behaviour using combined rear-view camera and sensor systems.

    PubMed

    Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe

    2010-04-01

    Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 years result in death. Technology for assisting drivers when backing has limited success in preventing backing crashes. Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor camera system, controls were not. Three crash scenarios were introduced. The setting was a parking facility at UMass Amherst, USA. Subjects were 46 drivers (33 men, 13 women), average age 29 years, who were Massachusetts residents licensed within the USA for an average of 9.3 years. Interventions: Vehicles equipped with a rear-view camera and sensor system-based parking aid. Outcome measures were the subject's eye fixations while driving and the researcher's observation of collisions with objects during backing. Only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system.

  16. Backing collisions: a study of drivers’ eye and backing behaviour using combined rear-view camera and sensor systems

    PubMed Central

    Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe

    2012-01-01

    Context Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 years result in death. Technology for assisting drivers when backing has had limited success in preventing backing crashes. Objectives Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? Design 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor camera system; controls were not. Three crash scenarios were introduced. Setting Parking facility at UMass Amherst, USA. Subjects 46 drivers (33 men, 13 women), average age 29 years, who were Massachusetts residents licensed within the USA for an average of 9.3 years. Interventions Vehicles equipped with a rear-view camera and sensor system-based parking aid. Main Outcome Measures Subjects' eye fixations while driving and researchers' observation of collisions with objects during backing. Results Only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. Conclusions This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system. PMID:20363812

  17. Viewpoint matters: objective performance metrics for surgeon endoscope control during robot-assisted surgery.

    PubMed

    Jarc, Anthony M; Curet, Myriam J

    2017-03-01

    Effective visualization of the operative field is vital to surgical safety and education. However, additional metrics for visualization are needed to complement other common measures of surgeon proficiency, such as time or errors. Unlike other surgical modalities, robot-assisted minimally invasive surgery (RAMIS) enables data-driven feedback to trainees through measurement of camera adjustments. The purpose of this study was to validate and quantify the importance of novel camera metrics during RAMIS. New (n = 18), intermediate (n = 8), and experienced (n = 13) surgeons completed 25 virtual reality simulation exercises on the da Vinci Surgical System. Three camera metrics were computed for all exercises and compared to conventional efficiency measures. Both camera metrics and efficiency metrics showed construct validity (p < 0.05) across most exercises (camera movement frequency 23/25, camera movement duration 22/25, camera movement interval 19/25, overall score 24/25, completion time 25/25). Camera metrics differentiated new and experienced surgeons across all tasks as well as efficiency metrics. Finally, camera metrics significantly (p < 0.05) correlated with completion time (camera movement frequency 21/25, camera movement duration 21/25, camera movement interval 20/25) and overall score (camera movement frequency 20/25, camera movement duration 19/25, camera movement interval 20/25) for most exercises. We demonstrate construct validity of novel camera metrics and correlation between camera metrics and efficiency metrics across many simulation exercises. We believe camera metrics could be used to improve RAMIS proficiency-based curricula.
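
    As a rough illustration of how such metrics can be derived from a log of camera-movement events (the exact definitions used in the study are not given in the abstract, so the formulas below are assumptions):

```python
# Hedged sketch (metric definitions assumed, not taken from the paper):
# derive three camera metrics from a list of camera-movement intervals
# recorded during one exercise.

def camera_metrics(presses, task_duration_s):
    """presses: list of (start_s, end_s) camera-movement intervals."""
    frequency = len(presses) / task_duration_s                # moves per second
    duration = sum(e - s for s, e in presses) / len(presses)  # mean move length
    gaps = [nxt[0] - cur[1] for cur, nxt in zip(presses, presses[1:])]
    interval = sum(gaps) / len(gaps)                          # mean gap between moves
    return frequency, duration, interval

presses = [(2.0, 3.0), (10.0, 11.5), (20.0, 22.0)]
f, d, i = camera_metrics(presses, task_duration_s=60.0)
print(f, d, i)  # 0.05 1.5 7.75
```

    The correlation result in the abstract is then plausible by construction: fewer, longer-spaced camera moves tend to accompany shorter completion times in proficient operators.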

  18. ARNICA: the Arcetri Observatory NICMOS3 imaging camera

    NASA Astrophysics Data System (ADS)

    Lisi, Franco; Baffa, Carlo; Hunt, Leslie K.

    1993-10-01

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near-infrared bands between 1.0 and 2.5 micrometers that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1" per pixel, with sky coverage of more than 4' X 4' on the NICMOS 3 (256 X 256 pixels, 40 micrometers on a side) detector array. The optical path is compact enough to be enclosed in a 25.4-cm diameter dewar; the working temperature is 76 K. The camera is remotely controlled by a 486 PC, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the 486 PC, acquires and stores the frames and controls the timing of the array. We give an estimate of performance, in terms of sensitivity for an assigned observing time, along with some details on the main parameters of the NICMOS 3 detector.

  19. Rocket Spectroheliograph for the Mg II Line at 2802.7 A.

    PubMed

    Fredga, K

    1969-02-01

    A rocket-borne spectroheliograph designed to take monochromatic pictures of the sun in the Mg II line at 2802.7 A is described in detail. The photographic system consists of a Questar telescope, a Solc-type birefringent filter, and an automatic Robot camera. The double Solc filter has a spectral bandwidth of 3.5 A. The two units in the double filter have been thoroughly tested and are compared with theoretically calculated transmission curves. Two new types of linear film polarizers for the UV region have been tested and used in the filter. A temperature control unit was developed which stabilized the filter temperature in flight to within +/-0.2 degrees C. The instrument has been tested in vacuum and to the Aerobee 150 vibration specifications. It has been flown and successfully recovered three times and performed excellently during each flight.

  20. Protein-crystal growth experiment (planned)

    NASA Technical Reports Server (NTRS)

    Fujita, S.; Asano, K.; Hashitani, T.; Kitakohji, T.; Nemoto, H.; Kitamura, S.

    1988-01-01

    To evaluate the effectiveness of a microgravity environment on protein crystal growth, a system was developed using a 5-cubic-foot Get Away Special payload canister. In the experiment, protein (myoglobin) will be simultaneously crystallized from an aqueous solution in 16 crystallization units using three types of crystallization methods, i.e., batch, vapor diffusion, and free interface diffusion. Each unit has two compartments: one for the protein solution and the other for the ammonium sulfate solution. The compartments are separated by thick acrylic or thin stainless steel plates. Crystallization will be started by sliding out the plates and will then be periodically recorded for up to 120 hours by a still camera. The temperature will be passively controlled by a phase-transition thermal storage component and recorded in IC memory throughout the experiment. The microgravity environment can then be evaluated for protein crystal growth by comparing crystallization in space with that on Earth.

  1. An attentive multi-camera system

    NASA Astrophysics Data System (ADS)

    Napoletano, Paolo; Tisato, Francesco

    2014-03-01

    Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be reviewed by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying video from each camera in the network. However, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of one video camera from all the available content in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe this is the first work that uses a model of human visual attention for dynamic selection of the camera view in a multi-camera system. The proposed method has been evaluated in a given scenario and demonstrated its effectiveness with respect to other methods and manually generated ground truth. Effectiveness was evaluated in terms of the number of correct best views generated by the method with respect to the camera views manually selected by a human operator.

  2. Wearable camera-derived microenvironments in relation to personal exposure to PM2.5.

    PubMed

    Salmon, Maëlle; Milà, Carles; Bhogadi, Santhi; Addanki, Srivalli; Madhira, Pavitra; Muddepaka, Niharika; Mora, Amaravathi; Sanchez, Margaux; Kinra, Sanjay; Sreekanth, V; Doherty, Aiden; Marshall, Julian D; Tonne, Cathryn

    2018-05-17

    Data regarding which microenvironments drive exposure to air pollution in low- and middle-income countries are scarce. Our objective was to identify sources of time-resolved personal PM2.5 exposure in peri-urban India using wearable camera-derived microenvironmental information. We conducted a panel study with up to 6 repeated non-consecutive 24 h measurements on 45 participants (186 participant-days). Camera images were manually annotated to derive visual concepts indicative of microenvironments and activities. Men had slightly higher daily mean PM2.5 exposure (43 μg/m³) compared to women (39 μg/m³). Cameras helped identify that men also had higher exposures when near a biomass cooking unit (mean (sd) μg/m³: 119 (383) for men vs 83 (196) for women) and when present in the kitchen (133 (311) for men vs 48 (94) for women). Visual concepts associated in regression analysis with higher 5-minute PM2.5 for both sexes included: smoking (+93% (95% confidence interval: 63%, 129%) in men, +29% (95% CI: 2%, 63%) in women), biomass cooking unit (+57% (95% CI: 28%, 93%) in men, +69% (95% CI: 48%, 93%) in women), visible flame or smoke (+90% (95% CI: 48%, 144%) in men, +39% (95% CI: 6%, 83%) in women), and presence in the kitchen (+49% (95% CI: 27%, 75%) in men, +14% (95% CI: 7%, 20%) in women). Our results indicate wearable cameras can provide objective, high time-resolution microenvironmental data useful for identifying peak exposures and providing insights not evident using standard self-reported time-activity data. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
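
    Percent differences of this form are consistent with a regression on log-transformed PM2.5; as a hedged sketch (the model form is an assumption, not stated in the abstract), a coefficient b converts to a percent change as 100*(exp(b) - 1):

```python
# Assumed model form: if PM2.5 is log-transformed in the regression, a
# coefficient b for a visual concept converts to a percent change in the
# geometric mean as 100 * (exp(b) - 1). A sketch, not the authors' code.
import math

def percent_change(b):
    return 100.0 * (math.exp(b) - 1.0)

# Illustrative back-calculation: a coefficient of ln(1.93) ~ 0.657
# corresponds to the reported +93% for smoking among men.
print(round(percent_change(math.log(1.93)), 1))  # 93.0
```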

  3. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    NASA Astrophysics Data System (ADS)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.

  4. Use of an Acoustic Camera to Evaluate the Performance of Tickler Chains and Draghead Deflectors for Sea Turtle Protection during Hopper Dredging in the United States of America

    DTIC Science & Technology

    2018-05-01

    performance nor effectiveness in protecting sea turtles has been documented. This study was the first step in evaluating TTC as a potential replacement for...draghead turtle deflectors. The primary objective was to evaluate and document operational performance of this technology, not effectiveness of reducing...incidental take. TTC operational performance was monitored using underwater camera systems over a short period of time whereas effectiveness for

  5. Astronauts Koichi Wakata (left) and Daniel T. Barry check the settings on a 35mm camera during an

    NASA Technical Reports Server (NTRS)

    1996-01-01

    STS-72 TRAINING VIEW --- Astronauts Koichi Wakata (left) and Daniel T. Barry check the settings on a 35mm camera during an STS-72 training session. Wakata is a mission specialist, representing Japan's National Space Development Agency (NASDA) and Barry is a United States astronaut assigned as mission specialist for the same mission. The two are on the aft flight deck of the fixed base Shuttle Mission Simulator (SMS) at the Johnson Space Center (JSC).

  6. MS Grunsfeld wearing EMU in Airlock

    NASA Image and Video Library

    2002-03-08

    STS109-E-5721 (8 March 2002) --- Astronaut John M. Grunsfeld, STS-109 payload commander, attired in the extravehicular mobility unit (EMU) space suit, is in the Space Shuttle Columbia’s airlock. Grunsfeld and Richard M. Linnehan, mission specialist, were about to participate in STS-109’s fifth space walk. Activities for EVA-5 centered around the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS) to install a Cryogenic Cooler and its Cooling System Radiator. The image was recorded with a digital still camera.

  7. Predicting Sets and Lists: Theory and Practice

    DTIC Science & Technology

    2015-01-01

    school. No work stands in isolation and this work would not have been possible without my co-authors: • “Contextual Optimization of Lists”: Tommy Liu... IMU Microstrain 3DM-GX3-25 PlayStation Eye camera (640x480 @ 30Hz) Onboard ARM-based Linux computer PlayStation Eye camera (640x480 @ 30Hz) Bumblebee...of the IMU integrated in the Ardupilot unit, we added a Microstrain 3DM-GX3-25 IMU which is used to aid real time pose estimation. There are two

  8. MS Grunsfeld wearing EMU in Airlock joined by MS Newman and Massimino

    NASA Image and Video Library

    2002-03-08

    STS109-E-5722 (8 March 2002) --- Astronaut John M. Grunsfeld (center), STS-109 payload commander, attired in the extravehicular mobility unit (EMU) space suit, is photographed with astronauts James H. Newman (left) and Michael J. Massimino, both mission specialists, prior to the fifth space walk. Activities for EVA-5 centered around the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS) to install a Cryogenic Cooler and its Cooling System Radiator. The image was recorded with a digital still camera.

  9. Earth elevation map production and high resolution sensing camera imaging analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai

    2010-11-01

    The Earth's digital elevation, which impacts space camera imaging, has been prepared and its effect on imaging analyzed. Starting from the image-motion velocity matching error permitted by the TDI CCD integration stages, the Monte Carlo method is used to compute the distribution histogram of the Earth's elevation within an image-motion compensation model that includes satellite attitude changes, orbital angular rate changes, latitude, longitude, and orbital inclination changes. Elevation information for the Earth's surface is then read from SRTM data. The Earth elevation map produced for aerospace electronic cameras is compressed and spliced, so that elevation data can be fetched from flash memory according to the latitude and longitude of the imaging point. If the requested point falls between two stored samples, the lookup uses linear interpolation, which adequately accommodates rugged mountain and hill terrain. Finally, a deviation framework and the camera controller are used to test the effect of deviation angle errors, and a TDI CCD camera simulation system, with a model mapping material points to image points, is used to analyze the imaging MTF and a cross-correlation similarity measure. The simulation accumulates the horizontal and vertical pixel offsets of the TDI CCD image to simulate camera imaging as satellite attitude stability changes. The approach is practical: it effectively limits the camera's memory requirements while letting the TDI CCD camera match the image-motion velocity and image with good precision.
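
    The elevation lookup described above (read from flash by latitude/longitude, interpolating linearly between stored samples) can be sketched as follows; the grid layout and sample spacing are illustrative assumptions:

```python
# Minimal sketch of the lookup the abstract describes: stored elevation
# samples are indexed by latitude/longitude, and a request that falls
# between two samples is answered by linear interpolation. Grid layout
# and sample spacing here are illustrative assumptions.

def interp_elevation(samples, spacing_deg, lat0, lat):
    """1-D linear interpolation along one grid axis."""
    x = (lat - lat0) / spacing_deg          # fractional index into samples
    i = int(x)
    if i >= len(samples) - 1:               # clamp at the grid edge
        return samples[-1]
    frac = x - i
    return samples[i] * (1.0 - frac) + samples[i + 1] * frac

row = [100.0, 180.0, 260.0, 220.0]          # elevations (m) at 1-degree steps
print(interp_elevation(row, 1.0, 0.0, 0.5))  # 140.0
print(interp_elevation(row, 1.0, 0.0, 2.5))  # 240.0
```

    A real implementation would interpolate bilinearly in both latitude and longitude; one axis is enough to show the idea.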

  10. Single chip camera device having double sampling operation

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Nixon, Robert (Inventor)

    2002-01-01

    A single chip camera device is formed on a single substrate, including an image acquisition portion, a control portion, and a timing circuit formed on the substrate. The timing circuit also controls the photoreceptors in a double sampling mode, in which a reset level is first read and then, after an integration time, a charge level is read.
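
    The double-sampling readout amounts to correlated double sampling: subtracting the reset level from the charge level cancels the per-pixel reset offset. A minimal sketch with illustrative values:

```python
# Sketch of the double-sampling readout the patent abstract describes:
# the reset level is read first, the charge level after the integration
# time; subtracting the two cancels per-pixel reset offset (correlated
# double sampling). Values are illustrative.

def double_sample(reset_level, signal_level):
    """Return the offset-corrected pixel value."""
    return signal_level - reset_level

# Two pixels with different reset offsets but the same true signal:
print(double_sample(12, 112))  # 100
print(double_sample(30, 130))  # 100
```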

  11. Two Persons with Multiple Disabilities Use Camera-Based Microswitch Technology to Control Stimulation with Small Mouth and Eyelid Responses

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Lang, Russell

    2012-01-01

    Background: A camera-based microswitch technology was recently developed to monitor small facial responses of persons with multiple disabilities and allow those responses to control environmental stimulation. This study assessed such a technology with 2 new participants using slight variations of previous responses. Method: The technology involved…

  12. Context-based handover of persons in crowd and riot scenarios

    NASA Astrophysics Data System (ADS)

    Metzler, Jürgen

    2015-02-01

    In order to control riots in crowds, it is helpful to get ringleaders under control and pull them out of the crowd if one has become an offender. A great support in achieving these tasks is the capability of observing the crowd and ringleaders automatically using cameras, which also allows better conservation of evidence in riot control. A ringleader who has become an offender should be tracked across and recognized by several cameras, regardless of whether the cameras' fields of view overlap. We propose a context-based approach for handover of persons between different camera fields of view. This approach can be applied to overlapping as well as non-overlapping fields of view, so that fast and accurate identification of individual persons in camera networks is feasible. Within the scope of this paper, the approach is applied to a handover of persons between single images without any temporal information. It is particularly developed for semiautomatic video editing and handover of persons between cameras in order to improve conservation of evidence. The approach has been developed on a dataset collected during a Crowd and Riot Control (CRC) training of the German armed forces. It consists of three different levels of escalation: first, the crowd started with a peaceful demonstration; later, there were violent protests; and third, the riot escalated and offenders bumped into the chain of guards. One result of the work is a reliable context-based method for person re-identification between single images of different camera fields of view in crowd and riot scenarios. Furthermore, a qualitative assessment shows that the use of contextual information can additionally support this task. It can decrease the time needed for handover and the number of confusions, which supports the conservation of evidence in crowd and riot scenarios.

  13. ICE stereocamera system - photogrammetric setup for retrieval and analysis of small scale sea ice topography

    NASA Astrophysics Data System (ADS)

    Divine, Dmitry; Pedersen, Christina; Karlsen, Tor Ivan; Aas, Harald; Granskog, Mats; Renner, Angelika; Spreen, Gunnar; Gerland, Sebastian

    2013-04-01

    A new thin-ice Arctic paradigm requires reconsideration of the set of parameterizations of mass and energy exchange within the ocean-sea-ice-atmosphere system used in modern CGCMs. Such a reassessment would require a comprehensive collection of measurements made specifically on first-year pack ice, with a focus on the summer melt season, when the difference from typical conditions for the earlier multi-year Arctic sea ice cover becomes most pronounced. Previous in situ studies have demonstrated the crucial importance of smaller-scale (i.e., less than 10 m) surface topography features for the seasonal evolution of pack ice. During 2011-2012, NPI developed a helicopter-borne ICE stereocamera system intended for mapping sea ice surface topography and aerial photography. The hardware component of the system comprises two Canon 5D Mark II cameras, a combined GPS/INS unit by "Novatel", and a laser altimeter mounted in a single enclosure outside the helicopter. The unit is controlled by a PXI chassis mounted inside the helicopter cabin. The ICE stereocamera system was deployed for the first time during the 2012 summer field season. The hardware setup has proven to be highly reliable and was used in about 30 helicopter flights over Arctic sea ice during July-September. Being highly automated, it required minimal human supervision during in-flight operation. The camera system was mostly deployed in combination with the EM-bird, which measures sea-ice thickness; this combination provides an integrated view of the sea ice cover along the flight track. During flight, the cameras shot sequentially at an interval of 1 second to ensure sufficient overlap between subsequent images. Some 35,000 images of the sea ice/water surface captured per camera add up to 6 TB of data collected during the first field season. The reconstruction of the digital elevation model of the sea ice surface will be done using SOCET SET commercial software. Refraction at the water/air interface can also be taken into account, providing valuable data on melt pond coverage, depth, and bottom topography, the primary goals for the system at its present stage. Preliminary analysis of the reconstructed 3D scenes of ponded first-year ice for selected sites has shown good agreement with in situ measurements, demonstrating the good scientific potential of the ICE stereocamera system.

  14. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the distinctions between camera types have become somewhat blurred, with a large presence of 'digital cameras' aimed more at the home market; this latter category is not considered here. The term 'computer camera' herein means one with low-level computer (and software) control of the CCD clocking. Such cameras can often satisfy some of the more demanding machine vision tasks, in some cases at a higher measurement rate than video cameras. Several specific applications are described here, including some which use recently designed CCDs offering good combinations of parameters such as noise, speed, and resolution. Among the considerations in choosing a camera type for any given application are effects such as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For a computer camera these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  15. Uncooled infrared sensors: rapid growth and future perspective

    NASA Astrophysics Data System (ADS)

    Balcerak, Raymond S.

    2000-07-01

    Uncooled infrared cameras are now available for both the military and commercial markets. The current camera technology incorporates the fruits of many years of development focused on the details of pixel design, novel material processing, and low-noise read-out electronics. The rapid insertion of cameras into systems is testimony to the successful completion of this 'first phase' of development. In the military market, the first uncooled infrared cameras will be used for weapon sights, drivers' viewers, and helmet-mounted cameras. Major commercial applications include night driving, security, police and fire fighting, and thermography, primarily for preventive maintenance and process control. The technology for the next generation of cameras is even more demanding, but within reach. The paper outlines the technology program planned for the next generation of cameras and the approaches to further enhance performance, even to the radiation limit of thermal detectors.

  16. Attitude identification for SCOLE using two infrared cameras

    NASA Technical Reports Server (NTRS)

    Shenhar, Joram

    1991-01-01

    An algorithm is presented that incorporates real time data from two infrared cameras and computes the attitude parameters of the Spacecraft COntrol Lab Experiment (SCOLE), a lab apparatus representing an offset feed antenna attached to the Space Shuttle by a flexible mast. The algorithm uses camera position data of three miniature light emitting diodes (LEDs), mounted on the SCOLE platform, permitting arbitrary camera placement and an on-line attitude extraction. The continuous nature of the algorithm allows identification of the placement of the two cameras with respect to some initial position of the three reference LEDs, followed by on-line six degrees of freedom attitude tracking, regardless of the attitude time history. A description is provided of the algorithm in the camera identification mode as well as the mode of target tracking. Experimental data from a reduced size SCOLE-like lab model, reflecting the performance of the camera identification and the tracking processes, are presented. Computer code for camera placement identification and SCOLE attitude tracking is listed.
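
    One textbook way to recover a 3D LED position from its lines of sight in two calibrated cameras (not necessarily the algorithm in the paper) is to intersect the two rays, taking the midpoint of their closest approach; with three LEDs recovered this way, the platform's six-degree-of-freedom attitude follows from the triangle's pose:

```python
# Hedged sketch (not the SCOLE algorithm itself): recover one LED's 3D
# position from its line-of-sight rays in two calibrated cameras via
# the midpoint of the rays' closest approach.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(c1, d1, c2, d2):
    """Midpoint of closest approach between rays c1+t*d1 and c2+t*d2."""
    w0 = [p - q for p, q in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b                   # ~0 when rays are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = [p + t1 * u for p, u in zip(c1, d1)]
    p2 = [p + t2 * u for p, u in zip(c2, d2)]
    return [(x + y) / 2.0 for x, y in zip(p1, p2)]

# Synthetic check: two cameras 1 m apart, LED at (0.5, 0.5, 2.0).
led = (0.5, 0.5, 2.0)
c1, c2 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
d1 = [x - y for x, y in zip(led, c1)]       # unnormalized rays are fine
d2 = [x - y for x, y in zip(led, c2)]
print([round(v, 6) for v in triangulate(c1, d1, c2, d2)])
# [0.5, 0.5, 2.0]
```

    The identification mode in the paper solves the inverse problem, locating the cameras from known LED geometry, but it rests on the same ray geometry.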

  17. Novel low-cost vision-sensing technology with controllable exposure time for welding

    NASA Astrophysics Data System (ADS)

    Zhang, Wenzeng; Wang, Bin; Chen, Nian; Cao, Yipeng

    2005-02-01

    In the process of robot welding, the position of the welding seam and the shape of the weld pool are detected by a CCD camera for quality control and real-time seam tracking. It is difficult to always obtain a clear welding image with some welding methods, such as TIG welding. A novel idea is proposed to obtain clear welding images: the exposure time of the CCD camera is automatically controlled by the arc voltage or arc luminance. A set of special devices and circuits is added to a common industrial CCD camera so that the start and end of exposure can be flexibly controlled via the internal clearing signal of the accumulated charge. Two special vision sensors based on this idea have been developed; their exposure capture can be triggered by the arc voltage and by variations in arc luminance, respectively. Two prototypes have been designed and manufactured. Experiments show that they can stably capture clear welding images at the appointed moment, which is a basis for feedback control of automatic welding.

  18. Relative and Absolute Calibration of a Multihead Camera System with Oblique and Nadir Looking Cameras for a Uas

    NASA Astrophysics Data System (ADS)

    Niemeyer, F.; Schima, R.; Grenzdörffer, G.

    2013-08-01

    Numerous unmanned aerial systems (UAS) are currently flooding the market, with UAVs specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable while payload capacities are sufficient for many imaging sensors. A camera system with four oblique and one nadir-looking camera is currently under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as the carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences from test flights.

  19. Use of camera drive in stereoscopic display of learning contents of introductory physics

    NASA Astrophysics Data System (ADS)

    Matsuura, Shu

    2011-03-01

    Simple 3D physics simulations with stereoscopic display were created for part of an introductory physics e-learning course. First, the cameras viewing the 3D world were made controllable by the user, enabling observation of the system and the motion of objects from any position in the 3D world. Second, cameras could be attached to one of the moving objects in the simulation so as to observe the relative motion of the other objects. With this option, it was found that users perceive velocity and acceleration more sensibly on a stereoscopic display than on a non-stereoscopic 3D display. The simulations were made using Adobe Flash ActionScript, and the Papervision3D library was used to render the 3D models in the Flash web pages. To display the stereogram, two viewports from virtual cameras were shown side by side in the same web page. For observation of the stereogram, the images of the two viewports were superimposed using a 3D stereogram projection box (T&TS CO., LTD.) and projected on an 80-inch screen. The virtual cameras were controlled by keyboard and also by Nintendo Wii remote controller buttons. In conclusion, stereoscopic display offers learners more opportunities to play with the simulated models and to perceive the characteristics of motion better.

  20. STS-109 Onboard Photo of Extra-Vehicular Activity (EVA)

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This is an onboard photo of the Hubble Space Telescope (HST) power control unit (PCU), the heart of the HST's power system. STS-109 payload commander John M. Grunsfeld, joined by Astronaut Richard M. Linnehan, turned off the telescope in order to replace its PCU while participating in the third of five spacewalks dedicated to servicing and upgrading the HST. Other upgrades performed were: replacement of the solar array panels; replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. The telescope was captured and secured on a work stand in Columbia's payload bay using Columbia's robotic arm, where crew members completed the system upgrades. The Marshall Space Flight Center had the responsibility for the design, development, and construction of the HST, which is the most complex and sensitive optical telescope ever made, to study the cosmos from a low-Earth orbit. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than is visible from ground-based telescopes, perhaps as far away as 14 billion light-years. Launched March 1, 2002, the STS-109 HST servicing mission lasted 10 days, 22 hours, and 11 minutes. It was the 108th flight overall in NASA's Space Shuttle Program.

  1. Space Shuttle Projects

    NASA Image and Video Library

    2002-03-01

    This is an onboard photo of the Hubble Space Telescope (HST) power control unit (PCU), the heart of the HST's power system. STS-109 payload commander John M. Grunsfeld, joined by Astronaut Richard M. Linnehan, turned off the telescope in order to replace its PCU while participating in the third of five spacewalks dedicated to servicing and upgrading the HST. Other upgrades performed were: replacement of the solar array panels; replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. The telescope was captured and secured on a work stand in Columbia's payload bay using Columbia's robotic arm, where crew members completed the system upgrades. The Marshall Space Flight Center had the responsibility for the design, development, and construction of the HST, which is the most complex and sensitive optical telescope ever made, to study the cosmos from a low-Earth orbit. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than is visible from ground-based telescopes, perhaps as far away as 14 billion light-years. Launched March 1, 2002, the STS-109 HST servicing mission lasted 10 days, 22 hours, and 11 minutes. It was the 108th flight overall in NASA's Space Shuttle Program.

  2. Multi-Angle Snowflake Camera Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shkurko, Konstantin; Garrett, T.; Gaustad, K

    The Multi-Angle Snowflake Camera (MASC) addresses a need for high-resolution multi-angle imaging of hydrometeors in freefall with simultaneous measurement of fallspeed. As illustrated in Figure 1, the MASC consists of three cameras, separated by 36°, each pointing at an identical focal point approximately 10 cm away. Located immediately above each camera, a light aims directly at the center of depth of field for its corresponding camera. The focal point at which the cameras are aimed lies within a ring through which hydrometeors fall. The ring houses a system of near-infrared emitter-detector pairs, arranged in two arrays separated vertically by 32 mm. When hydrometeors pass through the lower array, they simultaneously trigger all cameras and lights. Fallspeed is calculated from the time it takes to traverse the distance between the upper and lower triggering arrays. The trigger electronics filter out ambient light fluctuations associated with varying sunlight and shadows. The microprocessor onboard the MASC controls the camera system and communicates with the personal computer (PC). The image data are sent via a FireWire 800 line, and fallspeed (and camera control) is sent via a Universal Serial Bus (USB) line that relies on RS232-over-USB serial conversion. See Table 1 for specific details on the MASC located at the Oliktok Point Mobile Facility on the North Slope of Alaska. The value-added product (VAP) detailed in this documentation analyzes the raw data (Section 2.0) using Python: images rely on the OpenCV image processing library, and derived aggregated statistics rely on some clever averaging. See Sections 4.1 and 4.2 for more details on what variables are computed.
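
    The fallspeed computation described above, distance between the trigger arrays divided by transit time, can be sketched as follows; the function name and timestamp convention are illustrative and not taken from the MASC firmware:

```python
# Hedged sketch of the MASC fallspeed computation: the two near-infrared
# trigger arrays are separated vertically by 32 mm, and fallspeed follows
# from the time a hydrometeor takes to cross that gap.

ARRAY_SEPARATION_M = 0.032  # vertical gap between upper and lower arrays

def fallspeed(t_upper_s: float, t_lower_s: float) -> float:
    """Fallspeed in m/s from trigger timestamps (upper array fires first)."""
    dt = t_lower_s - t_upper_s
    if dt <= 0:
        raise ValueError("lower array must trigger after the upper array")
    return ARRAY_SEPARATION_M / dt

# A 32 ms transit corresponds to 1 m/s, a plausible snowflake fallspeed.
print(fallspeed(0.000, 0.032))  # 1.0
```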

  3. HST Solar Arrays photographed by Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This view, backdropped against the blackness of space shows one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST). The scene was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  4. Effects of automated speed enforcement in Montgomery County, Maryland, on vehicle speeds, public opinion, and crashes.

    PubMed

    Hu, Wen; McCartt, Anne T

    2016-09-01

    In May 2007, Montgomery County, Maryland, implemented an automated speed enforcement program, with cameras allowed on residential streets with speed limits of 35 mph or lower and in school zones. In 2009, the state speed camera law increased the enforcement threshold from 11 to 12 mph over the speed limit and restricted school zone enforcement hours. In 2012, the county began using a corridor approach, in which cameras were periodically moved along the length of a roadway segment. The long-term effects of the speed camera program on travel speeds, public attitudes, and crashes were evaluated. Changes in travel speeds at camera sites from 6 months before the program began to 7½ years after were compared with changes in speeds at control sites in the nearby Virginia counties of Fairfax and Arlington. A telephone survey of Montgomery County drivers was conducted in Fall 2014 to examine attitudes and experiences related to automated speed enforcement. Using data on crashes during 2004-2013, logistic regression models examined the program's effects on the likelihood that a crash involved an incapacitating or fatal injury on camera-eligible roads and on potential spillover roads in Montgomery County, using crashes in Fairfax County on similar roads as controls. About 7½ years after the program began, speed cameras were associated with a 10% reduction in mean speeds and a 62% reduction in the likelihood that a vehicle was traveling more than 10 mph above the speed limit at camera sites. When interviewed in Fall 2014, 95% of drivers were aware of the camera program, 62% favored it, and most had received a camera ticket or knew someone else who had. The overall effect of the camera program in its modified form, including both the law change and the corridor approach, was a 39% reduction in the likelihood that a crash resulted in an incapacitating or fatal injury. 
Speed cameras alone were associated with a 19% reduction in the likelihood that a crash resulted in an incapacitating or fatal injury, the law change was associated with a nonsignificant 8% increase, and the corridor approach provided an additional 30% reduction over and above the cameras. This study adds to the evidence that speed cameras can reduce speeding, which can lead to reductions in speeding-related crashes and crashes involving serious injuries or fatalities.
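
    As an aside on how the reported percentages relate to the underlying logistic-regression model, a minimal sketch follows; it treats the reported reductions as changes in odds, and the coefficients are back-computed for illustration rather than the study's fitted values:

```python
import math

def percent_change_in_odds(beta: float) -> float:
    """Percent change in odds implied by a logistic-regression coefficient."""
    return (math.exp(beta) - 1.0) * 100.0

# A coefficient of ln(0.61) corresponds to the ~39% reduction reported for
# the full program; ln(0.81) to the ~19% reduction for cameras alone.
print(round(percent_change_in_odds(math.log(0.61)), 1))  # -39.0
print(round(percent_change_in_odds(math.log(0.81)), 1))  # -19.0
```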

  5. Quantification of Fugitive Methane Emissions with Spatially Correlated Measurements Collected with Novel Plume Camera

    NASA Astrophysics Data System (ADS)

    Tsai, Tracy; Rella, Chris; Crosson, Eric

    2013-04-01

    Quantification of fugitive methane emissions from unconventional natural gas (e.g., shale gas, tight sand gas) production, processing, and transport is essential for scientists, policy-makers, and the energy industry, because methane has a global warming potential of at least 21 times that of carbon dioxide over a span of 100 years [1]. Therefore, fugitive emissions reduce any environmental benefit of using natural gas instead of traditional fossil fuels [2]. Current measurement techniques involve first locating all the possible leaks and then measuring the emission of each leak. This technique is a painstaking and slow process that cannot be scaled up to the large size of the natural gas industry, in which there are at least half a million natural gas wells in the United States alone [3]. An alternative method is to calculate the emission of a plume through dispersion modeling. This method is a scalable approach since all the individual leaks within a natural gas facility can be aggregated into a single plume measurement. However, plume dispersion modeling requires additional knowledge of the distance to the source, atmospheric turbulence, and local topography, and it is a mathematically intensive process. Therefore, there is a need for an instrument capable of simple, rapid, and accurate measurements of fugitive methane emissions on a per-wellhead scale. We will present the "plume camera" instrument, which simultaneously measures methane at different spatial points or pixels. The spatial correlation between methane measurements provides spatial information on the plume, and, combined with the wind measurement collected with a sonic anemometer, allows the flux to be determined. Unlike the plume dispersion model, this approach does not require knowledge of the distance to the source and atmospheric conditions. Moreover, the instrument can fit inside a standard car such that emission measurements can be performed on a per-wellhead basis. 
In a controlled experiment with known releases from a methane tank, a 2-pixel plume camera measured 496 ± 160 sccm from a release of 650 sccm located 21 m away, and 4,180 ± 962 sccm from a release of 3,400 sccm located 49 m away. These results, along with results from a higher-pixel camera, will be discussed. Field campaign data collected with the plume camera pixels mounted onto a vehicle and driven through the natural gas fields in the Uintah Basin (Utah, United States) will also be presented, along with the limitations and advantages of the instrument. References: 1. S. Solomon, D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.). IPCC, 2007: Climate Change 2007: The Physical Science Basis of the Fourth Assessment Report. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA. 2. R.W. Howarth, R. Santoro, and A. Ingraffea. "Methane and the greenhouse-gas footprint of natural gas from shale formations." Climatic Change, 106, 679 (2011). 3. U.S. Energy Information Administration. "Number of Producing Wells." . Accessed 6 January 2013.
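
    The flux determination described above can be illustrated with a simple mass-balance sketch: each pixel samples an excess methane concentration over a cross-sectional area of the measurement plane, and the wind carries that excess through the plane. All names and numbers below are illustrative, not the instrument's actual algorithm:

```python
# Hedged sketch of a mass-balance flux estimate of the kind the plume
# camera enables: excess concentration per pixel, times the area that
# pixel represents, times the wind speed normal to the plane (from the
# sonic anemometer), summed over pixels.

def plume_flux(excess_conc, pixel_area_m2, wind_speed_m_s):
    """Flux (concentration units * m^3/s) through the measurement plane."""
    return sum(c * pixel_area_m2 * wind_speed_m_s for c in excess_conc)

# Two pixels, each viewing 2 m^2, excesses of 1.5 and 0.5 ppm, 3 m/s wind:
print(plume_flux([1.5, 0.5], 2.0, 3.0))  # 12.0 (ppm * m^3 / s)
```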

  6. A personalized food allergen testing platform on a cellphone

    PubMed Central

    Coskun, Ahmet F.; Wong, Justin; Khodadadi, Delaram; Nagi, Richie; Tey, Andrew; Ozcan, Aydogan

    2013-01-01

    We demonstrate a personalized food allergen testing platform, termed iTube, running on a cellphone that images and automatically analyses colorimetric assays performed in test tubes toward sensitive and specific detection of allergens in food samples. This cost-effective and compact iTube attachment, weighing approximately 40 grams, is mechanically installed on the existing camera unit of a cellphone, where the test and control tubes are inserted from the side and are vertically illuminated by two separate light-emitting diodes. The illumination light is absorbed by the allergen assay that is activated within the tubes, causing an intensity change in the images acquired by the cellphone camera. These transmission images of the sample and control tubes are digitally processed within 1 sec using a smart application running on the same cellphone for detection and quantification of allergen contamination in food products. We evaluated the performance of this cellphone-based iTube platform using different types of commercially available cookies, where the existence of peanuts was accurately quantified after a sample preparation and incubation time of ~20 min per test. This automated and cost-effective personalized food allergen testing tool running on cellphones can also permit uploading of test results to secure servers to create personal and/or public spatio-temporal allergen maps, which can be useful for public health in various settings. PMID:23254910
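
    The sample-versus-control readout described above can be sketched with a standard Beer-Lambert-style measure; the formula and function name are illustrative assumptions, not taken from the iTube software:

```python
# Hedged sketch of a colorimetric readout: the activated assay absorbs
# the LED illumination, so contamination is inferred from the drop in
# transmitted intensity of the sample tube relative to the control tube.
import math

def relative_absorbance(sample_mean: float, control_mean: float) -> float:
    """Absorbance of the sample tube relative to the control tube."""
    return -math.log10(sample_mean / control_mean)

# Sample tube transmits half the light of the control tube:
print(round(relative_absorbance(50.0, 100.0), 4))  # 0.301
```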

  7. Long-term changes in open field activity of male mice irradiated with low levels of gamma rays at late stage of development.

    PubMed

    Minamisawa, T; Hirokaga, K

    1996-06-01

    The open field activity of first-generation (F1) hybrid male C57BL/6 x C3H mice irradiated with gamma rays on the 14th day of gestation was studied at the following ages: 6-7 months, 12-13 months and 19-20 months. Doses were 0.1 Gy or 0.2 Gy. Open field activity was recorded with a camera. The camera output signal was recorded every second through an A/D converter to a personal computer. The field was divided into 25 units of 8 cm square. All recordings were continuous for 60 min. The time that the 0.2-Gy group, recorded at 6-7 months, spent in the 4 corner squares was high in comparison with the control group at the same age. The walking distance of the 0.1-Gy group recorded at 12-13 months was longer than that for the age-matched control group. No effect of radiation was found on any of the behaviors observed and recorded at 19-20 months. The results demonstrate that exposure to low levels of gamma rays on the 14th day of gestation results in behavioral changes, which occur at 6-7 and 12-13 months but not at 19-20 months.

  8. System selects framing rate for spectrograph camera

    NASA Technical Reports Server (NTRS)

    1965-01-01

    In this circuit, zero-order light from the incoming radiation of a spectrograph monitor is reflected to a photomultiplier to provide an error signal which controls the rate at which film is advanced and driven through the camera.

  9. Video model deformation system for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1983-01-01

    A photogrammetric closed circuit television system to measure model deformation at the National Transonic Facility is described. The photogrammetric approach was chosen because of its inherent rapid data recording of the entire object field. Video cameras are used to acquire data instead of film cameras due to the inaccessibility of cameras which must be housed within the cryogenic, high pressure plenum of this facility. A rudimentary theory section is followed by a description of the video-based system and control measures required to protect cameras from the hostile environment. Preliminary results obtained with the same camera placement as planned for NTF are presented and plans for facility testing with a specially designed test wing are discussed.

  10. Blinded evaluation of the effects of high definition and magnification on perceived image quality in laryngeal imaging.

    PubMed

    Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M

    2006-02-01

    Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. 
Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.

  11. Science observations with the IUE using the one-gyro mode

    NASA Technical Reports Server (NTRS)

    Imhoff, C.; Pitts, R.; Arquilla, R.; Shrader, Chris R.; Perez, M. R.; Webb, J.

    1990-01-01

    The International Ultraviolet Explorer (IUE) attitude control system originally included an inertial reference package containing six gyroscopes for three axis stabilization. The science instrument includes a prime and redundant Field Error Sensor (FES) camera for target acquisition and offset guiding. Since launch, four of the six gyroscopes have failed. The current attitude control system utilizes the remaining two gyros and a Fine Sun Sensor (FSS) for three axis stabilization. When the next gyro fails, a new attitude control system will be uplinked which will rely on the remaining gyro and the FSS for general three axis stabilization. In addition to the FSS, the FES cameras will be required to assist in maintaining fine attitude control during target acquisition. This has required thoroughly determining the characteristics of the FES cameras and the spectrograph aperture plate as well as devising new target acquisition procedures. The results of this work are presented.

  12. Science observations with the IUE using the one-gyro mode

    NASA Technical Reports Server (NTRS)

    Imhoff, C.; Pitts, R.; Arquilla, R.; Shrader, C.; Perez, M.; Webb, J.

    1990-01-01

    The International Ultraviolet Explorer (IUE) attitude control system originally included an inertial reference package containing six gyroscopes for three axis stabilization. The science instrument includes a prime and redundant Field Error Sensor (FES) camera for target acquisition and offset guiding. Since launch, four of the six gyroscopes have failed. The current attitude control system utilizes the remaining two gyros and a Fine Sun Sensor (FSS) for three axis stabilization. When the next gyro fails, a new attitude control system will be uplinked, which will rely on the remaining gyro and the FSS for general three axis stabilization. In addition to the FSS, the FES cameras will be required to assist in maintaining fine attitude control during target acquisition. This has required thoroughly determining the characteristics of the FES cameras and the spectrograph aperture plate as well as devising new target acquisition procedures. The results of this work are presented.

  13. The spacecraft control laboratory experiment optical attitude measurement system

    NASA Technical Reports Server (NTRS)

    Welch, Sharon S.; Montgomery, Raymond C.; Barsky, Michael F.

    1991-01-01

    A stereo camera tracking system was developed to provide a near real-time measure of the position and attitude of the Spacecraft Control Laboratory Experiment (SCOLE). The SCOLE is a mockup of a shuttle-like vehicle with an attached flexible mast and (simulated) antenna, and was designed to provide a laboratory environment for the verification and testing of control laws for large flexible spacecraft. Actuators and sensors located on the shuttle and antenna sense the states of the spacecraft and allow the position and attitude to be controlled. The stereo camera tracking system consists of two position-sensitive detector cameras which sense the locations of small infrared LEDs attached to the surface of the shuttle. Information on shuttle position and attitude is provided in six degrees of freedom. The design of this optical system, its calibration, and the tracking algorithm are described. The performance of the system is evaluated for yaw only.
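
    The stereo position measurement can be illustrated with a standard two-ray triangulation: each camera defines a ray from its optical center through the imaged LED, and the LED's 3D position is estimated at the closest approach of the two rays. The geometry and numbers below are illustrative, not the SCOLE calibration:

```python
# Hedged sketch of stereo triangulation: estimate a point as the midpoint
# of closest approach between two camera rays o + t*d (pure-Python 3D
# vector helpers; no external libraries).

def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]
def scale(a, s): return [x * s for x in a]
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def triangulate(o1, d1, o2, d2):
    """Midpoint of closest approach between rays o1 + t*d1 and o2 + s*d2."""
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = add(o1, scale(d1, t))  # closest point on ray 1
    p2 = add(o2, scale(d2, s))  # closest point on ray 2
    return scale(add(p1, p2), 0.5)

# Two cameras 1 m apart on the x-axis, both viewing a point at (0.5, 0, 1):
print(triangulate([0, 0, 0], [0.5, 0, 1], [1, 0, 0], [-0.5, 0, 1]))
```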

  14. Automatic exposure control for space sequential camera

    NASA Technical Reports Server (NTRS)

    Mcatee, G. E., Jr.; Stoap, L. J.; Solheim, C. D.; Sharpsteen, J. T.

    1975-01-01

    The final report for the automatic exposure control study for space sequential cameras, prepared for the NASA Johnson Space Center, is presented. The material is shown in the same sequence in which the work was performed. The purpose of the automatic exposure control is to automatically control the lens iris as well as the camera shutter so that the subject is properly exposed on the film. A study of design approaches is presented. Analysis of the light range covered indicates that the practical range would be from approximately 20 to 6,000 foot-lamberts, or about nine f-stops. Observation of film available from space flights shows that optimum scene illumination is apparently not present in vehicle interior photography as well as in vehicle-to-vehicle situations. The evaluation test procedure for a breadboard, and the results, which provided information for the design of a brassboard, are given.

  15. Astronaut John Young in shadow of Lunar Module behind ultraviolet camera

    NASA Image and Video Library

    1972-04-22

    AS16-114-18439 (22 April 1972) --- Astronaut Charles M. Duke Jr., lunar module pilot, stands in the shadow of the Lunar Module (LM) behind the ultraviolet (UV) camera which is in operation. This photograph was taken by astronaut John W. Young, commander, during the mission's second extravehicular activity (EVA). The UV camera's gold surface is designed to maintain the correct temperature. The astronauts set the prescribed angles of azimuth and elevation (here 14 degrees for photography of the large Magellanic Cloud) and pointed the camera. Over 180 photographs and spectra in far-ultraviolet light were obtained showing clouds of hydrogen and other gases and several thousand stars. The United States flag and Lunar Roving Vehicle (LRV) are in the left background. While astronauts Young and Duke descended in the Apollo 16 Lunar Module (LM) "Orion" to explore the Descartes highlands landing site on the moon, astronaut Thomas K. Mattingly II, command module pilot, remained with the Command and Service Modules (CSM) "Casper" in lunar orbit.

  16. Astronaut Charles M. Duke, Jr., in shadow of Lunar Module behind ultraviolet camera

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Astronaut Charles M. Duke, Jr., lunar module pilot, stands in the shadow of the Lunar Module (LM) behind the ultraviolet (UV) camera which is in operation. This photograph was taken by astronaut John W. Young, mission commander, during the mission's second extravehicular activity (EVA-2). The UV camera's gold surface is designed to maintain the correct temperature. The astronauts set the prescribed angles of azimuth and elevation (here 14 degrees for photography of the large Magellanic Cloud) and pointed the camera. Over 180 photographs and spectra in far-ultraviolet light were obtained showing clouds of hydrogen and other gases and several thousand stars. The United States flag and Lunar Roving Vehicle (LRV) are in the left background. While astronauts Young and Duke descended in the Apollo 16 Lunar Module (LM) 'Orion' to explore the Descartes highlands landing site on the Moon, astronaut Thomas K. Mattingly II, command module pilot, remained with the Command and Service Modules (CSM) 'Casper' in lunar orbit.

  17. A pixellated γ-camera based on CdTe detectors clinical interests and performances

    NASA Astrophysics Data System (ADS)

    Chambron, J.; Arntz, Y.; Eclancher, B.; Scheiber, Ch; Siffert, P.; Hage Hali, M.; Regal, R.; Kazandjian, A.; Prat, V.; Thomas, S.; Warren, S.; Matz, R.; Jahnke, A.; Karman, M.; Pszota, A.; Nemeth, L.

    2000-07-01

    A mobile gamma camera dedicated to nuclear cardiology, based on a 15 cm×15 cm detection matrix of 2304 CdTe detector elements, 2.83 mm×2.83 mm×2 mm, has been developed with European Community support to academic and industrial research centres. The intrinsic properties of the semiconductor crystals - low ionisation energy, high energy resolution, high attenuation coefficient - are potentially attractive for improving γ-camera performance. But their use as γ detectors for medical imaging at high resolution requires production of high-grade materials and large quantities of sophisticated read-out electronics. The decision was taken to use CdTe rather than CdZnTe, because the manufacturer (Eurorad, France) has extensive experience producing high-grade materials, with good homogeneity and stability, and whose transport properties, characterised by the mobility-lifetime product, are at least 5 times greater than those of CdZnTe. The detector matrix is divided into 9 square units; each unit is composed of 256 detectors divided into 16 modules. Each module consists of a thin ceramic plate holding a line of 16 detectors, in four groups of four for easy replacement, and holding a special 16-channel integrated circuit designed by CLRC (UK). Detection and acquisition logic based on a DSP card and a PC was programmed by Eurorad for spectral and counting acquisition modes. LEAP and LEHR collimators of commercial design, the mobile gantry and clinical software were provided by Siemens (Germany). The γ-camera head housing, its general mounting and the electric connections were performed by Phase Laboratory (CNRS, France). The compactness of the γ-camera head (thin detector matrix, electronic readout and collimator) facilitates the detection of close γ sources with the advantage of high spatial resolution. This equipment is intended for bedside explorations. 
There is a growing clinical requirement in nuclear cardiology for early assessment of the extent of an infarct in intensive care units, as well as in neurology to grade a cerebral vascular insult, in pregnancy to detect a pulmonary capillary embolism, or in presurgical oncology to identify sentinel lymph nodes. The physical tests and clinical imaging evaluations of the experimental device, performed by IPB (France) and SHC (Hungary), agree with the expected performance, which is better than that of a conventional cardiac γ-camera except for dynamic studies.

  18. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.

    PubMed

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H

    2015-02-01

    Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeon's point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4, the GoPro was linked to a WiFi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  19. Stroboscope Controller for Imaging Helicopter Rotors

    NASA Technical Reports Server (NTRS)

    Jensen, Scott; Marmie, John; Mai, Nghia

    2004-01-01

    A versatile electronic timing-and-control unit, denoted a rotorcraft strobe controller, has been developed for use in controlling stroboscopes, lasers, video cameras, and other instruments for capturing still images of rotating machine parts, especially helicopter rotors. This unit is designed to be compatible with a variety of sources of input shaft-angle or timing signals and to be capable of generating a variety of output signals suitable for triggering instruments characterized by different input-signal specifications. It is also designed to be flexible and reconfigurable in that it can be modified and updated through changes in its control software, without need to change its hardware. Figure 1 is a block diagram of the rotorcraft strobe controller. The control processor is a high-density complementary metal oxide semiconductor, single-chip 8-bit microcontroller. It is connected to a 32K x 8 nonvolatile static random-access memory (RAM) module. Also connected to the control processor is a 32K x 8 electrically programmable read-only memory (EPROM) module, which is used to store the control software. Digital logic support circuitry is implemented in a field-programmable gate array (FPGA). A 240 x 128-dot, 40-character, 16-line liquid-crystal display (LCD) module serves as a graphical user interface; the user provides input through a 16-key keypad mounted next to the LCD. A 12-bit digital-to-analog converter (DAC) generates a 0-to-10-V ramp output signal used as part of a rotor-blade monitoring system, while the control processor generates all the appropriate strobing signals. Optocouplers are used to isolate all input and output digital signals, and optoisolators are used to isolate all analog signals. The unit is designed to fit inside a 19-in. (48.3-cm) rack-mount enclosure. Electronic components are mounted on a custom printed-circuit board (see Figure 2). Two power-conversion modules on the printed-circuit board convert AC power to +5 VDC and 15 VDC, respectively.
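
    Mapping the 0-to-10-V ramp onto 12-bit DAC codes is standard DAC arithmetic, sketched below; the full-scale constant and clamping behavior are assumptions, not taken from the controller firmware:

```python
# Hedged sketch of driving a 12-bit DAC over a 0-to-10 V output range,
# as the controller's rotor-blade-monitoring ramp would require.

FULL_SCALE_V = 10.0
MAX_CODE = 4095  # 2**12 - 1

def dac_code(volts: float) -> int:
    """12-bit code producing (approximately) the requested output voltage."""
    v = min(max(volts, 0.0), FULL_SCALE_V)  # clamp to the output range
    return round(v / FULL_SCALE_V * MAX_CODE)

print(dac_code(0.0), dac_code(10.0))  # 0 4095
```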

  20. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    NASA Astrophysics Data System (ADS)

    Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.

    2016-06-01

    Aerial photography with unmanned aerial vehicle (UAV) systems has typically connected the UAV and its remote controls to the ground control system through a radio frequency (RF) modem operating at about 430 MHz. This RF-modem approach, however, has limitations for long-distance communication. In this work, the Smart Camera's LTE (long-term evolution), Bluetooth, and Wi-Fi links were used to build a UAV communication module, and close-range aerial photogrammetry was carried out with automatic shooting. The automatic shooting system comprises an image-capturing device for the drone over the area to be imaged and software for loading and managing the Smart Camera; it combines automatic shooting driven by the Smart Camera's sensors with shooting-catalog management of the captured images and their metadata. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using the Smart Camera as the payload for a photogrammetric UAV system. The open source tools used were Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  1. Onboard data processing and compression for a four-sensor suite: the SERENA experiment.

    NASA Astrophysics Data System (ADS)

    Mura, A.; Orsini, S.; Di Lellis, A.; Lazzarotto, F.; Barabash, S.; Livi, S.; Torkar, K.; Milillo, A.; De Angelis, E.

    2013-09-01

    SERENA (Search for Exospheric Refilling and Emitted Natural Abundances) is an instrument package that will fly on board the BepiColombo/Mercury Planetary Orbiter (MPO). The SERENA instrument includes four units: ELENA (Emitted Low Energy Neutral Atoms), a neutral particle analyzer/imager to detect ion sputtering and backscattering from Mercury's surface; STROFIO (Start from a Rotating FIeld mass spectrometer), a mass spectrometer to identify atomic masses released from the surface; and MIPA (Miniature Ion Precipitation Analyzer) and PICAM (Planetary Ion Camera), two ion spectrometers to monitor the precipitating solar wind and measure the plasma environment around Mercury. The System Control Unit architecture is such that all four sensors are connected to a high-resolution FPGA, which communicates with a dedicated high-performance data processing unit. The unpredictability of the data rate, due to the peculiarities of these investigations, leads to several possible scenarios for data compression and handling. In this study we first discuss the predicted data volume that results from the optimized operation strategy, and then report on the instrument data processing and compression.

  2. Low Cost Wireless Network Camera Sensors for Traffic Monitoring

    DOT National Transportation Integrated Search

    2012-07-01

    Many freeways and arterials in major cities in Texas are presently equipped with video detection cameras to : collect data and help in traffic/incident management. In this study, carefully controlled experiments determined : the throughput and output...

  3. Motion Imagery and Robotics Application (MIRA)

    NASA Technical Reports Server (NTRS)

    Martinez, Lindolfo; Rich, Thomas

    2011-01-01

    Objectives include: I. Prototype a camera service leveraging the CCSDS Integrated protocol stack (MIRA/SM&C/AMS/DTN): a) CCSDS MIRA Service (New). b) Spacecraft Monitor and Control (SM&C). c) Asynchronous Messaging Service (AMS). d) Delay/Disruption Tolerant Networking (DTN). II. Additional MIRA Objectives: a) Demo of Camera Control through ISS using CCSDS protocol stack (Berlin, May 2011). b) Verify that the CCSDS standards stack can provide end-to-end space camera services across ground and space environments. c) Test interoperability of various CCSDS protocol standards. d) Identify overlaps in the design and implementations of the CCSDS protocol standards. e) Identify software incompatibilities in the CCSDS stack interfaces. f) Provide redlines to the SM&C, AMS, and DTN working groups. g) Enable the CCSDS MIRA service for potential use in ISS Kibo camera commanding. h) Assist in long-term evolution of this entire group of CCSDS standards to TRL 6 or greater.

  4. Three-dimensional cinematography with control object of unknown shape.

    PubMed

    Dapena, J; Harman, E A; Miller, J A

    1982-01-01

    A technique for reconstruction of three-dimensional (3D) motion which involves a simple filming procedure but allows the deduction of coordinates in large object volumes was developed. Internal camera parameters are calculated from measurements of the film images of two calibrated crosses, while external camera parameters are calculated from the film images of points in a control object of unknown shape but with at least one known length. The control object, which encloses the volume in which the activity is to take place, is formed by a series of poles placed at unknown locations, each carrying two targets. From the internal and external camera parameters, and from the locations of the images of a point in the films of the two cameras, the 3D coordinates of the point can be calculated. Root mean square errors of the three coordinates of points in a large object volume (5m x 5m x 1.5m) were 15 mm, 13 mm, 13 mm and 6 mm, and relative errors in lengths averaged 0.5%, 0.7% and 0.5%, respectively.

  5. Performance prediction of optical image stabilizer using SVM for shaker-free production line

    NASA Astrophysics Data System (ADS)

    Kim, HyungKwan; Lee, JungHyun; Hyun, JinWook; Lim, Haekeun; Kim, GyuYeol; Moon, HyukSoo

    2016-04-01

    Recent smartphones adopt camera modules with an optical image stabilizer (OIS) to enhance imaging quality under hand-shaking conditions. However, compared to a non-OIS camera module, the cost of implementing the OIS module is still high. One reason is that the production line for the OIS camera module requires a highly precise shaker table in the final test process, which increases the unit cost of production. In this paper, we propose a framework for OIS quality prediction that is trained with a support vector machine on the following module-characterizing features: noise spectral density of the gyroscope, and optically measured linearity and cross-axis movement of the Hall sensor and actuator. The classifier was tested on an actual production line and achieved a recall rate of 88%.
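The pass/fail prediction described above can be sketched as a linear SVM over the module-characterizing features. The hinge-loss training loop below is a minimal stand-in for the paper's classifier; the feature vectors, learning-rate settings and training values are invented for illustration, not data from the production line.

```python
# Minimal linear SVM (hinge loss, sub-gradient descent) for pass/fail
# classification of OIS modules. All numbers here are illustrative
# stand-ins, not values from the paper.
import random

def train_svm(samples, labels, lr=0.05, lam=0.01, epochs=500, seed=0):
    """samples: list of feature vectors; labels: +1 (pass) / -1 (fail)."""
    rng = random.Random(seed)
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    idx = list(range(len(samples)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            x, y = samples[i], labels[i]
            margin = y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
            if margin < 1:   # inside margin: hinge-loss sub-gradient step
                w = [wj + lr * (y * xj - lam * wj) for wj, xj in zip(w, x)]
                b += lr * y
            else:            # outside margin: only regularization shrinkage
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Features (hypothetical scale): [gyro noise density, linearity error,
# cross-axis movement] -- low values = good module.
train_x = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.2], [0.9, 0.8, 0.7], [0.8, 0.9, 0.9]]
train_y = [1, 1, -1, -1]
w, b = train_svm(train_x, train_y)
print(predict(w, b, [0.15, 0.15, 0.15]))
```

In production, the same idea would be trained on features measured without the shaker table and validated against shaker-table ground truth.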

  6. Integrated inertial stellar attitude sensor

    NASA Technical Reports Server (NTRS)

    Brady, Tye M. (Inventor); Kourepenis, Anthony S. (Inventor); Wyman, Jr., William F. (Inventor)

    2007-01-01

    An integrated inertial stellar attitude sensor for an aerospace vehicle includes a star camera system, a gyroscope system, a controller system for synchronously integrating an output of said star camera system and an output of said gyroscope system into a stream of data, and a flight computer responsive to said stream of data for determining from the star camera system output and the gyroscope system output the attitude of the aerospace vehicle.

  7. University of Pennsylvania MAGIC 2010 Final Report

    DTIC Science & Technology

    2011-01-10

    ...and mapping (SLAM) techniques are employed to build a local map of the environment surrounding the robot. Readings from the two complementary LIDAR sensors... [block-diagram labels: Disrupter UGV local navigation sensors (GPS, IMU, LIDAR, cameras), laser control, localization, task planner, strategy/plan] ...various components shown in Figure 2. This is comprised of the following subsystems: Sensor UGV: mobile UGVs with LIDAR and camera sensors, GPS, and...

  8. Evaluation of stereoscopic video cameras synchronized with the movement of an operator's head on the teleoperation of the actual backhoe shovel

    NASA Astrophysics Data System (ADS)

    Minamoto, Masahiko; Matsunaga, Katsuya

    1999-05-01

    Operator performance while using a remote-controlled backhoe shovel is described for three different stereoscopic viewing conditions: direct view, fixed stereoscopic cameras connected to a helmet-mounted display (HMD), and a rotating stereo camera connected and slaved to the head orientation of a free-moving stereo HMD. Results showed that the head-slaved system provided the best performance.

  9. Automatic Exposure Iris Control (AEIC) for data acquisition camera

    NASA Technical Reports Server (NTRS)

    Mcatee, G. E., Jr.; Stoap, L. J.; Solheim, C. D.; Sharpsteen, J. T.

    1975-01-01

    A lens design capable of operating over a total range of f/1.4 to f/11.0 with through the lens light sensing is presented along with a system which compensates for ASA film speeds as well as shutter openings. The space shuttle camera system package is designed so that it can be assembled on the existing 16 mm DAC with a minimum of alteration to the camera.

  10. Preliminary Evaluation of a Commercial 360 Multi-Camera Rig for Photogrammetric Purposes

    NASA Astrophysics Data System (ADS)

    Teppati Losè, L.; Chiabrando, F.; Spanò, A.

    2018-05-01

    The research presented in this paper is focused on a preliminary evaluation of a 360° multi-camera rig: the possibilities of using the images acquired by the system in a photogrammetric workflow and for the creation of spherical images are investigated, and different tests and analyses are reported. Particular attention is dedicated to different operative approaches for the estimation of the interior orientation parameters of the cameras, from both an operative and a theoretical point of view. The consistency of the six cameras that compose the 360° system was analysed in depth, adopting a self-calibration approach in a commercial photogrammetric software solution. A 3D calibration field was designed and created, and several topographic measurements were performed in order to obtain a set of control points to enhance and control the photogrammetric process. The influence of the interior parameters of the six cameras was analysed both in the different phases of the photogrammetric workflow (reprojection errors on single tie points, dense cloud generation, geometrical description of the surveyed object, etc.) and in the stitching of the different images into a single spherical panorama (some considerations on the influence of the camera parameters on the overall quality of the spherical image are also reported in that section).

  11. Computer vision camera with embedded FPGA processing

    NASA Astrophysics Data System (ADS)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image-understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA is a medium-size device equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a hardware description language (such as VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
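As an illustration of the kind of processing such an FPGA architecture pipelines in hardware, here is a plain-Python sketch of Laplacian-of-Gaussian edge detection via zero crossings. The kernel size, sigma and test image are our own choices, not parameters from the paper.

```python
# Laplacian-of-Gaussian (LoG) edge detection sketch: convolve with a
# sampled LoG kernel, then mark sign changes (zero crossings) as edges.
import math

def log_kernel(sigma, radius):
    """Sample the (unnormalized) LoG at integer offsets."""
    k = []
    for y in range(-radius, radius + 1):
        row = []
        for x in range(-radius, radius + 1):
            r2 = x * x + y * y
            g = math.exp(-r2 / (2 * sigma * sigma))
            row.append((r2 - 2 * sigma * sigma) / sigma ** 4 * g)
        k.append(row)
    return k

def convolve(img, k):
    r = len(k) // 2
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(r, h - r):          # interior pixels only
        for x in range(r, w - r):
            out[y][x] = sum(k[dy + r][dx + r] * img[y + dy][x + dx]
                            for dy in range(-r, r + 1)
                            for dx in range(-r, r + 1))
    return out

def zero_crossings(resp):
    """Pixels where the LoG response changes sign horizontally."""
    edges = set()
    for y, row in enumerate(resp):
        for x in range(1, len(row)):
            if row[x - 1] * row[x] < 0:
                edges.add((y, x))
    return edges

# 10x10 test image: dark left half, bright right half (step edge at x=5)
img = [[0 if x < 5 else 255 for x in range(10)] for _ in range(10)]
resp = convolve(img, log_kernel(1.0, 2))
print(sorted({x for _, x in zero_crossings(resp)}))  # → [5]
```

In the camera, the same convolution would run as a systolic pipeline in the FPGA fabric, with the multi-scale aspect obtained by repeating it at several sigma values.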

  12. Earth's Radiation Belts: The View from Juno's Cameras

    NASA Astrophysics Data System (ADS)

    Becker, H. N.; Joergensen, J. L.; Hansen, C. J.; Caplinger, M. A.; Ravine, M. A.; Gladstone, R.; Versteeg, M. H.; Mauk, B.; Paranicas, C.; Haggerty, D. K.; Thorne, R. M.; Connerney, J. E.; Kang, S. S.

    2013-12-01

    Juno's cameras, particle instruments, and ultraviolet imaging spectrograph have been heavily shielded for operation within Jupiter's high-radiation environment. However, varying quantities of >1-MeV electrons and >10-MeV protons will be energetic enough to penetrate instrument shielding and be detected as transient background signatures by the instruments. The differing shielding profiles of Juno's instruments lead to differing spectral sensitivities to penetrating electrons and protons within these regimes. This presentation will discuss radiation data collected by Juno in the Earth's magnetosphere during Juno's October 9, 2013 Earth flyby (559 km altitude at closest approach). The focus will be on data from Juno's Stellar Reference Unit, Advanced Stellar Compass star cameras, and JunoCam imager acquired during coordinated proton measurements within the inner zone and during the spacecraft's inbound and outbound passages through the outer zone (L ~3-5). The background radiation signatures from these cameras will be correlated with dark-count background data collected at these geometries by Juno's Ultraviolet Spectrograph (UVS) and Jupiter Energetic Particle Detector Instrument (JEDI). Further comparison will be made to Van Allen Probes data to calibrate Juno's camera results and contribute an additional view of the Earth's radiation environment during this unique event.

  13. What are we missing? Advantages of more than one viewpoint to estimate fish assemblages using baited video

    PubMed Central

    Huveneers, Charlie; Fairweather, Peter G.

    2018-01-01

    Counting errors can bias assessments of species abundance and richness, which can affect assessments of stock structure, population structure and monitoring programmes. Many methods for studying ecology use fixed viewpoints (e.g. camera traps, underwater video), but little is known about how this biases the data obtained. In the marine realm, most studies using baited underwater video, a common method for monitoring fish and nekton, have previously assessed fishes using only a single bait-facing viewpoint. To investigate the biases stemming from using fixed viewpoints, we added cameras to cover 360° views around the units. We found similar species richness for all observed viewpoints, but the bait-facing viewpoint recorded the highest fish abundance. Sightings of infrequently seen and shy species increased with the additional cameras, and the extra viewpoints allowed the abundance estimates of highly abundant schooling species to be up to 60% higher. We specifically recommend the use of additional cameras for studies focusing on shyer species or those particularly interested in increasing the sensitivity of the method by avoiding saturation in highly abundant species. Studies may also benefit from using additional cameras to focus observation on the downstream viewpoint. PMID:29892386

  14. Feasibility study of a gamma camera for monitoring nuclear materials in the PRIDE facility

    NASA Astrophysics Data System (ADS)

    Jo, Woo Jin; Kim, Hyun-Il; An, Su Jung; Lee, Chae Young; Song, Han-Kyeol; Chung, Yong Hyun; Shin, Hee-Sung; Ahn, Seong-Kyu; Park, Se-Hwan

    2014-05-01

    The Korea Atomic Energy Research Institute (KAERI) has been developing pyroprocessing technology, in which actinides are recovered together with plutonium. There is no pure plutonium stream in the process, so it has the advantage of proliferation resistance. Tracking and monitoring of nuclear materials through the pyroprocess can significantly improve the transparency of the operation and safeguards. An inactive engineering-scale integrated pyroprocess facility, the PyRoprocess Integrated inactive DEmonstration (PRIDE) facility, was constructed to demonstrate engineering-scale processes and the integration of each unit process. The PRIDE facility may be a good test bed to investigate the feasibility of a nuclear material monitoring system. In this study, we designed a gamma camera system for nuclear material monitoring in the PRIDE facility by using a Monte Carlo simulation, and we validated the feasibility of this system. Two scenarios, according to the locations of the gamma camera, were simulated using GATE (GEANT4 Application for Tomographic Emission) version 6. A prototype gamma camera with a diverging-slat collimator was developed, and the simulated and experimental results agreed well with each other. These results indicate that a gamma camera to monitor the nuclear material in the PRIDE facility can be developed.

  15. Astronaut Russell Schweickart photographed during EVA

    NASA Image and Video Library

    1969-03-06

    AS09-19-2983 (6 March 1969) --- Astronaut Russell L. Schweickart, lunar module pilot, operates a 70mm Hasselblad camera during his extravehicular activity (EVA) on the fourth day of the Apollo 9 Earth-orbital mission. The Command and Service Modules (CSM) and Lunar Module (LM) "Spider" are docked. This view was taken from the Command Module (CM) "Gumdrop". Schweickart, wearing an Extravehicular Mobility Unit (EMU), is standing in "golden slippers" on the LM porch. On his back, partially visible, are a Portable Life Support System (PLSS) and an Oxygen Purge System (OPS). Astronaut James A. McDivitt, Apollo 9 commander, was inside the "Spider". Astronaut David R. Scott, command module pilot, remained at the controls in the CM.

  16. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    PubMed Central

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-01-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smartphone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot's actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
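The "simple analytic geometry" step can be illustrated by back-projecting a touchscreen click through a calibrated pinhole camera onto the floor plane. This is only a sketch of the general idea; the intrinsics, camera height and geometry below are made-up illustration values, not the paper's calibration.

```python
# Back-project a clicked pixel to a 3D floor point with a calibrated
# pinhole camera. Assumes the optical axis is horizontal and the floor
# is flat -- both simplifying assumptions for illustration.
def click_to_floor(u, v, f=800.0, cx=320.0, cy=240.0, cam_height=0.30):
    """(u, v): clicked pixel; f: focal length in pixels; (cx, cy):
    principal point; cam_height: camera height above the floor (m).
    Returns (X, Z): lateral offset and forward distance in metres.
    v must lie below the horizon row cy."""
    if v <= cy:
        raise ValueError("click is at or above the horizon")
    dx = (u - cx) / f      # ray direction, camera frame
    dy = (v - cy) / f      # image v grows downward, i.e. toward the floor
    t = cam_height / dy    # ray scale where it meets the floor
    return (t * dx, t * 1.0)

X, Z = click_to_floor(320, 320)   # click straight ahead, 80 px below horizon
print(round(X, 3), round(Z, 3))   # → 0.0 3.0 (3 m ahead on the optical axis)
```

A real implementation would first undistort the click with the lens distortion model obtained during calibration, then apply the same ray-plane intersection.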

  17. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    PubMed

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-03-25

    In recent years, there has been an increase in the number of mobile robots controlled by a smartphone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot's actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.

  18. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for evaluating the location accuracy of all lightning location systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This is done with a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region, SP, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth, color camera was mobile (installed in a car), but operated in a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time-stamped, allowing comparisons of events between the cameras and the LLS. The RAMMER sensor is basically composed of a computer, a Phantom high-speed camera (version 9.1) and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the surveyed GPS position of the triangulated object and the result of the visual triangulation method. Lightning return stroke positions estimated with the visual triangulation method were compared with LLS locations. Differences between solutions were not greater than 1.8 km.
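At its core, visual triangulation from two cameras reduces to intersecting two bearing lines from known station positions. The 2D sketch below conveys the idea only; the paper's full method also uses elevation angles and GPS-synchronized frames, and the positions and bearings here are illustrative, not RAMMER data.

```python
# Intersect azimuth bearings from two camera stations (2D sketch of
# visual triangulation). Positions in km; bearings in degrees clockwise
# from north (+y). All numbers below are illustrative.
import math

def triangulate(p1, az1, p2, az2):
    """Return the intersection of the two bearing rays."""
    d1 = (math.sin(math.radians(az1)), math.cos(math.radians(az1)))
    d2 = (math.sin(math.radians(az2)), math.cos(math.radians(az2)))
    # Solve p1 + t*d1 = p2 + s*d2 for t (2x2 system, Cramer's rule)
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel: no unique intersection")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two stations on the network's ~13 km average baseline, crossing bearings
x, y = triangulate((0.0, 0.0), 45.0, (13.0, 0.0), 315.0)
print(round(x, 2), round(y, 2))   # → 6.5 6.5 (flash position, km)
```

With more than two cameras, each pair yields an intersection and the spread of those intersections gives an internal accuracy estimate, analogous to the 9 m calibration figure quoted above.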

  19. Characterization of SWIR cameras by MRC measurements

    NASA Astrophysics Data System (ADS)

    Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.

    2014-05-01

    Cameras for the SWIR wavelength range are becoming more and more important because of their better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera, or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device, the achievable observation range can be calculated for every combination of target size, illumination level and weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines of MRC measurements for a visual camera. Typically, measurements are performed with a set of resolution targets (e.g. the USAF 1951 target) manufactured with different contrast values, from 50% down to less than 1%. For a given illumination level, the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first the irradiation parameters have to be given in radiometric instead of photometric units, which are limited in their use to the visible range. To do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source is necessary with appropriate emission in the SWIR range (e.g. an incandescent lamp), and the irradiance has to be measured in W/m2 instead of lux (lumen/m2). Third, the contrast values of the targets have to be recalibrated for the SWIR range because they typically differ from the values determined for the visual range. Measured MRC values of three cameras are compared to the specified performance data of the devices, and the results of a multi-band in-house designed Vis-SWIR camera system are discussed.
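Once an MRC curve has been measured, using it amounts to finding where the curve crosses the contrast of the target of interest: up to that spatial frequency the target is resolvable. The interpolation sketch below shows this lookup; the sample curve values are invented, not measurements from the paper.

```python
# Look up the finest resolvable spatial frequency on a measured MRC
# curve for a target of given contrast. Curve values are illustrative.
def max_resolvable_freq(mrc_curve, target_contrast):
    """mrc_curve: list of (spatial_frequency, minimum_contrast) pairs,
    sorted by frequency with contrast increasing monotonically.
    Linear interpolation between measured points."""
    prev_f, prev_c = mrc_curve[0]
    if target_contrast < prev_c:
        return None                 # too faint even at the lowest frequency
    for f, c in mrc_curve[1:]:
        if c >= target_contrast:
            frac = (target_contrast - prev_c) / (c - prev_c)
            return prev_f + frac * (f - prev_f)
        prev_f, prev_c = f, c
    return mrc_curve[-1][0]         # resolvable at all measured frequencies

# (frequency in cycles/mrad, minimum resolvable contrast) -- sample data
curve = [(0.5, 0.01), (1.0, 0.02), (2.0, 0.05), (4.0, 0.20), (6.0, 0.50)]
print(max_resolvable_freq(curve, 0.10))  # 10% target contrast
```

Combining this lookup with target size and atmospheric contrast attenuation then yields the achievable observation range mentioned in the abstract.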

  20. Interactive Multimedia Distance Learning (IMDL)

    DTIC Science & Technology

    1999-01-01

    ...scales to their original values. Media Toolbar: provides the instructor the ability to choose camera positions, use the whiteboard, ... on the classroom server computer. Whiteboard: activates a whiteboard associated with the IMDL system; the whiteboard is used to annotate the course... button. Media Control Panel: allows the instructor to choose a camera position, use the whiteboard, play computer video, ...

  1. A Multi-Function Guidance, Navigation and Control System for Future Earth and Space Missions

    NASA Technical Reports Server (NTRS)

    Gambino, Joel; Dennehy, Neil; Bauer, Frank H. (Technical Monitor)

    2002-01-01

    Over the past several years the Guidance, Navigation and Control Center (GNCC) at NASA's Goddard Space Flight Center (GSFC) has actively engaged in the development of advanced GN&C technology to enable future Earth and Space science missions. The Multi-Function GN&C System (MFGS) design presented in this paper represents the successful coalescence of several discrete GNCC hardware and software technology innovations into a single highly integrated, compact, low-power and low-cost unit that simultaneously provides autonomous real-time on-board attitude determination solutions and navigation solutions with accuracies that satisfy many future GSFC mission requirements. The MFGS is intended to operate as a single self-contained multifunction unit combining the functions now typically performed by a number of hardware units on a spacecraft. However, recognizing the need to satisfy a variety of future mission requirements, design provisions have been included to permit the unit to interface with a number of external remotely mounted sensors and actuators such as magnetometers, sun sensors, star cameras, reaction wheels and thrusters. The result is a highly versatile MFGS that can be configured in multiple ways to suit a range of mission-specific GN&C requirements. It is envisioned that the MFGS will perform a mission-enabling role by filling the microsat GN&C technology gap. In addition, GSFC believes that the MFGS could be employed to significantly reduce volume, power and mass requirements on conventional satellites.

  2. Economical Video Monitoring of Traffic

    NASA Technical Reports Server (NTRS)

    Houser, B. C.; Paine, G.; Rubenstein, L. D.; Parham, O. Bruce, Jr.; Graves, W.; Bradley, C.

    1986-01-01

    Data compression allows video signals to be transmitted economically on telephone circuits. Telephone lines transmit television signals to a remote traffic-control center. The lines also carry command signals from the center to the TV camera and compressor at the highway site. A video system with television cameras positioned at critical points on highways allows traffic controllers to determine visually, almost immediately, the exact cause of a traffic-flow disruption, e.g., accidents, breakdowns, or spills. Controllers can then dispatch appropriate emergency services and alert motorists to minimize traffic backups.

  3. Automated Meteor Fluxes with a Wide-Field Meteor Camera Network

    NASA Technical Reports Server (NTRS)

    Blaauw, R. C.; Campbell-Brown, M. D.; Cooke, W.; Weryk, R. J.; Gill, J.; Musci, R.

    2013-01-01

    Within NASA, the Meteoroid Environment Office (MEO) is charged to monitor the meteoroid environment in near-Earth space for the protection of satellites and spacecraft. The MEO has recently established a two-station system to calculate automated meteor fluxes in the millimeter size range. The cameras each consist of a 17 mm focal length Schneider lens on a Watec 902H2 Ultimate CCD video camera, producing a 21.7 x 16.3 degree field of view. This configuration has a red-sensitive limiting meteor magnitude of about +5. The stations are located in the southeastern USA, 31.8 kilometers apart, and are aimed at a location 90 km above a point 50 km equidistant from each station, which optimizes the common volume. Both single-station and double-station fluxes are found, each having benefits; more meteors will be detected in a single camera than will be seen in both cameras, producing a better-determined flux, but double-station detections allow for non-ambiguous shower associations and permit speed/orbit determinations. Video from the cameras is fed into Linux computers running the ASGARD (All Sky and Guided Automatic Real-time Detection) software, created by Rob Weryk of the University of Western Ontario Meteor Physics Group. ASGARD performs the meteor detection/photometry and invokes the MILIG and MORB codes to determine the trajectory, speed, and orbit of the meteor. A subroutine in ASGARD allows for approximate shower identification in single-station meteors. The ASGARD output is used in routines to calculate the flux in units of meteors/sq km/hour. The flux algorithm employed here differs from others currently in use in that it does not assume a single height for all meteors observed in the common camera volume. In the MEO system, the volume is broken up into a set of height intervals, with the collecting areas determined by the radiant of the active shower or sporadic source. The flux per height interval is summed to obtain the total meteor flux. 
    As ASGARD also computes the meteor mass from the photometry, a mass flux can also be calculated. Weather conditions in the southeastern United States are seldom ideal, which introduces the difficulty of a variable sky background. First, a weather algorithm indicates whether sky conditions are clear enough to calculate fluxes, at which point a limiting-magnitude algorithm is employed. The limiting-magnitude algorithm fits stellar magnitudes against camera intensities. The stellar limiting magnitude is derived from this fit and is easily converted to a limiting meteor magnitude for the active shower or sporadic source.
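The height-interval summation described above reduces to dividing the count in each interval by that interval's collecting area and the observing time, then summing. The sketch below shows only this bookkeeping step; the counts and radiant-dependent areas are invented round numbers, not MEO data.

```python
# Total meteor flux summed over height intervals, rather than assuming
# a single height for all meteors. All numbers are illustrative.
def total_flux(counts, areas_km2, hours):
    """counts[i]: meteors detected in height interval i;
    areas_km2[i]: collecting area of interval i for the active radiant;
    hours: clear-sky observing time. Returns meteors / km^2 / hour."""
    return sum(n / (a * hours) for n, a in zip(counts, areas_km2))

counts = [4, 10, 6]              # meteors per height interval
areas = [200.0, 250.0, 400.0]    # km^2, depends on the radiant geometry
print(total_flux(counts, areas, 5.0))
```

In the real pipeline the per-interval areas change with the radiant position over the night, so the sum is evaluated per time bin before accumulating.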

  4. Retinal flavoprotein autofluorescence as a measure of retinal health.

    PubMed

    Elner, Susan G; Elner, Victor M; Field, Matthew G; Park, Seung; Heckenlively, John R; Petty, Howard R

    2008-01-01

    To establish that increased autofluorescence of mitochondrial flavoproteins, an indicator of mitochondrial oxidative stress, correlates with retinal cell dysfunction. Retinal flavoprotein autofluorescence (FA) was imaged in humans with a fundus camera modified with 467DF8-nm excitation and 535-nm emission filters and a back-illuminated, electron-multiplying, charge-coupled device camera interfaced with a computer equipped with customized image capture software. Multiple digital images, centered on the fovea, were obtained from each eye. Histograms of pixel intensities in grayscale units were analyzed for average intensity and average curve width. Adults with diabetes mellitus, age-related macular degeneration (ARMD), central serous retinopathy, and retinal dystrophies, as well as healthy control volunteers, were imaged. Monolayers of cultured human retinal pigment epithelial (HRPE) cells, HRPE cells exposed to sublethal doses of H2O2, and HRPE cells exposed to H2O2 in the presence of antioxidants were imaged for FA using fluorescent photomicroscopy. Control patients demonstrated low levels of retinal FA, which increased progressively with age. Diabetics without visible retinopathy demonstrated increased FA levels compared to control volunteers (P < .001). Diabetics with retinopathy demonstrated significantly higher FA values than those without retinopathy (P < .04). Patients with ARMD, central serous retinopathy, or retinal dystrophies also demonstrated significantly increased FA. Compared to control RPE cells, cells oxidatively stressed with H2O2 had significantly elevated FA (P < .05), which was prevented by antioxidants (P < .05). Retinal FA is significantly increased with age and diseases known to be mediated by oxidative stress. Retinal FA imaging may provide a novel, noninvasive method of assessing retinal health and retinal dysfunction prior to retinal cell death.
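The histogram analysis above summarizes each image by an average intensity and a curve width. A natural reading of these quantities is the weighted mean and standard deviation of the pixel-intensity histogram; the sketch below uses those definitions as an assumption, since the abstract does not define them, and the sample histogram is invented.

```python
# Mean intensity and "curve width" (standard deviation) of a grayscale
# pixel-intensity histogram. Definitions and data are our assumptions.
import math

def histogram_stats(hist):
    """hist[i]: number of pixels with grayscale intensity i."""
    n = sum(hist)
    mean = sum(i * c for i, c in enumerate(hist)) / n
    var = sum(c * (i - mean) ** 2 for i, c in enumerate(hist)) / n
    return mean, math.sqrt(var)

hist = [0] * 256
hist[100], hist[110], hist[120] = 25, 50, 25   # narrow peak around 110
mean, width = histogram_stats(hist)
print(round(mean, 1), round(width, 2))   # → 110.0 7.07
```

Under this reading, increased flavoprotein autofluorescence would show up as a higher mean and, for heterogeneous retinas, a broader curve width.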

  5. A multiple camera tongue switch for a child with severe spastic quadriplegic cerebral palsy.

    PubMed

    Leung, Brian; Chau, Tom

    2010-01-01

    The present study proposed a video-based access technology that facilitated a non-contact tongue protrusion access modality for a 7-year-old boy with severe spastic quadriplegic cerebral palsy (GMFCS level 5). The proposed system featured a centre camera and two peripheral cameras to extend coverage of the frontal face view of this user for longer durations. The child participated in a descriptive case study. The participant underwent 3 months of tongue protrusion training while the multiple camera tongue switch prototype was being prepared. Later, the participant was brought back for five experiment sessions where he worked on a single-switch picture matching activity, using the multiple camera tongue switch prototype in a controlled environment. The multiple camera tongue switch achieved an average sensitivity of 82% and specificity of 80%. In three of the experiment sessions, the peripheral cameras were associated with most of the true positive switch activations. These activations would have been missed by a centre-camera-only setup. The study demonstrated proof-of-concept of a non-contact tongue access modality implemented by a video-based system involving three cameras and colour video processing.

  6. Upgraded cameras for the HESS imaging atmospheric Cherenkov telescopes

    NASA Astrophysics Data System (ADS)

    Giavitto, Gianluca; Ashton, Terry; Balzer, Arnim; Berge, David; Brun, Francois; Chaminade, Thomas; Delagnes, Eric; Fontaine, Gérard; Füßling, Matthias; Giebels, Berrie; Glicenstein, Jean-François; Gräber, Tobias; Hinton, James; Jahnke, Albert; Klepser, Stefan; Kossatz, Marko; Kretzschmann, Axel; Lefranc, Valentin; Leich, Holger; Lüdecke, Hartmut; Lypova, Iryna; Manigot, Pascal; Marandon, Vincent; Moulin, Emmanuel; de Naurois, Mathieu; Nayman, Patrick; Penno, Marek; Ross, Duncan; Salek, David; Schade, Markus; Schwab, Thomas; Simoni, Rachel; Stegmann, Christian; Steppa, Constantin; Thornhill, Julian; Toussnel, François

    2016-08-01

    The High Energy Stereoscopic System (H.E.S.S.) is an array of five imaging atmospheric Cherenkov telescopes, sensitive to cosmic gamma rays of energies between 30 GeV and several tens of TeV. Four of them started operations in 2003 and their photomultiplier tube (PMT) cameras are currently undergoing a major upgrade, with the goals of improving the overall performance of the array and reducing the failure rate of the ageing systems. With the exception of the 960 PMTs, all components inside the camera have been replaced: these include the readout and trigger electronics, the power, ventilation and pneumatic systems and the control and data acquisition software. New designs and technical solutions have been introduced: the readout makes use of the NECTAr analog memory chip, which samples and stores the PMT signals and was developed for the Cherenkov Telescope Array (CTA). The control of all hardware subsystems is carried out by an FPGA coupled to an embedded ARM computer, a modular design which has proven to be very fast and reliable. The new camera software is based on modern C++ libraries such as Apache Thrift, ØMQ and Protocol buffers, offering very good performance, robustness, flexibility and ease of development. The first camera was upgraded in 2015, the other three cameras are foreseen to follow in fall 2016. We describe the design, the performance, the results of the tests and the lessons learned from the first upgraded H.E.S.S. camera.

  7. Design and performance evaluation of a master controller for endovascular catheterization.

    PubMed

    Guo, Jin; Guo, Shuxiang; Tamiya, Takashi; Hirata, Hideyuki; Ishihara, Hidenori

    2016-01-01

    It is difficult to manipulate a flexible catheter to a target position within a patient's complicated and delicate vessels. However, few researchers have focused on controller designs that take much account of the natural catheter-manipulation skills acquired during manual catheterization. Existing catheter motion measurement methods also complicate the design of the force feedback device, and the commercially available systems are prohibitively expensive for most hospitals. This paper presents a simple and cost-effective master controller for endovascular catheterization that allows interventionalists to apply the conventional pull, push and twist of the catheter used in current practice. A catheter-sensing unit (used to measure the motion of the catheter) and a force feedback unit (used to provide a sense of resistance force) are both presented. A camera allows a contactless measurement that avoids additional friction, and force feedback in the axial direction is provided by the magnetic force generated between permanent magnets and a powered coil. Performance of the controller was evaluated by first conducting comparison experiments to quantify the accuracy of the catheter-sensing unit, and then conducting several experiments to evaluate the force feedback unit. The minimum and maximum errors of translational displacement were 0.003 mm (0.01%) and 0.425 mm (1.06%), respectively, with an average error of 0.113 mm (0.28%). For rotational angles, the minimum and maximum errors were 0.39° (0.33%) and 7.2° (6%), respectively, with an average error of 3.61° (3.01%). The force resolution was approximately 25 mN, and a maximum current of 3 A generated a force of approximately 1.5 N.
    Based on an analysis of requirements and of state-of-the-art computer-assisted and robot-assisted training systems for endovascular catheterization, a new master controller with a force feedback interface was proposed to preserve the natural endovascular catheterization skills of the interventionalists.
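
    The reported figures above imply a roughly linear coil-current-to-force mapping. A minimal sketch, assuming linearity, with the gain derived only from the reported 3 A ≈ 1.5 N point; function names are illustrative, not from the paper:

```python
# Hypothetical coil-current-to-force mapping for the force feedback unit,
# assuming a linear model F = k * I calibrated from the reported figures
# (3 A -> ~1.5 N, i.e. k ~= 0.5 N/A). Linearity is an assumption here.

FORCE_PER_AMP = 1.5 / 3.0   # N/A, from the reported maximum operating point
FORCE_RESOLUTION = 0.025    # N (~25 mN reported)

def current_for_force(force_n: float, max_current_a: float = 3.0) -> float:
    """Coil current needed to render a resistance force, clamped to the supply."""
    current = force_n / FORCE_PER_AMP
    return max(0.0, min(current, max_current_a))

def quantized_force(force_n: float) -> float:
    """Force actually rendered, limited by the ~25 mN resolution."""
    return round(force_n / FORCE_RESOLUTION) * FORCE_RESOLUTION
```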

  8. Microbolometer characterization with the electronics prototype of the IRCAM for the JEM-EUSO mission

    NASA Astrophysics Data System (ADS)

    Martín, Yolanda; Joven, Enrique; Reyes, Marcos; Licandro, Javier; Maroto, Oscar; Díez-Merino, Laura; Tomas, Albert; Carbonell, Jordi; Morales de los Ríos, J. A.; del Peral, Luis; Rodríguez-Frías, M. D.

    2014-08-01

    JEM-EUSO is a space observatory that will be attached to the Japanese module of the International Space Station (ISS) to observe the UV photon tracks produced by Ultra High Energy Cosmic Rays (UHECR) interacting with atmospheric nuclei. The observatory comprises an Atmospheric Monitoring System (AMS) to gather data about the status of the atmosphere, including an infrared camera (IRCAM) for cloud coverage and cloud-top-height detection. This paper describes the design and characterization tests of IRCAM, which is the responsibility of the Spanish JEM-EUSO Consortium. The core of IRCAM is a 640×480 microbolometer array, the ULIS 04171, sensitive to radiation in the range 7 to 14 microns. The microbolometer array has been tested using the Front End Electronics Prototype (FEEP). This custom-designed electronics corresponds to the Breadboard Model, a design built to verify the camera requirements in the laboratory. The FEEP controls the configuration of the microbolometer, digitizes the detector output, sends data to the Instrument Control Unit (ICU), and controls the microbolometer temperature to a stability of 10 mK. Furthermore, through the addition of a powerful FPGA, the FEEP allows IRCAM to preprocess images. This prototype has been characterized in the laboratories of the Instituto de Astrofisica de Canarias (IAC). The main results are shown, including the detector response as a function of scene temperature, the noise-equivalent temperature difference (NETD) and the non-uniformity correction (NUC). The thermal resolution meets the system requirements, with an NETD below 1 K even with the narrow-band filters in place, which allows the cloud temperature to be retrieved using stereo-vision algorithms.
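
    The NETD figure quoted above is typically derived from a two-temperature blackbody measurement: responsivity is the signal change per kelvin of scene temperature, and NETD is the temporal noise divided by that responsivity. A hedged sketch of that calculation; the array sizes and per-pixel averaging are illustrative, not the IAC test procedure:

```python
import numpy as np

# Illustrative NETD estimate from frame stacks taken at two blackbody
# temperatures, as in a typical microbolometer characterization.
def netd(frames_t1, frames_t2, t1_k, t2_k):
    """NETD = temporal noise / responsivity, averaged over pixels.

    frames_t1, frames_t2: (n_frames, H, W) stacks at scene temperatures
    t1_k < t2_k (kelvin). Returns NETD in kelvin.
    """
    responsivity = (frames_t2.mean(axis=0) - frames_t1.mean(axis=0)) / (t2_k - t1_k)
    noise = frames_t1.std(axis=0)          # temporal noise at the cold scene
    return float(np.mean(noise / responsivity))
```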

  9. A cost effective and high fidelity fluoroscopy simulator using the Image-Guided Surgery Toolkit (IGSTK)

    NASA Astrophysics Data System (ADS)

    Gong, Ren Hui; Jenkins, Brad; Sze, Raymond W.; Yaniv, Ziv

    2014-03-01

    The skills required for obtaining informative x-ray fluoroscopy images are currently acquired while trainees provide clinical care. As a consequence, trainees and patients are exposed to higher doses of radiation. Use of simulation has the potential to reduce this radiation exposure by enabling trainees to improve their skills in a safe environment prior to treating patients. We describe a low-cost, high-fidelity fluoroscopy simulation system. Our system enables operators to practice their skills using the clinical device and simulated x-rays of a virtual patient. The patient is represented using a set of temporal Computed Tomography (CT) images corresponding to the underlying dynamic processes. Simulated x-ray images, digitally reconstructed radiographs (DRRs), are generated from the CTs using ray-casting with customizable, machine-specific imaging parameters. To establish the spatial relationship between the CT and the fluoroscopy device, the CT is virtually attached to a patient phantom and a web camera is used to track the phantom's pose. The camera is mounted on the fluoroscope's intensifier, and the relationship between it and the x-ray source is obtained via calibration. To control image acquisition, the operator moves the fluoroscope as in normal operation; zoom, collimation and image saving are controlled using a keypad mounted alongside the device's control panel. The implementation is based on the Image-Guided Surgery Toolkit (IGSTK) and uses the graphics processing unit (GPU) for accelerated image generation. Our system was evaluated by 11 clinicians and was found to be sufficiently realistic for training purposes.
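
    The DRR generation described above integrates attenuation along rays through the CT volume. The following is a deliberately simplified parallel-ray sketch of the principle; the actual simulator uses perspective ray-casting with machine-specific parameters and runs on the GPU:

```python
import numpy as np

# Minimal DRR sketch: a parallel-ray approximation that integrates CT
# attenuation along one axis and applies Beer-Lambert attenuation.
# This illustrates only the principle, not the IGSTK-based implementation.
def drr_parallel(ct_mu, axis=0, spacing_mm=1.0):
    """Simulated x-ray image from a volume of attenuation coefficients (1/mm)."""
    line_integral = ct_mu.sum(axis=axis) * spacing_mm
    return np.exp(-line_integral)          # transmitted intensity fraction, 0..1
```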

  10. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras, and marked reference points are often unavailable in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A priori, images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras, and relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.
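
    Transforming a point cloud into the self-defined vehicle coordinate system, as described above, amounts to estimating a rigid rotation and translation from corresponding points. A minimal sketch of the standard SVD-based (Kabsch/Horn) solution, not necessarily the authors' implementation:

```python
import numpy as np

# Least-squares rigid (rotation + translation) fit between corresponding
# 3-D point sets, via the SVD-based Kabsch/Horn solution.
def fit_rigid(src, dst):
    """Return R, t such that dst ~= (R @ src.T).T + t; src, dst are Nx3."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```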

  11. 21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...

  12. 21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...

  13. 21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...

  14. 21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...

  15. Vision based control of unmanned aerial vehicles with applications to an autonomous four-rotor helicopter, quadrotor

    NASA Astrophysics Data System (ADS)

    Altug, Erdinc

    Our work proposes a vision-based stabilization and output tracking control method for a model helicopter, as part of our effort to produce a rotorcraft-based autonomous Unmanned Aerial Vehicle (UAV). Because of the desired maneuvering ability, a four-rotor helicopter has been chosen as the testbed. In previous research on flying vehicles, vision has usually been used as a secondary sensor. Unlike that work, our goal is to use visual feedback as the main sensor, responsible not only for detecting where the ground objects are but also for helicopter localization. A novel two-camera method has been introduced for estimating the full six-degrees-of-freedom (DOF) pose of the helicopter. This two-camera system consists of a pan-tilt ground camera and an onboard camera. The pose estimation algorithm is compared through simulation to other methods, such as the four-point and stereo methods, and is shown to be less sensitive to feature detection errors. Helicopters are highly unstable flying vehicles; although this is good for agility, it makes control harder. To build an autonomous helicopter, two methods of control are studied: one using a series of mode-based, feedback-linearizing controllers and the other using a back-stepping control law. Various simulations with 2D and 3D models demonstrate the implementation of these controllers. We also show global convergence of the 3D quadrotor controller even with large calibration errors or large errors on the image plane. Finally, we present initial flight experiments in which the proposed pose estimation algorithm and nonlinear control techniques were implemented on a remote-controlled helicopter. The helicopter was restricted by a tether to vertical and yaw motions and limited x and y translations.

  16. Design of a CAN bus interface for photoelectric encoder in the spaceflight camera

    NASA Astrophysics Data System (ADS)

    Sun, Ying; Wan, Qiu-hua; She, Rong-hong; Zhao, Chang-hai; Jiang, Yong

    2009-05-01

    In order to make a photoelectric encoder usable in a spaceflight camera that adopts a CAN bus as its communication method, a CAN bus interface for the photoelectric encoder is designed in this paper. The interface hardware circuit consists of the CAN bus controller SJA1000, the CAN bus transceiver TJA1050 and a single-chip microcontroller; the interface control software is written in C. A ten-meter shielded twisted-pair line is used as the transmission medium in the spaceflight camera, at a bit rate of 600 kbps. Experiments show that the photoelectric encoder with a CAN bus interface, with its advantages in reliability, real-time behavior, transfer rate and transfer distance, overcomes the communication-line shortcomings of the classical photoelectric encoder system. The system works well in an automatic measuring and control system.
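
    The CAN data field is limited to 8 bytes, so an encoder node such as the one described must pack its reading into a fixed frame layout. The layout below (a little-endian 32-bit count plus a status byte, under an illustrative identifier) is an assumption for illustration, not the paper's actual protocol:

```python
import struct

# Hypothetical packing of an encoder reading into an 8-byte CAN data field,
# as a node built around an SJA1000 controller might transmit it.
# Frame layout and identifier are assumptions, not the paper's protocol.
ENCODER_CAN_ID = 0x120   # illustrative 11-bit identifier

def pack_encoder_frame(count: int, status: int = 0) -> bytes:
    """CAN data field: uint32 position count, uint8 status, 3 pad bytes."""
    return struct.pack("<IBxxx", count & 0xFFFFFFFF, status & 0xFF)

def unpack_encoder_frame(data: bytes):
    """Inverse of pack_encoder_frame; returns (count, status)."""
    count, status = struct.unpack("<IBxxx", data)
    return count, status
```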

  17. Automated remote cameras for monitoring alluvial sandbars on the Colorado River in Grand Canyon, Arizona

    USGS Publications Warehouse

    Grams, Paul E.; Tusso, Robert B.; Buscombe, Daniel

    2018-02-27

    Automated camera systems deployed at 43 remote locations along the Colorado River corridor in Grand Canyon National Park, Arizona, are used to document sandbar erosion and deposition associated with the operations of Glen Canyon Dam. The camera systems, which can operate independently for a year or more, consist of a digital camera triggered by a separate data controller, both of which are powered by an external battery and solar panel. Analysis of images for categorical changes in sandbar size shows deposition at 50 percent or more of the monitoring sites during controlled flood releases in 2012, 2013, 2014, and 2016. The images also depict erosion of sandbars and show that erosion rates were highest in the first 3 months following each controlled flood. Erosion rates were highest in 2015, the year of highest annual dam release volume. The categorical estimates of sandbar change agree with the changes (erosion or deposition) measured by topographic surveys in 76 percent of the cases evaluated. A semiautomated method is presented for quantifying changes in sandbar area from the remote-camera images by rectifying the oblique images and segmenting the sandbar from the rest of the image. Sandbar area calculated by this method agrees with the area determined by topographic survey to within approximately 8 percent and allows sandbar area to be quantified monthly (or more frequently).
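
    Rectifying an oblique image onto a map plane, as in the semiautomated method above, is commonly done with a homography estimated from ground control correspondences. A sketch of the standard direct linear transform (DLT); the report's exact implementation is not specified here:

```python
import numpy as np

# Standard DLT estimate of the 3x3 homography mapping oblique image
# coordinates to planar map coordinates from >= 4 correspondences.
def homography_dlt(src_pts, dst_pts):
    """3x3 H with dst ~ H @ src (homogeneous); src_pts, dst_pts are Nx2, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)               # null-space vector of the design matrix
    return H / H[2, 2]

def apply_h(H, pt):
    """Map one 2-D point through the homography."""
    q = H @ np.array([pt[0], pt[1], 1.0])
    return q[:2] / q[2]
```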

  18. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    PubMed

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring because of its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node autonomously determines its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rates. Through simulations and practical experiments, we verify the effectiveness of our proposal.
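
    The mechanism couples a reaction term (driven by local target activity) with diffusion between neighbouring nodes, and maps the resulting "activator" level to a coding rate. A toy 1-D sketch of that idea, with illustrative constants and reaction term rather than the paper's model:

```python
import numpy as np

# Toy 1-D reaction-diffusion update on a ring of camera nodes: each node
# diffuses activator to its neighbours, decays, and is driven by a local
# stimulus (target activity). Constants are illustrative, not the paper's.
def step(u, stimulus, D=0.2, decay=0.1, dt=1.0):
    lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u   # discrete Laplacian (ring)
    return u + dt * (D * lap - decay * u + stimulus)

def coding_rate(u, r_min=100.0, r_max=2000.0):
    """Map activator level (clipped to [0, 1]) to a bitrate in kbps."""
    return r_min + (r_max - r_min) * np.clip(u, 0.0, 1.0)
```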

  19. The Melting of Natural Snowflakes Suspended in a Vertical Wind Tunnel.

    DTIC Science & Technology

    1982-06-01

    ...during the melting process is also recorded by cameras. A newly developed valve controls the airflow in the chamber while maintaining the air conditions... (Listed figures include 3.2, Position of Camera System within the Coldroom, and 3.3, Schematic Illustration of the Photographic System.) ...an apparatus satisfying the majority of the above-mentioned criteria. As the snowflake fell into the apparatus, it would pass a camera/stroboscope arrangement

  20. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  1. HST Solar Arrays photographed by Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This medium close-up view of one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view shows the cell side of the minus V-2 panel. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.

  2. Utilization of a Terrestrial Laser Scanner for the Calibration of Mobile Mapping Systems

    PubMed Central

    Hong, Seunghwan; Park, Ilsuk; Lee, Jisang; Lim, Kwangyong; Choi, Yoonjo; Sohn, Hong-Gyoo

    2017-01-01

    This paper proposes a practical calibration solution for estimating the boresight and lever-arm parameters of the sensors mounted on a Mobile Mapping System (MMS). On our MMS, devised for conducting the calibration experiment, three network video cameras, one mobile laser scanner, and one Global Navigation Satellite System (GNSS)/Inertial Navigation System (INS) were mounted. The geometric relationships between the three sensors were solved by the proposed calibration, considering the GNSS/INS as one unit sensor. Our solution uses the point cloud generated by a 3-dimensional (3D) terrestrial laser scanner rather than conventionally obtained 3D ground control features. With the terrestrial laser scanner, accurate and precise reference data could be produced, and the plane features corresponding to the sparse mobile laser scanning data could be determined with high precision. Furthermore, corresponding point features could be extracted from the dense terrestrial laser scanning data and the images captured by the video cameras. The boresight and lever-arm parameters were calculated with a least-squares approach, achieving precisions of 0.1 degrees for the boresight and 10 mm for the lever-arm, respectively. PMID:28264457
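
    The lever-arm portion of such a calibration can be posed as a linear least-squares problem: if the GNSS/INS provides the platform position c_i and attitude R_i at each epoch, and the sensor position p_i is known in the mapping frame, then p_i = c_i + R_i l. A toy sketch of that step only; the paper's full solution also estimates boresight angles and uses plane and point features:

```python
import numpy as np

# Toy lever-arm estimation: stack p_i = c_i + R_i @ l over epochs and
# solve the overdetermined linear system for the lever-arm l.
def solve_lever_arm(Rs, cs, ps):
    """Rs: list of 3x3 attitudes; cs, ps: lists of 3-vectors. Returns l (3,)."""
    A = np.vstack(Rs)                                  # (3N, 3)
    b = np.concatenate([p - c for p, c in zip(ps, cs)])
    l, *_ = np.linalg.lstsq(A, b, rcond=None)
    return l
```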

  3. Rapid mapping of landslide disaster using UAV- photogrammetry

    NASA Astrophysics Data System (ADS)

    Cahyono, A. B.; Zayd, R. A.

    2018-03-01

    Unmanned Aerial Vehicle (UAV) systems offer many advantages in mapping applications such as slope mapping and geohazard studies. This study utilizes a UAV system for a landslide disaster that occurred in Jombang Regency, East Java. A rotary-wing UAV was chosen because rotary-wing units are stable and can capture images easily. Aerial photographs were acquired in strips, following the standard acquisition procedure, with 60 photos taken. Ground control points surveyed with geodetic GPS and check points established with a total station were used as secondary data. The digital camera was calibrated using close-range photogrammetric software, and the recovered camera calibration parameters were then used in processing the digital images. All aerial photographs were processed using digital photogrammetric software to produce an orthophoto. The final result is a 1:1500-scale orthophoto map, processed with an SfM algorithm, with a GSD of 3.45 cm. The volume calculated from contour-line delineation is 10527.03 m3, which differs significantly from the terrestrial-method result of 964.67 m3, a difference of 8.4%.
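
    The quoted GSD follows the usual nadir-photo relation GSD = pixel pitch × flying height / focal length. A quick check with illustrative camera parameters, not necessarily those of the UAV actually used:

```python
# Back-of-envelope ground sample distance (GSD) for a nadir aerial photo.
# The parameters in the test below (1.55 um pitch, 100 m height, 4.5 mm
# focal length) are illustrative assumptions, not the study's camera.
def gsd_cm(pixel_pitch_um: float, height_m: float, focal_mm: float) -> float:
    """GSD in centimetres: pixel pitch * flying height / focal length."""
    return (pixel_pitch_um * 1e-6) * height_m / (focal_mm * 1e-3) * 100.0
```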

  4. STS-109 Mission Highlights Resource Tape

    NASA Astrophysics Data System (ADS)

    2002-05-01

    This video, Part 3 of 4, shows the activities of the STS-109 crew (Scott Altman, Commander; Duane Carey, Pilot; John Grunsfeld, Payload Commander; Nancy Currie, James Newman, Richard Linnehan, Michael Massimino, Mission Specialists) during flight days 6 and 7. The activities from other flight days can be seen on 'STS-109 Mission Highlights Resource Tape' Part 1 of 4 (internal ID 2002139471), 'STS-109 Mission Highlights Resource Tape' Part 2 of 4 (internal ID 2002137664), and 'STS-109 Mission Highlights Resource Tape' Part 4 of 4 (internal ID 2002137577). Flight day 6 features a very complicated EVA (extravehicular activity) to service the HST (Hubble Space Telescope). Astronauts Grunsfeld and Linnehan replace the HST's power control unit, disconnecting and reconnecting 36 tiny connectors. The procedure includes the HST's first-ever power down. The cleanup of spilled water from the coolant system in Grunsfeld's suit is shown. The pistol grip tool and two other space tools are also shown. On flight day 7, Newman and Massimino conduct an EVA. They replace the HST's FOC (Faint Object Camera) with the ACS (Advanced Camera for Surveys). The video ends with crew members playing in the shuttle's cabin with a model of the HST.

  5. Visual Control for Multirobot Organized Rendezvous.

    PubMed

    Lopez-Nicolas, G; Aranda, M; Mezouar, Y; Sagues, C

    2012-08-01

    This paper addresses the problem of visual control of a set of mobile robots. In our framework, the perception system consists of an uncalibrated flying camera performing an unknown general motion. The robots are assumed to undergo planar motion considering nonholonomic constraints. The goal of the control task is to drive the multirobot system to a desired rendezvous configuration relying solely on visual information given by the flying camera. The desired multirobot configuration is defined with an image of the set of robots in that configuration without any additional information. We propose a homography-based framework relying on the homography induced by the multirobot system that gives a desired homography to be used to define the reference target, and a new image-based control law that drives the robots to the desired configuration by imposing a rigidity constraint. This paper extends our previous work, and the main contributions are that the motion constraints on the flying camera are removed, the control law is improved by reducing the number of required steps, the stability of the new control law is proved, and real experiments are provided to validate the proposal.

  6. SAAO's new robotic telescope and WiNCam (Wide-field Nasmyth Camera)

    NASA Astrophysics Data System (ADS)

    Worters, Hannah L.; O'Connor, James E.; Carter, David B.; Loubser, Egan; Fourie, Pieter A.; Sickafoose, Amanda; Swanevelder, Pieter

    2016-08-01

    The South African Astronomical Observatory (SAAO) is designing and manufacturing a wide-field camera for use on two of its telescopes. The initial concept was of a prime-focus camera for the 74-inch telescope, an equatorial design made by Grubb Parsons, where it would employ a 61 mm × 61 mm detector to cover a 23 arcmin diameter field of view. However, while in the design phase, SAAO embarked on the process of acquiring a bespoke 1-metre robotic alt-az telescope with a 43 arcmin field of view, which needs a homegrown instrument suite. The prime-focus camera design was thus adapted for use on either telescope, increasing the detector size to 92 mm × 92 mm. Since the camera will be mounted on the Nasmyth port of the new telescope, it was dubbed WiNCam (Wide-field Nasmyth Camera). This paper describes both WiNCam and the new telescope. Producing an instrument that can be swapped between two very different telescopes poses some unique challenges. At the Nasmyth port of the alt-az telescope there is ample circumferential space, while on the 74-inch the available envelope is constrained by the optical footprint of the secondary, if further obscuration is to be avoided. This forces the design into a cylindrical volume of 600 mm diameter × 250 mm height. The back focal distance is tightly constrained on the new telescope, shoehorning the shutter, filter unit, guider mechanism, a 10 mm-thick window and a tip/tilt mechanism for the detector into 100 mm of depth. The iris shutter and filter wheel planned for prime focus could no longer be accommodated. Instead, a compact shutter with a thickness of less than 20 mm has been designed in-house, using a sliding-curtain mechanism to cover an aperture of 125 mm × 125 mm, while the filter wheel has been replaced with two peripheral filter cartridges (6 filters each) and a gripper to move a filter into the beam. We intend to use through-vacuum-wall PCB technology across the cryostat vacuum interface instead of traditional hermetic connector-based wiring.
This has advantages in terms of space saving and improved performance. Measures are being taken to minimise the risk of damage during an instrument change. The detector is cooled by a Stirling cooler, which can be disconnected from the cooler unit without risking damage. Each telescope has a dedicated cooler unit into which the coolant hoses of WiNCam will plug. To overcome an inherent drawback of Stirling coolers, an active vibration damper is incorporated. During an instrument change, the autoguider remains on the telescope, and the filter magazines, shutter and detector package are removed as a single unit. The new alt-az telescope, manufactured by APM-Telescopes, is a 1-metre f/8 Ritchey-Chrétien with optics by LOMO. The field flattening optics were designed by Darragh O'Donoghue to have high UV throughput and uniform encircled energy over the 100mm diameter field. WiNCam will be mounted on one Nasmyth port, with the second port available for SHOC (Sutherland High-speed Optical Camera) and guest instrumentation. The telescope will be located in Sutherland, where an existing dome is being extensively renovated to accommodate it. Commissioning is planned for the second half of 2016.

  7. Flow Interactions and Control

    DTIC Science & Technology

    2012-03-08

    ...to-Use 3-D Camera for Measurements in Turbulent Flow Fields (B. Thurow, Auburn). Conventional imaging trades off depth of field and blur, and a reduced aperture (restricted angular information) leads to low signal levels; in light-field imaging, a plenoptic camera records

  8. Monitoring eradication of European mouflon sheep from the Kahuku Unit of Hawai‘i Volcanoes National Park

    USGS Publications Warehouse

    Judge, Seth; Hess, Steven C.; Faford, Jonathan K.; Pacheco, Dexter; Leopold, Christina

    2017-01-01

    European mouflon (Ovis gmelini musimon), the world's smallest wild sheep, have proliferated and degraded fragile native ecosystems in the Hawaiian Islands through browsing, bark stripping, and trampling, including native forests within Hawai‘i Volcanoes National Park (HAVO). HAVO resource managers initiated ungulate control efforts in the 469 km2 Kahuku Unit after it was acquired in 2003. We tracked control effort and used aerial surveys in a 64.7 km2 area from 2004 to 2017, and more intensive ground surveys and camera-trap monitoring to detect the last remaining animals within a 25.9 km2 subunit after it was enclosed by fence in 2012. Aerial shooting yielded the most removals per unit effort (3.2 animals/hour), accounting for 261 animals. However, ground-based methods yielded 4,607 removals overall, 3,038 of which resulted from the assistance of volunteers. Ground shooting with dogs, intensive aerial shooting, ground sweeps, and forward-looking infrared (FLIR)-assisted shooting were necessary to find and remove the last remaining mouflon. The Judas technique, baiting, and trapping were not successful in attracting or detecting small numbers of remaining individuals. Effort expended to remove each mouflon increased nearly 15-fold during the last 3 yr of eradication effort, from 2013 to 2016. Complementary active and passive monitoring techniques allowed us to track the effectiveness of control effort and reveal the locations of small groups to staff. The effort and variety of methods required to eradicate mouflon from an enclosed unit of moderate size illustrates the difficulty of scaling up to entire populations of wild ungulates in unenclosed areas.

  9. Single-Fiber Optical Link For Video And Control

    NASA Technical Reports Server (NTRS)

    Galloway, F. Houston

    1993-01-01

    Single optical fiber carries control signals to remote television cameras and video signals from cameras. Fiber replaces multiconductor copper cable, with consequent reduction in size. Repeaters not needed. System works with either multimode- or single-mode fiber types. Nonmetallic fiber provides immunity to electromagnetic interference at suboptical frequencies and much less vulnerable to electronic eavesdropping and lightning strikes. Multigigahertz bandwidth more than adequate for high-resolution television signals.

  10. Adjustable control station with movable monitors and cameras for viewing systems in robotics and teleoperations

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1994-01-01

    Real-time video presentations are provided in the field of operator-supervised automation and teleoperation, particularly in control stations having movable cameras for optimal viewing of a region of interest in robotics and teleoperations for performing different types of tasks. Movable monitors to match the corresponding camera orientations (pan, tilt, and roll) are provided in order to match the coordinate systems of all the monitors to the operator internal coordinate system. Automated control of the arrangement of cameras and monitors, and of the configuration of system parameters, is provided for optimal viewing and performance of each type of task for each operator since operators have different individual characteristics. The optimal viewing arrangement and system parameter configuration is determined and stored for each operator in performing each of many types of tasks in order to aid the automation of setting up optimal arrangements and configurations for successive tasks in real time. Factors in determining what is optimal include the operator's ability to use hand-controllers for each type of task. Robot joint locations, forces and torques are used, as well as the operator's identity, to identify the current type of task being performed in order to call up a stored optimal viewing arrangement and system parameter configuration.

  11. UCam: universal camera controller and data acquisition system

    NASA Astrophysics Data System (ADS)

    McLay, S. A.; Bezawada, N. N.; Atkinson, D. C.; Ives, D. J.

    2010-07-01

    This paper describes the software architecture and design concepts used in the UKATC's generic camera control and data acquisition software system (UCam), which was originally developed for use with the ARC controller hardware. The ARC detector control electronics are developed by Astronomical Research Cameras (ARC) of San Diego, USA. UCam provides an alternative software solution, programmed in C/C++ and Python, that runs on a real-time Linux operating system to achieve the critical speed performance needed for high-time-resolution instrumentation. UCam is a server-based application that can be accessed remotely and easily integrated as part of a larger instrument control system. It comes with a user-friendly client application interface with several features, including a FITS header editor and support for interfacing with network devices. Support is also provided for writing automated scripts in Python or as text files. UCam has an application-centric design in which custom applications for different types of detectors and readout modes can be developed, downloaded and executed on the ARC controller. The built-in de-multiplexer can easily be reconfigured to read out any number of channels for almost any type of detector. It also supports numerous sampling modes such as CDS, Fowler, NDR and threshold-limited NDR. UCam has been developed over several years for use on many instruments, such as the Wide Field Infrared Camera (WFCAM) at UKIRT in Hawaii and the mid-IR imager/spectrometer UIST, and is also used on instruments at Subaru, Gemini and Palomar.
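
    Of the sampling modes listed, Fowler sampling averages N non-destructive reads at the start and N at the end of the integration ramp and differences them, reducing read noise by roughly √N; CDS is the N = 1 case. A minimal sketch of the technique, not UCam's implementation:

```python
import numpy as np

# Fowler (multiple correlated double) sampling: average n non-destructive
# reads at the start and n at the end of the ramp, then difference them.
def fowler(reads, n):
    """reads: (M, H, W) ramp of non-destructive frames, M >= 2n."""
    first = reads[:n].mean(axis=0)
    last = reads[-n:].mean(axis=0)
    return last - first
```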

  12. Maximising Defence Capability Through R&D: A Review of Defence Research and Development

    DTIC Science & Technology

    2007-10-01

    [Chart residue: distribution of R&D spend by DIS sector, including Armoured Fighting Vehicles, C4ISTAR 10.2%, CBRN 5.4%, Fixed Wing, General Helicopters, Other 2.0-2.4%, No DIS Sector.] ... Armoured Trials Development Unit and with operational units deployed in Iraq. The Commanding Officer of the operational unit reported that the camera ... other organisations, while high staff turnover risks a lack of awareness of background R&D sponsored previously within the IPT. As a result, IPTs

  13. Middle infrared (wavelength range: 8 μm-14 μm) 2-dimensional spectroscopy (total weight with electrical controller: 1.7 kg, total cost: less than 10,000 USD) so-called hyperspectral camera for unmanned air vehicles like drones

    NASA Astrophysics Data System (ADS)

    Yamamoto, Naoyuki; Saito, Tsubasa; Ogawa, Satoru; Ishimaru, Ichiro

    2016-05-01

    We developed a palm-size (optical unit: 73 mm × 102 mm × 66 mm), lightweight (total weight with electrical controller: 1.7 kg) middle-infrared (wavelength range: 8-14 μm) 2-dimensional spectrometer for UAVs (unmanned air vehicles) such as drones. We successfully demonstrated flights with the developed hyperspectral camera mounted on a multi-copter drone on 15 September 2015 in Kagawa prefecture, Japan. We had previously proposed 2-dimensional imaging-type Fourier spectroscopy, a near-common-path temporal phase-shift interferometer. We install a variable phase shifter on the optical Fourier-transform plane of an infinity-corrected imaging optical system. The variable phase shifter is configured with a movable mirror and a fixed mirror. The movable mirror is actuated by an impact-drive piezoelectric device (stroke: 4.5 mm, resolution: 0.01 μm, maker: Technohands Co., Ltd., type: XDT50-45, price: around 1,000 USD). We thus realized wavefront-division, near-common-path interferometry with strong robustness against mechanical vibrations, so the palm-size Fourier spectrometer could be built without an anti-vibration system. We were also able to use a small, low-cost middle-infrared camera, an uncooled VOx microbolometer array (pixel array: 336 × 256, pixel pitch: 17 μm, frame rate: 60 Hz, maker: FLIR, type: Quark 336, price: around 5,000 USD). The apparatus can be operated by a single-board computer (Raspberry Pi), so the total cost was less than 10,000 USD. We joined the KAMOME-PJ (Kanagawa Advanced MOdule for Material Evaluation Project) with DRONE FACTORY Corp., KUUSATSU Corp., and Fuji Imvac Inc., and we successfully obtained middle-infrared spectroscopic imaging with a multi-copter drone.
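
    The principle behind the instrument above, recovering a spectrum from an interferogram recorded while the optical path difference is scanned, can be sketched numerically. The scan length matches the 4.5 mm piezo stroke quoted in the abstract, but the source and sampling values are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of Fourier-transform spectroscopy: the spectrum is the
# Fourier transform of the interferogram recorded over an optical-path-
# difference (OPD) scan. A monochromatic 10 um source is assumed here
# for illustration; only the 4.5 mm stroke comes from the abstract.

n = 1024
opd = np.linspace(0, 4.5e-3, n)          # OPD scan in metres (4.5 mm stroke)
sigma = 1.0 / 10e-6                       # wavenumber of a 10 um source, m^-1
interferogram = 1 + np.cos(2 * np.pi * sigma * opd)

spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
wavenumbers = np.fft.rfftfreq(n, d=opd[1] - opd[0])   # spectral axis, m^-1

peak_wavelength = 1.0 / wavenumbers[np.argmax(spectrum)]
print(f"recovered wavelength: {peak_wavelength * 1e6:.1f} um")
```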

  14. Evolution of the SOFIA tracking control system

    NASA Astrophysics Data System (ADS)

    Fiebig, Norbert; Jakob, Holger; Pfüller, Enrico; Röser, Hans-Peter; Wiedemann, Manuel; Wolf, Jürgen

    2014-07-01

    The airborne observatory SOFIA (Stratospheric Observatory for Infrared Astronomy) is undergoing a modernization of its tracking system. This included new, highly sensitive tracking cameras, control computers, filter wheels and other equipment, as well as a major redesign of the control software. The experiences along the migration path from an aged 19-inch VMEbus-based control system to modern industrial PCs, from the VxWorks real-time operating system to embedded Linux, and to a state-of-the-art software architecture are presented. Further, the concept of operating the new camera also as a scientific instrument, in parallel to tracking, is presented.

  15. Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation

    PubMed Central

    Airò Farulla, Giuseppe; Pianu, Daniele; Cempini, Marco; Cortese, Mario; Russo, Ludovico O.; Indaco, Marco; Nerino, Roberto; Chimienti, Antonio; Oddo, Calogero M.; Vitiello, Nicola

    2016-01-01

    Vision-based Pose Estimation (VPE) represents a non-invasive solution for smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver) even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master–slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While rehabilitative exercises are performed, the master unit evaluates the 3D position of a human operator’s hand joints in real time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates hand movements and an external grip sensor records interaction forces, which are fed back to the operator-therapist, allowing a direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging our system, the operator was able to directly control the volunteers’ hand movements. PMID:26861333
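
    A minimal sketch of the master-to-slave flow described above: vision-estimated joint angles are clamped to an assumed safe actuation range before being sent to the exoskeleton. The range and function names are hypothetical, not the HX exoskeleton's real limits.

```python
# Illustrative master->slave step: hand-joint angles estimated by the
# vision front end are clamped to the actuator's safe range before being
# used as position commands. The limits below are assumptions for
# illustration, not the HX exoskeleton's real specifications.

SAFE_RANGE = (0.0, 90.0)  # assumed flexion limits in degrees

def to_slave_command(estimated_angles):
    """Clamp vision-estimated joint angles into the actuator's safe range."""
    lo, hi = SAFE_RANGE
    return [min(max(a, lo), hi) for a in estimated_angles]

print(to_slave_command([-5.0, 42.0, 120.0]))  # [0.0, 42.0, 90.0]
```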

  16. Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation.

    PubMed

    Airò Farulla, Giuseppe; Pianu, Daniele; Cempini, Marco; Cortese, Mario; Russo, Ludovico O; Indaco, Marco; Nerino, Roberto; Chimienti, Antonio; Oddo, Calogero M; Vitiello, Nicola

    2016-02-05

    Vision-based Pose Estimation (VPE) represents a non-invasive solution for smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver) even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master-slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While rehabilitative exercises are performed, the master unit evaluates the 3D position of a human operator's hand joints in real time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates hand movements and an external grip sensor records interaction forces, which are fed back to the operator-therapist, allowing a direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging our system, the operator was able to directly control the volunteers' hand movements.

  17. Concurrent image-based visual servoing with adaptive zooming for non-cooperative rendezvous maneuvers

    NASA Astrophysics Data System (ADS)

    Pomares, Jorge; Felicetti, Leonard; Pérez, Javier; Emami, M. Reza

    2018-02-01

    An image-based servo controller for the guidance of a spacecraft during non-cooperative rendezvous is presented in this paper. The controller directly utilizes the visual features from image frames of a target spacecraft for computing both attitude and orbital maneuvers concurrently. The utilization of adaptive optics, such as zooming cameras, is also addressed through developing an invariant-image servo controller. The controller allows for performing rendezvous maneuvers independently from the adjustments of the camera focal length, improving the performance and versatility of maneuvers. The stability of the proposed control scheme is proven analytically in the invariant space, and its viability is explored through numerical simulations.
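
    The classical image-based visual servoing law that such controllers build on computes a camera velocity from the image-feature error, v = -λ L⁺ (s - s*). A minimal numerical sketch with a toy interaction matrix follows; it is not the paper's invariant-space formulation, which additionally tolerates focal-length changes.

```python
import numpy as np

# Sketch of the classical IBVS law v = -gain * pinv(L) @ (s - s_star),
# where L is the interaction matrix mapping camera velocity to image-
# feature velocity. The matrix below is a toy placeholder chosen for
# illustration, not one derived from a real camera model.

def ibvs_velocity(L, s, s_star, gain=0.5):
    """Camera velocity that drives the feature error toward zero."""
    return -gain * np.linalg.pinv(L) @ (s - s_star)

L = np.array([[1.0, 0.0, 0.2],
              [0.0, 1.0, -0.1]])   # toy 2-feature x 3-DOF interaction matrix
s = np.array([0.4, -0.2])          # measured features
s_star = np.array([0.0, 0.0])      # desired features
v = ibvs_velocity(L, s, s_star)

# one Euler step of the feature dynamics s_dot = L @ v shrinks the error
s_next = s + L @ v
```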

  18. Voss with Pille TLD reader

    NASA Image and Video Library

    2001-06-26

    ISS002-E-7814 (26 June 2001) --- James S. Voss, Expedition Two flight engineer, sets up the Human Research Facility's (HRF) Dosimetric Mapping (DOSMAP) Power Distribution Unit (PDU) in Destiny. The image was taken with a digital still camera.

  19. Commander Truly on aft flight deck holding communication kit assembly (ASSY)

    NASA Technical Reports Server (NTRS)

    1983-01-01

    On aft flight deck, Commander Truly holds communication kit assembly (ASSY) headset (HDST) interface unit (HIU) and mini-HDST in front of the onorbit station. HASSELBLAD camera is positioned on overhead window W8.

  20. Low-cost uncooled VOx infrared camera development

    NASA Astrophysics Data System (ADS)

    Li, Chuan; Han, C. J.; Skidmore, George D.; Cook, Grady; Kubala, Kenny; Bates, Robert; Temple, Dorota; Lannon, John; Hilton, Allan; Glukh, Konstantin; Hardy, Busbee

    2013-06-01

    The DRS Tamarisk® 320 camera, introduced in 2011, is a low-cost commercial camera based on the 17 µm pixel pitch 320×240 VOx microbolometer technology. A higher resolution 17 µm pixel pitch 640×480 Tamarisk® 640 has also been developed and is now in production serving the commercial markets. Recently, under the DARPA-sponsored Low Cost Thermal Imager-Manufacturing (LCTI-M) program and an internal project, DRS is leading a team of industrial experts from FiveFocal, RTI International and MEMSCAP to develop a small form factor uncooled infrared camera for the military and commercial markets. The objective of the DARPA LCTI-M program is to develop a low-SWaP camera (<3.5 cm3 in volume and <500 mW in power consumption) that costs less than US $500 at a 10,000 units per month production rate. To meet this challenge, DRS is developing several innovative technologies, including a small pixel pitch 640×512 VOx uncooled detector, an advanced digital ROIC and low-power miniature camera electronics. In addition, DRS and its partners are developing innovative manufacturing processes to reduce production cycle time and costs, including wafer-scale optics and vacuum-packaging manufacturing and a 3-dimensional integrated camera assembly. This paper provides an overview of the DRS Tamarisk® project and LCTI-M related uncooled technology development activities. Highlights of recent progress and challenges are also discussed. It should be noted that BAE Systems and Raytheon Vision Systems are also participants in the DARPA LCTI-M program.

  1. KSC-2009-2982

    NASA Image and Video Library

    2009-05-08

    CAPE CANAVERAL, Fla. – On Launch Pad 39A at NASA's Kennedy Space Center in Florida, space shuttle Atlantis' payload bay is filled with hardware for the STS-125 mission to service NASA's Hubble Space Telescope. From the bottom are the Flight Support System with the Soft Capture mechanism and Multi-Use Lightweight Equipment Carrier with the Science Instrument Command and Data Handling Unit, or SIC&DH; the Orbital Replacement Unit Carrier with the Cosmic Origins Spectrograph, or COS, and an IMAX 3D camera; and the Super Lightweight Interchangeable Carrier with the Wide Field Camera 3. Atlantis' crew will service NASA's Hubble Space Telescope for the fifth and final time. The flight will include five spacewalks during which astronauts will refurbish and upgrade the telescope with state-of-the-art science instruments. As a result, Hubble's capabilities will be expanded and its operational lifespan extended through at least 2014. Photo credit: NASA/Kim Shiflett

  2. KSC-2009-2981

    NASA Image and Video Library

    2009-05-08

    CAPE CANAVERAL, Fla. – On Launch Pad 39A at NASA's Kennedy Space Center in Florida, space shuttle Atlantis' payload bay is filled with hardware for the STS-125 mission to service NASA's Hubble Space Telescope. At the bottom are the Flight Support System with the Soft Capture mechanism and Multi-Use Lightweight Equipment Carrier with the Science Instrument Command and Data Handling Unit, or SIC&DH. At center is the Orbital Replacement Unit Carrier with the Cosmic Origins Spectrograph, or COS, and an IMAX 3D camera. At top is the Super Lightweight Interchangeable Carrier with the Wide Field Camera 3. Atlantis' crew will service NASA's Hubble Space Telescope for the fifth and final time. The flight will include five spacewalks during which astronauts will refurbish and upgrade the telescope with state-of-the-art science instruments. As a result, Hubble's capabilities will be expanded and its operational lifespan extended through at least 2014. Photo credit: NASA/Kim Shiflett

  3. KSC-2009-2980

    NASA Image and Video Library

    2009-05-08

    CAPE CANAVERAL, Fla. – On Launch Pad 39A at NASA's Kennedy Space Center in Florida, space shuttle Atlantis' payload bay is filled with hardware for the STS-125 mission to service NASA's Hubble Space Telescope. From the bottom are the Flight Support System with the Soft Capture mechanism and Multi-Use Lightweight Equipment Carrier with the Science Instrument Command and Data Handling Unit, or SIC&DH. At center is the Orbital Replacement Unit Carrier with the Cosmic Origins Spectrograph, or COS, and an IMAX 3D camera. At top is the Super Lightweight Interchangeable Carrier with the Wide Field Camera 3. Atlantis' crew will service NASA's Hubble Space Telescope for the fifth and final time. The flight will include five spacewalks during which astronauts will refurbish and upgrade the telescope with state-of-the-art science instruments. As a result, Hubble's capabilities will be expanded and its operational lifespan extended through at least 2014. Photo credit: NASA/Kim Shiflett

  4. Pi of the Sky full system and the new telescope

    NASA Astrophysics Data System (ADS)

    Mankiewicz, L.; Batsch, T.; Castro-Tirado, A.; Czyrkowski, H.; Cwiek, A.; Cwiok, M.; Dabrowski, R.; Jelínek, M.; Kasprowicz, G.; Majcher, A.; Majczyna, A.; Malek, K.; Nawrocki, K.; Obara, L.; Opiela, R.; Piotrowski, L. W.; Siudek, M.; Sokolowski, M.; Wawrzaszek, R.; Wrochna, G.; Zaremba, M.; Żarnecki, A. F.

    2014-12-01

    The Pi of the Sky is a system of wide field of view robotic telescopes, which search for short-timescale astrophysical phenomena, especially prompt optical GRB emission. The system was designed for autonomous operation, monitoring a large fraction of the sky to a depth of 12^m-13^m and with time resolution of the order of 1-10 seconds. The system design and observation strategy were successfully tested with a prototype detector operational at Las Campanas Observatory, Chile, from 2004-2009, which was moved to San Pedro de Atacama Observatory in March 2011. In October 2010 the first unit of the final Pi of the Sky detector system, with 4 CCD cameras, was successfully installed at the INTA El Arenosillo Test Centre in Spain. In July 2013 three more units (12 CCD cameras) were commissioned and installed, together with the first one, on a new platform at INTA, extending sky coverage to about 6000 square degrees.

  5. KENNEDY SPACE CENTER, FLA. -- NASA Deputy Associate Administrator for Space Station and Shuttle Programs Michael Kostelnik (left) discusses some of the working parts inside the nose of Shuttle Discovery in Orbiter Processing Facility Bay 3 with a United Space Alliance (USA) technician (back to camera). NASA and USA Space Shuttle program management are participating in a leadership workday. The day is intended to provide management with an in-depth, hands-on look at Shuttle processing activities at KSC.

    NASA Image and Video Library

    2003-12-19

    KENNEDY SPACE CENTER, FLA. -- NASA Deputy Associate Administrator for Space Station and Shuttle Programs Michael Kostelnik (left) discusses some of the working parts inside the nose of Shuttle Discovery in Orbiter Processing Facility Bay 3 with a United Space Alliance (USA) technician (back to camera). NASA and USA Space Shuttle program management are participating in a leadership workday. The day is intended to provide management with an in-depth, hands-on look at Shuttle processing activities at KSC.

  6. STS-53 Discovery, OV-103, DOD Hercules digital electronic imagery equipment

    NASA Technical Reports Server (NTRS)

    1992-01-01

    STS-53 Discovery, Orbiter Vehicle (OV) 103, Department of Defense (DOD) mission Hand-held Earth-oriented Real-time Cooperative, User-friendly, Location, targeting, and Environmental System (Hercules) spaceborne experiment equipment is documented in this table top view. HERCULES is a joint NAVY-NASA-ARMY payload designed to provide real-time high resolution digital electronic imagery and geolocation (latitude and longitude determination) of earth surface targets of interest. The HERCULES system consists of (from left to right): a specially modified GRID Systems portable computer mounted atop the NASA-developed Playback-Downlink Unit (PDU) and the Naval Research Laboratory (NRL) developed HERCULES Attitude Processor (HAP); the NASA-developed Electronic Still Camera (ESC) Electronics Box (ESCEB) including removable imagery data storage disks and various connecting cables; the ESC (a NASA-modified Nikon F-4 camera) mounted atop the NRL HERCULES Inertial Measurement Unit (HIMU) containing the three-axis ring-laser gyro.

  7. STS-53 Discovery, OV-103, DOD Hercules digital electronic imagery equipment

    NASA Image and Video Library

    1992-04-22

    STS-53 Discovery, Orbiter Vehicle (OV) 103, Department of Defense (DOD) mission Hand-held Earth-oriented Real-time Cooperative, User-friendly, Location, targeting, and Environmental System (Hercules) spaceborne experiment equipment is documented in this table top view. HERCULES is a joint NAVY-NASA-ARMY payload designed to provide real-time high resolution digital electronic imagery and geolocation (latitude and longitude determination) of earth surface targets of interest. HERCULES system consists of (from left to right): a specially modified GRID Systems portable computer mounted atop NASA developed Playback-Downlink Unit (PDU) and the Naval Research Laboratory (NRL) developed HERCULES Attitude Processor (HAP); the NASA-developed Electronic Still Camera (ESC) Electronics Box (ESCEB) including removable imagery data storage disks and various connecting cables; the ESC (a NASA modified Nikon F-4 camera) mounted atop the NRL HERCULES Inertial Measurement Unit (HIMU) containing the three-axis ring-laser gyro.

  8. The CTIO Acquisition CCD-TV camera design

    NASA Astrophysics Data System (ADS)

    Schmidt, Ricardo E.

    1990-07-01

    A CCD-based Acquisition TV Camera has been developed at CTIO to replace the existing ISIT units. In a 60-second exposure, the new camera shows a sixfold improvement in sensitivity over an ISIT used with a Leaky Memory. Integration times can be varied over a 0.5 to 64 second range. The CCD, contained in an evacuated enclosure, is operated at -45 C. Only the image section, an area of 8.5 mm × 6.4 mm, is exposed to light. Pixel size is 22 microns, and either no binning or 2 × 2 binning can be selected. The typical readout rates used vary between 3.5 and 9 microseconds/pixel. Images are stored in a PC/XT/AT, which generates RS-170 video. The contrast in the RS-170 frames is automatically enhanced by the software.
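
    The 2 × 2 binning option mentioned above sums each 2 × 2 block of pixels into one superpixel, trading resolution for sensitivity and readout speed. A software emulation of the idea (on a real CCD the binning happens during charge transfer, not in software):

```python
import numpy as np

# Software emulation of 2x2 CCD binning: each 2x2 block of pixels is
# summed into one superpixel. Odd trailing rows/columns are cropped.

def bin2x2(frame):
    h, w = frame.shape
    cropped = frame[:h - h % 2, :w - w % 2]
    return cropped.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

frame = np.arange(16).reshape(4, 4)
print(bin2x2(frame))
# [[10 18]
#  [42 50]]
```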

  9. Robots Save Soldiers' Lives Overseas (MarcBot)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Marshall Space Flight Center mobile communications platform designs for future lunar missions led to improvements to fleets of tactical robots now being deployed by U.S. Army. The Multi-function Agile Remote Control Robot (MARCbot) helps soldiers search out and identify improvised explosive devices. NASA used the MARCbots to test its mobile communications platform, and in working with it, made the robot faster while adding capabilities -- upgrading to a digital camera, encrypting the controllers and video transmission, as well as increasing the range and adding communications abilities. They also simplified the design, providing more plug-and-play sensors and replacing some of the complex electronics with more trouble-free, low-cost components. Applied Geo Technology, a tribally-owned corporation in Choctaw, Mississippi, was given the task of manufacturing the modified robots. The company is now producing 40 units per month, 300 of which have already been deployed overseas.

  10. Towards the Development of a Smart Flying Sensor: Illustration in the Field of Precision Agriculture.

    PubMed

    Hernandez, Andres; Murcia, Harold; Copot, Cosmin; De Keyser, Robin

    2015-07-10

    Sensing is an important element in quantifying productivity and product quality and in making decisions. Applications such as mapping, surveillance, exploration and precision agriculture require a reliable platform for remote sensing. This paper presents the first steps towards the development of a smart flying sensor based on an unmanned aerial vehicle (UAV). The concept of smart remote sensing is illustrated and its performance tested for the task of mapping the volume of grain inside a trailer during forage harvesting. The novelty lies in: (1) the development of a position-estimation method with time-delay compensation based on inertial measurement unit (IMU) sensors and image processing; (2) a method to build a 3D map using information obtained from a regular camera; and (3) the design and implementation of a path-following control algorithm using model predictive control (MPC). Experimental results on a lab-scale system validate the effectiveness of the proposed methodology.
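
    Point (1) above, time-delay compensation, can be sketched with a constant-velocity prediction: the delayed camera-based position estimate is propagated forward over the latency using the IMU-derived velocity. The model and numbers are illustrative assumptions, not the paper's actual estimator.

```python
# Hedged sketch of time-delay compensation: a camera-based position
# estimate arrives with latency, so it is propagated forward using the
# velocity measured by the IMU. A constant-velocity model is assumed
# here purely for illustration.

def compensate_delay(pos_delayed, velocity, delay_s):
    """Shift a delayed position estimate to 'now' using an IMU velocity."""
    return [p + v * delay_s for p, v in zip(pos_delayed, velocity)]

# position measured 0.25 s ago, currently moving at 2 m/s along x
print(compensate_delay([2.0, 0.0, 1.0], [2.0, 0.0, 0.0], 0.25))
# [2.5, 0.0, 1.0]
```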

  11. Towards the Development of a Smart Flying Sensor: Illustration in the Field of Precision Agriculture

    PubMed Central

    Hernandez, Andres; Murcia, Harold; Copot, Cosmin; De Keyser, Robin

    2015-01-01

    Sensing is an important element in quantifying productivity and product quality and in making decisions. Applications such as mapping, surveillance, exploration and precision agriculture require a reliable platform for remote sensing. This paper presents the first steps towards the development of a smart flying sensor based on an unmanned aerial vehicle (UAV). The concept of smart remote sensing is illustrated and its performance tested for the task of mapping the volume of grain inside a trailer during forage harvesting. The novelty lies in: (1) the development of a position-estimation method with time-delay compensation based on inertial measurement unit (IMU) sensors and image processing; (2) a method to build a 3D map using information obtained from a regular camera; and (3) the design and implementation of a path-following control algorithm using model predictive control (MPC). Experimental results on a lab-scale system validate the effectiveness of the proposed methodology. PMID:26184205

  12. Data-Acquisition Software for PSP/TSP Wind-Tunnel Cameras

    NASA Technical Reports Server (NTRS)

    Amer, Tahani R.; Goad, William K.

    2005-01-01

    Wing-Viewer is a computer program for acquisition and reduction of image data acquired by any of five different scientific-grade commercial electronic cameras used at Langley Research Center to observe wind-tunnel models coated with pressure- or temperature-sensitive paints (PSP/TSP). Wing-Viewer provides full automation of camera operation and acquisition of image data, and has limited data-preprocessing capability for quick viewing of the results of PSP/TSP test images. Wing-Viewer satisfies a requirement for a standard interface between all the cameras and a single personal computer. Written using Microsoft Visual C++ with the Microsoft Foundation Class Library as a framework, Wing-Viewer has the ability to communicate with the C/C++ software libraries that run on the controller circuit cards of all five cameras.
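
    The "standard interface between all the cameras and a single personal computer" idea can be sketched as one abstract driver API that each vendor-specific camera implements. The class and method names below are hypothetical, and Wing-Viewer itself is written in C++, not Python; this is only an illustration of the pattern.

```python
from abc import ABC, abstractmethod

# Sketch of a uniform driver API fronting several vendor cameras.
# Names are hypothetical; a real driver would wrap the vendor's
# C/C++ library rather than return placeholder bytes.

class CameraDriver(ABC):
    @abstractmethod
    def acquire(self, exposure_s: float) -> bytes:
        """Return raw image data for one exposure."""

class MockCamera(CameraDriver):
    """Stand-in driver used here so the sketch is runnable."""
    def acquire(self, exposure_s: float) -> bytes:
        return b"\x00" * 16  # placeholder frame

def grab(camera: CameraDriver, exposure_s: float = 0.1) -> int:
    """Application code sees only the abstract interface."""
    frame = camera.acquire(exposure_s)
    return len(frame)

print(grab(MockCamera()))  # 16
```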

  13. Reductions in injury crashes associated with red light camera enforcement in oxnard, california.

    PubMed

    Retting, Richard A; Kyrychenko, Sergey Y

    2002-11-01

    This study estimated the impact of red light camera enforcement on motor vehicle crashes in one of the first US communities to employ such cameras: Oxnard, California. Crash data were analyzed for Oxnard and for 3 comparison cities. Changes in crash frequencies were compared for Oxnard and the control cities, and for signalized and nonsignalized intersections, by means of a generalized linear regression model. Overall, crashes at signalized intersections throughout Oxnard were reduced by 7% and injury crashes were reduced by 29%. Right-angle crashes, those most associated with red light violations, were reduced by 32%; right-angle crashes involving injuries were reduced by 68%. Because red light cameras can be a permanent component of the transportation infrastructure, crash reductions attributed to camera enforcement should be sustainable.

  14. The upgrade of the H.E.S.S. cameras

    NASA Astrophysics Data System (ADS)

    Giavitto, Gianluca; Ashton, Terry; Balzer, Arnim; Berge, David; Brun, Francois; Chaminade, Thomas; Delagnes, Eric; Fontaine, Gerard; Füßling, Matthias; Giebels, Berrie; Glicenstein, Jean-Francois; Gräber, Tobias; Hinton, Jim; Jahnke, Albert; Klepser, Stefan; Kossatz, Marko; Kretzschmann, Axel; Lefranc, Valentin; Leich, Holger; Lüdecke, Hartmut; Lypova, Iryna; Manigot, Pascal; Marandon, Vincent; Moulin, Emmanuel; de Naurois, Mathieu; Nayman, Patrick; Ohm, Stefan; Penno, Marek; Ross, Duncan; Salek, David; Schade, Markus; Schwab, Thomas; Simoni, Rachel; Stegmann, Christian; Steppa, Constantin; Thornhill, Julian; Toussnel, Francois

    2017-01-01

    The High Energy Stereoscopic System (H.E.S.S.) is an array of five imaging atmospheric Cherenkov telescopes (IACT) located in Namibia. In order to assure the continuous operation of H.E.S.S. at its full sensitivity until and possibly beyond the advent of CTA, the older cameras, installed in 2003, are currently undergoing an extensive upgrade. Its goals are reducing the system failure rate, reducing the dead time and improving the overall performance of the array. All camera components have been upgraded, except the mechanical structure and the photo-multiplier tubes (PMTs). Novel technical solutions have been introduced: the upgraded readout electronics is based on the NECTAr analog memory chip; the control of the hardware is carried out by an FPGA coupled to an embedded ARM computer; and the control software was rewritten from scratch and is based on modern C++ open-source libraries. These hardware and software solutions offer very good performance, robustness and flexibility. The first camera was fielded in July 2015 and has been successfully commissioned; the remaining cameras are scheduled to be upgraded in September 2016. The present contribution describes the design, the testing and the performance of the new H.E.S.S. camera and its components.

  15. Design of low noise imaging system

    NASA Astrophysics Data System (ADS)

    Hu, Bo; Chen, Xiaolai

    2017-10-01

    In order to meet the needs of engineering applications for a low-noise imaging system operating in global-shutter mode, a complete imaging system is designed based on the SCMOS (Scientific CMOS) image sensor CIS2521F. The paper introduces the hardware circuit and software system design. Based on an analysis of the key indexes and technologies of the imaging system, the paper selects chips and adopts SCMOS + FPGA + DDRII + Camera Link as the processing architecture. It then introduces the entire system workflow and the power supply and distribution unit design. The software system, which consists of the SCMOS control module, image acquisition module, data cache control module and transmission control module, is designed in Verilog and driven on a Xilinx FPGA. The imaging experimental results show that the imaging system exhibits a 2560×2160 pixel resolution and has a maximum frame rate of 50 fps. The imaging quality of the system satisfies the index requirements.

  16. A new acquisition and imaging system for environmental measurements: an experience on the Italian cultural heritage.

    PubMed

    Leccese, Fabio; Cagnetti, Marco; Calogero, Andrea; Trinca, Daniele; di Pasquale, Stefano; Giarnetti, Sabino; Cozzella, Lorenzo

    2014-05-23

    A new acquisition system for the remote monitoring of wall paintings has been realized and tested in the field. The system measures temperature and atmospheric pressure in an archeological site where a fresco has been put under observation. The measuring chain has been designed to be used in unfavorable environments where neither electric power nor telecommunication infrastructures are available. The environmental parameters obtained from the local monitoring are then transferred remotely, allowing easier management by experts in the field of conservation of cultural heritage. The local acquisition system uses an electronic card based on microcontrollers and sends the data to a central unit realized with a Raspberry-Pi. The latter manages a high-quality camera to take pictures of the fresco. Finally, to realize remote monitoring at a site not reached by internet signals, a connection based on different communication technologies, such as WiMAX, Ethernet, GPRS and satellite, has been set up.
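
    The microcontroller-to-central-unit link described above can be sketched as parsing short text records on the Raspberry-Pi side before relaying them and triggering the camera; the record format is an assumption made purely for illustration.

```python
# Hedged sketch of the sensor link: the microcontroller board is assumed
# to send readings as short "key=value" text records, which the central
# unit parses into numbers. The format is illustrative, not the paper's.

def parse_record(record: str) -> dict:
    """Parse 'T=21.4;P=1013.2' into {'T': 21.4, 'P': 1013.2}."""
    return {k: float(v) for k, v in (field.split("=") for field in record.split(";"))}

reading = parse_record("T=21.4;P=1013.2")
print(reading)  # {'T': 21.4, 'P': 1013.2}
```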

  17. The PALM-3000 high-order adaptive optics system for Palomar Observatory

    NASA Astrophysics Data System (ADS)

    Bouchez, Antonin H.; Dekany, Richard G.; Angione, John R.; Baranec, Christoph; Britton, Matthew C.; Bui, Khanh; Burruss, Rick S.; Cromer, John L.; Guiwits, Stephen R.; Henning, John R.; Hickey, Jeff; McKenna, Daniel L.; Moore, Anna M.; Roberts, Jennifer E.; Trinh, Thang Q.; Troy, Mitchell; Truong, Tuan N.; Velur, Viswa

    2008-07-01

    Deployed as a multi-user shared facility on the 5.1-meter Hale Telescope at Palomar Observatory, the PALM-3000 high-order upgrade to the successful Palomar Adaptive Optics System will deliver extreme AO correction in the near-infrared, and diffraction-limited images down to visible wavelengths, using both natural and sodium laser guide stars. Wavefront control will be provided by two deformable mirrors, a 3368-active-actuator woofer and a 349-active-actuator tweeter, controlled at up to 3 kHz using an innovative wavefront processor based on a cluster of 17 graphics processing units. A Shack-Hartmann wavefront sensor with selectable pupil sampling will provide high-order wavefront sensing, while an infrared tip/tilt sensor and a visible truth wavefront sensor will provide low-order LGS control. Four back-end instruments are planned at first light: the PHARO near-infrared camera/spectrograph, the SWIFT visible-light integral field spectrograph, Project 1640, a near-infrared coronagraphic integral field spectrograph, and 888Cam, a high-resolution visible-light imager.

  18. A New Acquisition and Imaging System for Environmental Measurements: An Experience on the Italian Cultural Heritage

    PubMed Central

    Leccese, Fabio; Cagnetti, Marco; Calogero, Andrea; Trinca, Daniele; di Pasquale, Stefano; Giarnetti, Sabino; Cozzella, Lorenzo

    2014-01-01

    A new acquisition system for the remote monitoring of wall paintings has been realized and tested in the field. The system measures temperature and atmospheric pressure in an archeological site where a fresco has been put under observation. The measuring chain has been designed to be used in unfavorable environments where neither electric power nor telecommunication infrastructures are available. The environmental parameters obtained from the local monitoring are then transferred remotely, allowing easier management by experts in the field of conservation of cultural heritage. The local acquisition system uses an electronic card based on microcontrollers and sends the data to a central unit realized with a Raspberry-Pi. The latter manages a high-quality camera to take pictures of the fresco. Finally, to realize remote monitoring at a site not reached by internet signals, a connection based on different communication technologies, such as WiMAX, Ethernet, GPRS and satellite, has been set up. PMID:24859030

  19. The Large Synoptic Survey Telescope OCS and TCS models

    NASA Astrophysics Data System (ADS)

    Schumacher, German; Delgado, Francisco

    2010-07-01

    The Large Synoptic Survey Telescope (LSST) is a project envisioned as a system of systems with demanding science, technical, and operational requirements that must perform as a fully integrated unit. The design and implementation of such a system poses significant engineering challenges in requirements analysis, detailed interface definition, and the study of operational modes and control strategies. The OMG Systems Modeling Language (SysML) has been selected as the framework for the systems engineering analysis and documentation for the LSST. Models for the overall system architecture and the different observatory subsystems have been built, describing requirements, structure, interfaces, and behavior. In this paper we show the models for the Observatory Control System (OCS) and the Telescope Control System (TCS), and how this methodology has helped clarify the design and requirements. In one common language, the relationships of the OCS, TCS, Camera, and Data Management subsystems are captured with models of structure, behavior, and requirements, and the traceability between them.
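
    SysML models are graphical, but the traceability idea the abstract ends on, i.e. every requirement linked to the structural elements that satisfy it, can be sketched as a plain data structure and a consistency check. The requirement and subsystem names below are hypothetical placeholders, not items from the actual LSST model:

```python
# Known structural elements of the model (hypothetical names).
subsystems = {"OCS", "TCS", "Camera", "DataManagement"}

# Each requirement is traced to the subsystems that satisfy it.
traceability = {
    "REQ-001 survey cadence": {"OCS"},
    "REQ-002 slew and settle time": {"TCS"},
    "REQ-003 image readout and transfer": {"Camera", "DataManagement"},
}

def untraced(traceability, subsystems):
    """Return requirements that are traced to no known subsystem,
    the basic consistency check a traceability model enables."""
    return [req for req, owners in traceability.items()
            if not owners & subsystems]

assert untraced(traceability, subsystems) == []
```

    The value of keeping such links in one model, as the paper argues for SysML, is exactly that checks like this can be run mechanically instead of by inspection.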

  20. Astronaut Kathryn Thornton on HST photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-05

    S61-E-011 (5 Dec 1993) --- This view of astronaut Kathryn C. Thornton working on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC) and downlinked to ground controllers soon afterward. Thornton, anchored to the end of the Remote Manipulator System (RMS) arm, is installing the +V2 Solar Array Panel as a replacement for the original one removed earlier. Electronic still photography is a relatively new technology that provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

Top