Sample records for front hazard-identification camera

  1. Opportunity Stretches Out 3-D

    NASA Image and Video Library

    2004-02-02

    This is a three-dimensional stereo anaglyph of an image taken by the front hazard-identification camera onboard NASA's Mars Exploration Rover Opportunity, showing the rover's arm in its extended position. 3-D glasses are necessary to view this image.

  2. Slow Progress in Dune (Left Front Wheel)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The left front wheel of NASA's Mars Exploration Rover Opportunity makes slow but steady progress through soft dune material in this movie clip of frames taken by the rover's front hazard identification camera over a period of several days. The sequence starts on Opportunity's 460th martian day, or sol (May 10, 2005) and ends 11 days later. In eight drives during that period, Opportunity advanced a total of 26 centimeters (10 inches) while spinning its wheels enough to have driven 46 meters (151 feet) if there were no slippage. The motion appears to speed up near the end of the clip, but that is an artifact of individual frames being taken less frequently.
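
    The degree of slippage implied by those figures can be checked with a quick calculation; the short Python sketch below (using only the distances quoted in the caption) compares the rover's actual progress against its commanded wheel odometry.

      # Wheel-slip estimate from the figures quoted in the caption above.
      actual_progress_m = 0.26   # net advance over the eight drives (26 centimeters)
      wheel_odometry_m = 46.0    # distance the wheels would have covered with no slippage

      slip_fraction = 1.0 - actual_progress_m / wheel_odometry_m
      print(f"Average slip over the eight drives: {slip_fraction:.1%}")  # about 99.4%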

  3. Twelve Months in Two Minutes: Curiosity's First Year on Mars

    NASA Image and Video Library

    2013-08-01

    A series of 548 images shows the view from a fisheye camera on the front of NASA's Mars rover Curiosity from the day the rover landed in August 2012 through July 2013. The camera is the rover's front Hazard-Avoidance Camera. The scenes include Curiosity collecting its first scoops of Martian soil and collecting a drilled sample from inside a Martian rock.

  4. Approaching Rock Target No. 1

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This 3-D stereo anaglyph image was taken by the Mars Exploration Rover Spirit front hazard-identification camera after the rover's first post-egress drive on Mars Sunday. Engineers drove the rover approximately 3 meters (10 feet) from the Columbia Memorial Station toward the first rock target, seen in the foreground. The football-sized rock was dubbed Adirondack because of its mountain-shaped appearance. Scientists plan to use instruments at the end of the rover's robotic arm to examine the rock and understand how it formed.

  5. Adirondack Under the Microscope

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This image was taken by the Mars Exploration Rover Spirit front hazard-identification camera after the rover's first post-egress drive on Mars Sunday, Jan. 15, 2004. Engineers drove the rover approximately 3 meters (10 feet) from the Columbia Memorial Station toward the first rock target, seen in the foreground. The football-sized rock was dubbed Adirondack because of its mountain-shaped appearance. Scientists have begun using the microscopic imager instrument at the end of the rover's robotic arm to examine the rock and understand how it formed.

  6. Forging Ahead

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This animation shows the front view from the Mars Exploration Rover Opportunity as it drives north towards the eastern edge of the rock outcropping near its landing site at Meridiani Planum, Mars. The movie strings together images taken over the past six martian days, or sols, of its journey, beginning with a 1 meter (3 feet) stroll away from the lander on sol 7. On the 12th sol, Opportunity drove another 3 1/2 meters (11 feet), and then, one sol later, another 1 1/2 meters (5 feet). On its way, the rover twisted and turned in a test of its driving capabilities. This movie is made up of fish-eye images taken by the rover's front hazard-identification camera.

  7. Forging Ahead (linearized)

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This animation shows the front view from the Mars Exploration Rover Opportunity as it drives north towards the eastern edge of the rock outcropping near its landing site at Meridiani Planum, Mars. The movie strings together images taken over the past six martian days, or sols, of its journey, beginning with a 1 meter (3 feet) stroll away from the lander on sol 7. On the 12th sol, Opportunity drove another 3 1/2 meters (11 feet), and then, one sol later, another 1 1/2 meters (5 feet). On its way, the rover twisted and turned in a test of its driving capabilities. This movie is made up of images taken by the rover's front hazard-identification camera, which were corrected for fish-eye distortion.

  8. Inside Victoria Crater for Extended Exploration

    NASA Technical Reports Server (NTRS)

    2007-01-01

    After finishing an in-and-out maneuver to check wheel slippage near the rim of Victoria Crater, NASA's Mars Exploration Rover Opportunity re-entered the crater during the rover's 1,293rd Martian day, or sol (Sept. 13, 2007), to begin a weeks-long exploration of the inner slope.

    Opportunity's front hazard-identification camera recorded this wide-angle view looking down into and across the crater at the end of the day's drive. The rover's position was about six meters (20 feet) inside the rim, in the 'Duck Bay' alcove of the crater.

  9. Impediment to Spirit Drive on Sol 1806

    NASA Technical Reports Server (NTRS)

    2009-01-01

    The hazard avoidance camera on the front of NASA's Mars Exploration Rover Spirit took this image after a drive on the 1,806th Martian day, or sol (January 31, 2009), of Spirit's mission on the surface of Mars.

    The wheel at the bottom right of the image is Spirit's right-front wheel. Because that wheel no longer turns, Spirit drives backwards, dragging it. The drive on Sol 1806 covered about 30 centimeters (1 foot). The rover team had planned a longer drive, but Spirit stopped short, apparently because the right-front wheel encountered the partially buried rock visible next to that wheel.

    The hazard avoidance cameras on the front and back of the rover provide wide-angle views. The hill on the horizon in the right half of this image is Husband Hill. Spirit reached the summit of Husband Hill in 2005.

  10. After Opportunity's First Drive in Six Weeks

    NASA Technical Reports Server (NTRS)

    2007-01-01

    NASA's Mars Exploration Rover Opportunity used its front hazard-identification camera to obtain this image at the end of a drive on the rover's 1,271st sol, or Martian day (Aug. 21, 2007).

    Due to sun-obscuring dust storms limiting the rover's supply of solar energy, Opportunity had not driven since sol 1,232 (July 12, 2007). On sol 1,271, after the sky above Opportunity had been gradually clearing for more than two weeks, the rover rolled 13.38 meters (44 feet). Wheel tracks are visible in front of the rover because the drive ended with a short test of driving backwards.

    Opportunity's turret of four tools at the end of the robotic arm fills the center of the image. Victoria Crater, site of the rover's next science targets, lies ahead.

  11. Opportunity at Work Inside Victoria Crater

    NASA Technical Reports Server (NTRS)

    2007-01-01

    NASA Mars Exploration Rover Opportunity used its front hazard-identification camera to capture this wide-angle view of its robotic arm extended to a rock in a bright-toned layer inside Victoria Crater.

    The image was taken during the rover's 1,322nd Martian day, or sol (Oct. 13, 2007).

    Victoria Crater has a scalloped shape of alternating alcoves and promontories around the crater's circumference. Opportunity descended into the crater two weeks earlier, within an alcove called 'Duck Bay.' Counterclockwise around the rim, just to the right of the arm in this image, is a promontory called 'Cabo Frio.'

  12. Curiosity View From Below

    NASA Image and Video Library

    2012-08-17

    The Curiosity engineering team created this cylindrical projection view from images taken on Sol 0 by the front hazard-avoidance cameras underneath the deck of NASA's Curiosity rover. Pictured here are the rover's pigeon-toed front wheels.

  13. Turning in the Testbed

    NASA Image and Video Library

    2004-01-13

    This image, taken in the JPL In-Situ Instruments Laboratory or Testbed, shows the view from the front hazard avoidance cameras on the Mars Exploration Rover Spirit after the rover has backed up and turned 45 degrees counterclockwise.

  14. View From Camera Not Used During Curiosity's First Six Months on Mars

    NASA Image and Video Library

    2017-12-08

    This view of Curiosity's left-front and left-center wheels and of marks made by wheels on the ground in the "Yellowknife Bay" area comes from one of six cameras used on Mars for the first time more than six months after the rover landed. The left Navigation Camera (Navcam) linked to Curiosity's B-side computer took this image during the 223rd Martian day, or sol, of Curiosity's work on Mars (March 22, 2013). The wheels are 20 inches (50 centimeters) in diameter.

    Curiosity carries a pair of main computers, redundant to each other, in order to have a backup available if one fails. Each of the computers, A-side and B-side, also has other redundant subsystems linked to just that computer. Curiosity operated on its A-side from before the August 2012 landing until Feb. 28, when engineers commanded a switch to the B-side in response to a memory glitch on the A-side. One set of activities after switching to the B-side computer has been to check the six engineering cameras that are hard-linked to that computer.

    The rover's science instruments, including five science cameras, can each be operated by either the A-side or B-side computer, whichever is active. However, each of Curiosity's 12 engineering cameras is linked to just one of the computers. The engineering cameras are the Navigation Camera (Navcam), the Front Hazard-Avoidance Camera (Front Hazcam) and the Rear Hazard-Avoidance Camera (Rear Hazcam). Each of those three named cameras has four cameras as part of it: two stereo pairs, with one pair linked to each computer. Only the pairs linked to the active computer can be used, and the A-side computer was active from before the August landing until Feb. 28. All six of the B-side engineering cameras have been used during March 2013 and checked out OK.

  15. Curiosity Drill in Place for Load Testing Before Drilling

    NASA Image and Video Library

    2013-01-28

    The percussion drill in the turret of tools at the end of the robotic arm of NASA's Mars rover Curiosity has been positioned in contact with the rock surface in this image from the rover's front Hazard-Avoidance Camera (Hazcam).

  16. Up-Close Look at 'Bread-Basket'

    NASA Technical Reports Server (NTRS)

    2004-01-01

    NASA's Mars Exploration Rover Spirit took this image with its front hazard-avoidance camera on sol 175 (June 30, 2004). It captures the instrument deployment device in perfect position as the rover uses its microscopic imager to get an up-close look at the rock target 'Bread-Basket.'

  17. Institutionalizing fire safety in making land use and development decisions

    Treesearch

    Marie-Annette Johnson; Marc Mullenix

    1995-01-01

    Because of three major wildland fires in the past 5 years along the Front Range of the Boulder County area in Colorado, current and potential residents should be told of steps that can reduce the risks of these fire hazards. The Wildfire Hazard Identification and Mitigation System (WHIMS) is used by the county and city to assist in the identification and mitigation of...

  18. Cutting the Cord

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This animation shows the view from the front hazard avoidance cameras on the Mars Exploration Rover Spirit as the rover turns 45 degrees clockwise. This maneuver is the first step in a 3-point turn that will rotate the rover 115 degrees to face west. The rover must make this turn before rolling off the lander because airbags are blocking it from exiting off the front lander petal. Before this crucial turn could take place, engineers instructed the rover to cut the final cord linking it to the lander. The turn took around 30 minutes to complete.

  19. Numerical analysis of wavefront measurement characteristics by using plenoptic camera

    NASA Astrophysics Data System (ADS)

    Lv, Yang; Ma, Haotong; Zhang, Xuanzhe; Ning, Yu; Xu, Xiaojun

    2016-01-01

    To take advantage of a large-diameter telescope for high-resolution imaging of extended targets, it is necessary to detect and compensate the wave-front aberrations induced by atmospheric turbulence. Data recorded by plenoptic cameras can be used to extract the wave-front phases associated with the atmospheric turbulence in an astronomical observation. Recovering the wave-front phase tomographically requires a method for simultaneous, multi-perspective wave-front detection over a large field of view (FOV), and the plenoptic camera possesses this unique advantage. Our paper focuses on the capability of the plenoptic camera to extract the wave-front from different perspectives simultaneously. We built the corresponding theoretical model and simulation system to study the wave-front measurement characteristics of a plenoptic camera used as a wave-front sensor, and we evaluated its performance with different types of wave-front aberration corresponding to different applications. Finally, we performed multi-perspective wave-front sensing in simulation with the plenoptic camera as the wave-front sensor. This research is helpful for selecting and designing the parameters of a plenoptic camera used as a multi-perspective, large-FOV wave-front sensor, which is expected to address the problem of large-FOV wave-front detection and can be used for adaptive optics in giant telescopes.

  20. Peeling Back the Layers of Mars

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a 3-D model of the trench excavated by the Mars Exploration Rover Opportunity on the 23rd day, or sol, of its mission. An oblique view of the trench from a bit above and to the right of the rover's right wheel is shown. The model was generated from images acquired by the rover's front hazard-avoidance cameras.

  1. Bright Soil Churned by Spirit's Sol 1861 Drive

    NASA Technical Reports Server (NTRS)

    2009-01-01

    NASA's Mars Exploration Rover Spirit drove 22.7 meters (74 feet) toward the southwest on the 1,861st Martian day, or sol, of Spirit's mission on Mars (March 28, 2009). After the drive, the rover took this image with its front hazard-avoidance camera, looking back at the tracks from the drive.

    As usual since losing the use of its right-front wheel in 2006, Spirit drove backwards. The immobile right-front wheel churned up a long stripe of bright soil during this drive. Where Spirit has found such bright soil in the past, subsequent analysis of the composition found concentrations of sulfur or silica that testified to past action of water at the site. When members of the rover team saw the large quantity of bright soil exposed by the Sol 1861 drive, they quickly laid plans to investigate the composition with Spirit's alpha particle X-ray spectrometer.

    The Sol 1861 drive took the rover past the northwest corner of the low plateau called 'Home Plate,' making progress on a route around the western side of Home Plate. The edge of Home Plate forms the horizon on the right side of this image. Husband Hill is on the horizon on the left side. For scale, the parallel rover wheel tracks are about 1 meter (40 inches) apart. The rover's hazard-avoidance cameras take 'fisheye' wide-angle images.

  2. Opportunity's First Dip into Victoria Crater

    NASA Technical Reports Server (NTRS)

    2007-01-01

    NASA's Mars Exploration Rover Opportunity entered Victoria Crater during the rover's 1,291st Martian day, or sol (Sept. 11, 2007). The rover team commanded Opportunity to drive just far enough into the crater to get all six wheels onto the inner slope, and then to back out again and assess how much the wheels slipped on the slope. The driving commands for the day included a precaution for the rover to stop driving if the wheels were slipping more than 40 percent. Slippage exceeded that amount on the last step of the drive, so Opportunity stopped with its front pair of wheels still inside the crater. The rover team planned to assess results of the drive, then start Opportunity on an extended exploration inside the crater.

    This wide-angle view taken by Opportunity's front hazard-identification camera at the end of the day's driving shows the wheel tracks created by the short dip into the crater. The left half of the image looks across an alcove informally named 'Duck Bay' toward a promontory called 'Cape Verde' clockwise around the crater wall. The right half of the image looks across the main body of the crater, which is 800 meters (half a mile) in diameter.

  3. Mars Exploration Rover engineering cameras

    USGS Publications Warehouse

    Maki, J.N.; Bell, J.F.; Herkenhoff, K. E.; Squyres, S. W.; Kiely, A.; Klimesh, M.; Schwochert, M.; Litwin, T.; Willson, R.; Johnson, Aaron H.; Maimone, M.; Baumgartner, E.; Collins, A.; Wadsworth, M.; Elliot, S.T.; Dingizian, A.; Brown, D.; Hagerott, E.C.; Scherr, L.; Deen, R.; Alexander, D.; Lorre, J.

    2003-01-01

    NASA's Mars Exploration Rover (MER) Mission will place a total of 20 cameras (10 per rover) onto the surface of Mars in early 2004. Fourteen of the 20 cameras are designated as engineering cameras and will support the operation of the vehicles on the Martian surface. Images returned from the engineering cameras will also be of significant importance to the scientific community for investigative studies of rock and soil morphology. The Navigation cameras (Navcams, two per rover) are a mast-mounted stereo pair each with a 45° square field of view (FOV) and an angular resolution of 0.82 milliradians per pixel (mrad/pixel). The Hazard Avoidance cameras (Hazcams, four per rover) are a body-mounted, front- and rear-facing set of stereo pairs, each with a 124° square FOV and an angular resolution of 2.1 mrad/pixel. The Descent camera (one per rover), mounted to the lander, has a 45° square FOV and will return images with spatial resolutions of ~4 m/pixel. All of the engineering cameras utilize broadband visible filters and 1024 x 1024 pixel detectors. Copyright 2003 by the American Geophysical Union.
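
    As a rough, hedged check on the quoted angular resolutions, dividing each camera's square field of view by the 1024-pixel detector width comes close to the published figures; the small remaining differences are expected, since this simple division ignores lens distortion. A minimal Python sketch:

      import math

      # Rough check on the quoted angular resolutions: FOV divided by the
      # 1024-pixel detector width. The published values differ slightly because
      # this simple division ignores lens distortion.
      def approx_resolution_mrad(fov_deg, pixels=1024):
          return math.radians(fov_deg) / pixels * 1000.0

      print(f"Navcam: {approx_resolution_mrad(45):.2f} mrad/pixel  (published: 0.82)")
      print(f"Hazcam: {approx_resolution_mrad(124):.2f} mrad/pixel (published: 2.1)")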

  4. Mind of Its Own

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This animation shows the path the Mars Exploration Rover Spirit traveled during its 24-meter (78.7-foot) autonomous drive across the bumpy terrain at Gusev Crater, Mars, on the 39th day, or sol, of its mission. The colored data are from the rover's hazard-avoidance camera and have been reconstructed to show the topography of the land. Red areas indicate extremely hazardous terrain, and green patches denote safe, smooth ground. At the end of its drive, Spirit decided it was safer to back up than to go forward. The rover is now positioned directly in front of its target, a rock dubbed Stone Council.

  5. At Bright Band Inside Victoria Crater

    NASA Technical Reports Server (NTRS)

    2007-01-01

    A layer of light-toned rock exposed inside Victoria Crater in the Meridiani Planum region of Mars appears to mark where the surface was at the time, many millions of years ago, when an impact excavated the crater. NASA's Mars Exploration Rover Opportunity drove to this bright band as the science team's first destination for the rover during investigations inside the crater.

    Opportunity's left front hazard-identification camera took this image just after the rover finished a drive of 2.25 meters (7 feet, 5 inches) during the rover's 1,305th Martian day, or sol (Sept. 25, 2007). The rocks beneath the rover and its extended robotic arm are part of the bright band.

    Victoria Crater has a scalloped shape of alternating alcoves and promontories around the crater's circumference. Opportunity descended into the crater two weeks earlier, within an alcove called 'Duck Bay.' Counterclockwise around the rim, just to the right of the arm in this image, is a promontory called 'Cabo Frio.'

  6. Method and system for providing autonomous control of a platform

    NASA Technical Reports Server (NTRS)

    Seelinger, Michael J. (Inventor); Yoder, John-David (Inventor)

    2012-01-01

    The present application provides a system for enabling instrument placement from distances on the order of five meters, for example, and increases accuracy of the instrument placement relative to visually-specified targets. The system provides precision control of a mobile base of a rover and onboard manipulators (e.g., robotic arms) relative to a visually-specified target using one or more sets of cameras. The system automatically compensates for wheel slippage and kinematic inaccuracy ensuring accurate placement (on the order of 2 mm, for example) of the instrument relative to the target. The system provides the ability for autonomous instrument placement by controlling both the base of the rover and the onboard manipulator using a single set of cameras. To extend the distance from which the placement can be completed to nearly five meters, target information may be transferred from navigation cameras (used for long-range) to front hazard cameras (used for positioning the manipulator).

  7. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    USGS Publications Warehouse

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently shadowed and sunlit polar regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  8. Mars Science Laboratory Engineering Cameras

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.
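
    A hedged illustration of what those pixel scales mean on the ground: the footprint of a single pixel is roughly the range multiplied by the angular pixel scale. The ranges in the Python sketch below are illustrative choices, not values from the abstract.

      # Rough per-pixel ground footprint, footprint ~ range * pixel scale,
      # using the pixel scales quoted in the abstract above. Ranges are illustrative only.
      NAVCAM_MRAD = 0.82
      HAZCAM_MRAD = 2.1

      for name, scale_mrad in (("Navcam", NAVCAM_MRAD), ("Hazcam", HAZCAM_MRAD)):
          for range_m in (2.0, 10.0):
              footprint_mm = range_m * scale_mrad  # mrad/pixel times meters gives mm/pixel
              print(f"{name}: ~{footprint_mm:.1f} mm/pixel at {range_m:.0f} m")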

  9. Window Observational Research Facility (WORF) in the Space Station Processing Facility

    NASA Image and Video Library

    2003-09-08

    KENNEDY SPACE CENTER, FLA. - The Window Observational Research Facility (WORF), seen in the Space Station Processing Facility, was designed and built by the Boeing Co. at NASA’s Marshall Space Flight Center in Huntsville, Ala. WORF will be delivered to the International Space Station and placed in the rack position in front of the Destiny lab window, providing locations for attaching cameras, multi-spectral scanners and other instruments. WORF will support a variety of scientific and commercial experiments in areas of Earth systems and processes, global ecological changes in Earth’s biosphere, lithosphere, hydrosphere and climate system, Earth resources, natural hazards, and education. After installation, it will become a permanent focal point for Earth Science research aboard the space station.

  10. Workers Check Out the Window Observational Research Facility (WORF) in the Space Station Processing Facility

    NASA Image and Video Library

    2003-09-08

    KENNEDY SPACE CENTER, FLA. - Workers in the Space Station Processing Facility check out the Window Observational Research Facility (WORF), designed and built by the Boeing Co. at NASA’s Marshall Space Flight Center in Huntsville, Ala. WORF will be delivered to the International Space Station and placed in the rack position in front of the Destiny lab window, providing locations for attaching cameras, multi-spectral scanners and other instruments. WORF will support a variety of scientific and commercial experiments in areas of Earth systems and processes, global ecological changes in Earth’s biosphere, lithosphere, hydrosphere and climate system, Earth resources, natural hazards, and education. After installation, it will become a permanent focal point for Earth Science research aboard the space station.

  11. Design and verification for front mirror-body structure of on-axis three mirror anastigmatic space camera

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoyong; Guo, Chongling; Hu, Yongli; He, Hongyan

    2017-11-01

    The primary and secondary mirrors of an on-axis three-mirror anastigmatic (TMA) space camera are connected and supported by its front mirror-body structure, which affects both the imaging performance and the stability of the camera. In this paper, a carbon fiber reinforced plastics (CFRP) thin-walled cylinder and titanium alloy connecting rods are used for the front mirror-body opto-mechanical structure of a long-focus, on-axis TMA space camera optical system. The front mirror-body structure was then optimized by finite element analysis (FEA). Its performance was tested by mechanical and vacuum experiments to verify the validity of the structural design.

  12. Mars Rover Studies Soil on Mars

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Both out on the plains of Gusev Crater and in the 'Columbia Hills,' NASA's Mars Exploration Rover Spirit has encountered a thin (approximately 1 millimeter or 0.04 inch thick), light-colored, fine-grained layer of material on top of a dark-colored, coarser layer of soil. In the hills, Spirit stopped to take a closer look at soil compacted by one of the rover's wheels. Spirit took this image with the front hazard-avoidance camera during the rover's 314th martian day, or sol (Nov. 19, 2004).

  13. Greenland outlet glacier dynamics from Extreme Ice Survey (EIS) photogrammetry

    NASA Astrophysics Data System (ADS)

    Hawbecker, P.; Box, J. E.; Balog, J. D.; Ahn, Y.; Benson, R. J.

    2010-12-01

    Time-lapse cameras fill gaps in our observational capabilities: (1) they provide much higher temporal resolution than conventional airborne or satellite remote sensing; (2) while GPS or auto-theodolite observations can provide higher time resolution than photogrammetry, survival of these instruments on the hazardous glacier surface is limited, and maintenance of such systems can be more expensive than maintenance of a terrestrial photogrammetry installation; (3) imagery provides a high spatial density of observations across the glacier surface, higher than is realistically available from GPS or other in-situ observations; and (4) time-lapse cameras provide observations in both Eulerian and Lagrangian frames, while GPS or theodolite targets, going along for a ride on the glacier, provide only Lagrangian data. Photogrammetry techniques are applied to more than a year of images from multiple west Greenland glaciers to determine glacier-front horizontal velocity variations at hourly to seasonal time scales. The presentation includes comparisons between glacier-front velocities and: (1) surface melt rates inferred from surface air temperature and solar radiation observations; (2) major calving events identified from camera images; (3) surface and near-surface ocean temperature; (4) land-fast sea ice breakup; (5) tidal variations; (6) supra-glacial melt lake drainage events observed in daily optical satellite imagery; and (7) GPS data. An Extreme Ice Survey (EIS) time-lapse camera overlooking the Petermann Glacier was installed to image glacier dynamics and to capture the predicted ice 'island' detachment.

  14. Identification of Absorption, Distribution, Metabolism, and Excretion (ADME) Genes Relevant to Steatosis Using a Systems Biology Approach

    EPA Science Inventory

    Ensuring chemical safety and sustainability is a main priority of the U.S. Environmental Protection Agency. This entails efforts on multiple fronts to characterize the potential hazard posed by chemicals currently in use and those to be commercialized in the future. The use of ...

  15. Techniques for development of safety-related software for surgical robots.

    PubMed

    Varley, P

    1999-12-01

    Regulatory bodies require evidence that software controlling potentially hazardous devices is developed to good manufacturing practices. Effective techniques used in other industries assume long timescales and high staffing levels and can be unsuitable for use without adaptation in developing electronic healthcare devices. This paper discusses a set of techniques used in practice to develop software for a particular innovative medical product, an endoscopic camera manipulator. These techniques include identification of potential hazards and tracing their mitigating factors through the project lifecycle.

  16. Cutting the Cord-2

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This animation shows the view from the rear hazard avoidance cameras on the Mars Exploration Rover Spirit as the rover turns 45 degrees clockwise. This maneuver is the first step in a 3-point turn that will rotate the rover 115 degrees to face west. The rover must make this turn before rolling off the lander because airbags are blocking it from exiting from the front lander petal. Before this crucial turn took place, engineers instructed the rover to cut the final cord linking it to the lander. The turn took around 30 minutes to complete.

  17. Location of Spirit's Home

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This image shows where Earth would set on the martian horizon from the perspective of the Mars Exploration Rover Spirit if it were facing northwest atop its lander at Gusev Crater. Earth cannot be seen in this image, but engineers have mapped its location. This image mosaic was taken by the hazard-identification camera onboard Spirit.

  18. Building, north side (original front), detail of original entrance. Camera ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Building, north side (original front), detail of original entrance. Camera facing south - Naval Supply Center, Broadway Complex, Administration Storehouse, 911 West Broadway, San Diego, San Diego County, CA

  19. Spirit Begins Third Martian Year

    NASA Technical Reports Server (NTRS)

    2007-01-01

    As it finished its second Martian year on Mars, NASA's Mars Exploration Rover Spirit was beginning to examine a group of angular rocks given informal names corresponding to peaks in the Colorado Rockies. A Martian year, the amount of time it takes Mars to complete one orbit around the sun, lasts for 687 Earth days. Spirit completed its second Martian year on the rover's 1,338th Martian day, or sol, corresponding to Oct. 8, 2007.

    Two days later, on sol 1,340 (Oct. 10, 2007), Spirit used its front hazard-identification camera to capture this wide-angle view of its robotic arm extended to a rock informally named 'Humboldt Peak.' For the rocks at this site on the southern edge of the 'Home Plate' platform in the inner basin of the Columbia Hills inside Gusev Crater, the rover team decided to use names of Colorado peaks higher than 14,000 feet. The Colorado Rockies team of the National League is the connection to the baseball-theme nomenclature being used for features around Home Plate.

    The tool facing Spirit on the turret at the end of the robotic arm is the Moessbauer spectrometer.

  20. Multipurpose Hyperspectral Imaging System

    NASA Technical Reports Server (NTRS)

    Mao, Chengye; Smith, David; Lanoue, Mark A.; Poole, Gavin H.; Heitschmidt, Jerry; Martinez, Luis; Windham, William A.; Lawrence, Kurt C.; Park, Bosoon

    2005-01-01

    A hyperspectral imaging system of high spectral and spatial resolution that incorporates several innovative features has been developed around a focal plane scanner (U.S. Patent 6,166,373). This feature enables the system to be used for both airborne/spaceborne and laboratory hyperspectral imaging with or without relative movement of the imaging system, and it can be used to scan a target of any size as long as the target can be imaged at the focal plane; for example, automated inspection of food items and identification of single-celled organisms. The spectral resolution of this system is greater than that of prior terrestrial multispectral imaging systems. Moreover, unlike prior high-spectral-resolution airborne and spaceborne hyperspectral imaging systems, this system does not rely on relative movement of the target and the imaging system to sweep an imaging line across a scene. This compact system consists of a front objective mounted on a translation stage with a motorized actuator, and a line-slit imaging spectrograph mounted within a rotary assembly with a rear adaptor to a charge-coupled-device (CCD) camera. Push-broom scanning is carried out by the motorized actuator, which can be controlled either manually by an operator or automatically by a computer to drive the line slit across an image at a focal plane of the front objective. To reduce cost, the system has been designed to integrate as many off-the-shelf components as possible, including the CCD camera and spectrograph. The system has achieved high spectral and spatial resolutions by using a high-quality CCD camera, spectrograph, and front objective lens. Fixtures for attachment of the system to a microscope (U.S. Patent 6,495,818 B1) make it possible to acquire multispectral images of single cells and other microscopic objects.

  1. Spirit Wiggles into Position

    NASA Technical Reports Server (NTRS)

    2005-01-01

    NASA's Mars Exploration Rover Spirit completed a difficult, rocky ascent en route to reaching a captivating rock outcrop nicknamed 'Hillary' at the summit of 'Husband Hill.' At the end of the climb the robotic geologist was tilted almost 30 degrees. To get the rover on more solid footing for deploying the instrument arm, rover drivers told Spirit to wiggle its wheels one at a time. This animation shows Spirit's position before and after completing the wheel wiggle, during which the rover slid approximately 1 centimeter (0.4 inch) downhill. Rover drivers decided this position was too hazardous for deploying the instrument arm and subsequently directed Spirit to a more stable position before conducting analyses with instruments on the rover's arm.

    Spirit took these images with its front hazard-avoidance camera on martian day, or sol, 625 (Oct. 6, 2005).

  2. Real time moving scene holographic camera system

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1973-01-01

    A holographic motion picture camera system producing resolution of front surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts of the system are positioned on the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).

  3. On HMI's Mod-L Sequence: Test and Evaluation

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Baldner, Charles; Bogart, R. S.; Bush, R.; Couvidat, S.; Duvall, Thomas L.; Hoeksema, Jon Todd; Norton, Aimee Ann; Scherrer, Philip H.; Schou, Jesper

    2016-05-01

    The HMI Mod-L sequence can produce full Stokes parameters at a cadence of 90 seconds by combining filtergrams from both cameras, the front camera and the side camera. Within the 90 seconds, the front camera takes two sets of Left and Right Circular Polarization (LCP and RCP) at 6 wavelengths, and the side camera takes one set of Linear Polarizations (I+/-Q and I+/-U) at 6 wavelengths. By combining the two cameras, one can obtain the full Stokes parameters [I,Q,U,V] at 6 wavelengths in 90 seconds. In the nominal Mod-C sequence that HMI currently uses, the front camera takes LCP and RCP at a cadence of 45 seconds, while the side camera observes the full Stokes parameters at a cadence of 135 seconds. Mod-L should be better than Mod-C for providing vector magnetic field data because (1) it increases the cadence of the full Stokes observation, which leads to higher temporal resolution of the vector magnetic field measurement, and (2) it decreases noise in the vector magnetic field data because it uses more filtergrams to produce [I,Q,U,V]. There are two potential issues in Mod-L that need to be addressed: (1) scaling the intensity of the two cameras' filtergrams, and (2) whether the current polarization calibration model, which is built for each camera separately, works for the combined data from both cameras. This presentation addresses these questions and discusses them further.

  4. Spirit Switches on Its X-ray Vision

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This image shows the Mars Exploration Rover Spirit probing its first target rock, Adirondack. At the time this picture was snapped, the rover had begun analyzing the rock with the alpha particle X-ray spectrometer located on its robotic arm. This instrument uses alpha particles and X-rays to determine the elemental composition of martian rocks and soil. The image was taken by the rover's hazard-identification camera.

  5. FIDO prototype Mars rover field trials, Black Rock Summit, Nevada, as test of the ability of robotic mobility systems to conduct field science

    NASA Astrophysics Data System (ADS)

    Arvidson, R. E.; Squyres, S. W.; Baumgartner, E. T.; Schenker, P. S.; Niebur, C. S.; Larsen, K. W.; Seelos, F. P., IV; Snider, N. O.; Jolliff, B. L.

    2002-08-01

    The Field Integration Design and Operations (FIDO) prototype Mars rover was deployed and operated remotely for 2 weeks in May 2000 in the Black Rock Summit area of Nevada. The blind science operation trials were designed to evaluate the extent to which FIDO-class rovers can be used to conduct traverse science and collect samples. FIDO-based instruments included stereo cameras for navigation and imaging, an infrared point spectrometer, a color microscopic imager for characterization of rocks and soils, and a rock drill for core acquisition. Body-mounted 'belly' cameras aided drill deployment, and front and rear hazard cameras enabled terrain hazard avoidance. Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) data, a high spatial resolution IKONOS orbital image, and a suite of descent images were used to provide regional- and local-scale terrain and rock type information, from which hypotheses were developed for testing during operations. The rover visited three sites, traversed 30 m, and acquired 1.3 gigabytes of data. The relatively small traverse distance resulted from a geologically rich site in which materials identified on a regional scale from remote-sensing data could be identified on a local scale using rover-based data. Results demonstrate the synergy of mapping terrain from orbit and during descent using imaging and spectroscopy, followed by a rover mission to test inferences and to make discoveries that can be accomplished only with surface mobility systems.

  6. Opportunity Examining Composition of 'Cook Islands' Outcrop

    NASA Technical Reports Server (NTRS)

    2009-01-01

    This image taken by the front hazard-avoidance camera on NASA's Mars Exploration Rover Opportunity shows the rover's arm extended to examine the composition of a rock using the alpha particle X-ray spectrometer.

    Opportunity took this image during the 1,826th Martian day, or sol, of the rover's Mars-surface mission (March 13, 2009).

    The spectrometer is at a target called 'Penrhyn,' on a rock called 'Cook Islands.' As Opportunity makes its way on a long journey from Victoria Crater toward Endeavour Crater, the team is stopping the drive occasionally on the route to check whether the rover finds a trend in the composition of rock exposures.

  7. Visiting the Scene of Landing

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This image from the Mars Exploration Rover Opportunity's front hazard-avoidance camera focuses on the rock dubbed 'Bounce,' which the rover's airbag-wrapped lander hit upon landing. Though the plains surrounding Opportunity's 'Eagle Crater' landing site are relatively free of any hazards that would have hindered landing, the packaged rover managed to bounce down on one of the only rocks in the vicinity. The rock measures approximately 40 centimeters (about 16 inches) across.

    Bounce -- a rock that differs significantly from the light rocks in the Eagle Crater outcrop -- is currently being investigated by Opportunity. So far, the rover's miniature thermal emission spectrometer has revealed that it is rich in hematite. In the coming sols, a target yet to be chosen on the rock will be examined by the rover's spectrometers, then ground into by the rock abrasion tool. After the grind, the spectrometers will assess the chemical content of the exposed rock.

  8. Architecture of PAU survey camera readout electronics

    NASA Astrophysics Data System (ADS)

    Castilla, Javier; Cardiel-Sas, Laia; De Vicente, Juan; Illa, Joseph; Jimenez, Jorge; Maiorino, Marino; Martinez, Gustavo

    2012-07-01

    PAUCam is a new camera for studying the physics of the accelerating universe. The camera will consist of eighteen 2Kx4K HPK CCDs: sixteen for science and two for guiding. The camera will be installed at the prime focus of the WHT (William Herschel Telescope). In this contribution, the architecture of the readout electronics system is presented. Back-End and Front-End electronics are described. Back-End consists of clock, bias and video processing boards, mounted on Monsoon crates. The Front-End is based on patch panel boards. These boards are plugged outside the camera feed-through panel for signal distribution. Inside the camera, individual preamplifier boards plus kapton cable completes the path to connect to each CCD. The overall signal distribution and grounding scheme is shown in this paper.

  9. A Robust Mechanical Sensing System for Unmanned Sea Surface Vehicles

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric A.; Magnone, Lee J.; Huntsberger, Terrance; Aghazarian, Hrand; Padgett, Curtis W.; Trotz, David C.; Garrett, Michael S.

    2009-01-01

    The need for autonomous navigation and intelligent control of unmanned sea surface vehicles requires a mechanically robust sensing architecture that is watertight, durable, and insensitive to vibration and shock loading. The sensing system developed here comprises four black and white cameras and a single color camera. The cameras are rigidly mounted to a camera bar that can be reconfigured to mount multiple vehicles, and act as both navigational cameras and application cameras. The cameras are housed in watertight casings to protect them and their electronics from moisture and wave splashes. Two of the black and white cameras are positioned to provide lateral vision. They are angled away from the front of the vehicle at horizontal angles to provide ideal fields of view for mapping and autonomous navigation. The other two black and white cameras are positioned at an angle into the color camera's field of view to support vehicle applications. These two cameras provide an overlap, as well as a backup to the front camera. The color camera is positioned directly in the middle of the bar, aimed straight ahead. This system is applicable to any sea-going vehicle, both on Earth and in space.

  10. After a Spirit Drive West of Home Plate

    NASA Image and Video Library

    2009-04-20

    NASA's Mars Exploration Rover Spirit drove 6.98 meters (22.9 feet) southeastward on the 1,871st Martian day, or sol, of the rover's mission on Mars (April 8, 2009). As usual since losing the use of its right-front wheel in 2006, Spirit drove backward, dragging the immobile wheel. The rover used its front hazard-avoidance camera after the drive to capture this view looking back at the ground covered. For scale, the distance between the parallel wheel tracks is about 1 meter (40 inches). The drive added to progress in trekking counterclockwise around a low plateau called "Home Plate." Spirit is driving through a valley on the west side of the plateau. Home Plate is not within this image. The hill on the horizon in the upper right is Husband Hill, the summit of which is about 750 meters (nearly half a mile) to the north of Spirit's position. Following this drive, Spirit experienced difficulties that prevented driving during the subsequent week. http://photojournal.jpl.nasa.gov/catalog/PIA11990

  11. A Real-Time Augmented Reality System to See-Through Cars.

    PubMed

    Rameau, Francois; Ha, Hyowon; Joo, Kyungdon; Choi, Jinsoo; Park, Kibaek; Kweon, In So

    2016-11-01

    One of the most hazardous driving scenarios is the overtaking of a slower vehicle: in this case, the front vehicle (the one being overtaken) can occlude an important part of the field of view of the rear vehicle's driver. This lack of visibility is the most probable cause of accidents in this context. Recent research tends to prove that augmented reality applied to assisted driving can significantly reduce the risk of accidents. In this paper, we present a real-time marker-less system to see through cars. For this purpose, two cars are equipped with cameras and an appropriate wireless communication system. The stereo vision system mounted on the front car is used to create a sparse 3D map of the environment in which the rear car can be localized. Using this inter-car pose estimation, a synthetic image is generated to overcome the occlusion and to create a seamless see-through effect that preserves the structure of the scene.

  12. Infrared detection, recognition and identification of handheld objects

    NASA Astrophysics Data System (ADS)

    Adomeit, Uwe

    2012-10-01

    A main criterion for comparison and selection of thermal imagers for military applications is their nominal range performance. This nominal range performance is calculated for a defined task and standardized target and environmental conditions. The only standardization available to date is STANAG 4347. The target defined there is based on a main battle tank in front view. Because of modified military requirements, this target is no longer up to date. Today, different topics are of interest, especially differentiation between friend and foe and identification of humans. There is no direct way to differentiate between friend and foe in asymmetric scenarios, but one clue can be that someone is carrying a weapon. This clue can be transformed into the observer tasks of detection: a person is carrying or is not carrying an object; recognition: the object is a long / medium / short range weapon or civil equipment; and identification: the object can be named (e.g., AK-47, M-4, G36, RPG7, axe, shovel, etc.). These tasks can be assessed experimentally, and from the results of such an assessment a standard target for handheld objects may be derived. For a first assessment, a human carrying 13 different handheld objects in front of his chest was recorded at four different ranges with an IR dual-band camera. From the recorded data, a perception experiment was prepared. It was conducted with 17 observers in a 13-alternative forced-choice, unlimited-observation-time arrangement. The results of the test, together with Minimum Temperature Difference Perceived measurements of the camera and the temperature difference and critical dimension derived from the recorded imagery, allowed a first standard target to be defined according to the above tasks. This standard target consists of 2.5 / 3.5 / 5 DRI line pairs on target, a critical size of 0.24 m, and a temperature difference of 1 K. The values are preliminary and have to be refined in the future; different aspect angles and different ways of carrying and moving the objects still need to be considered.
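
    A hedged sketch of how such line-pair criteria translate into range for a sampled imager: count how many line pairs fit across the 0.24 m critical dimension at a given range, assuming one resolvable line pair per two pixels. The instantaneous field of view used below is an assumed, illustrative value, not one from the abstract, and real DRI predictions also account for sensitivity, blur, and atmospheric effects.

      # Hedged sketch: rough range at which a sampled imager resolves N line pairs
      # across a target of the quoted 0.24 m critical dimension, assuming (not from
      # the abstract) an instantaneous field of view of 0.1 mrad/pixel and one line
      # pair per two pixels.
      IFOV_RAD = 0.1e-3          # assumed pixel IFOV, rad/pixel (illustrative)
      CRITICAL_SIZE_M = 0.24     # critical dimension quoted in the abstract

      for task, line_pairs in (("detection", 2.5), ("recognition", 3.5), ("identification", 5.0)):
          range_m = CRITICAL_SIZE_M / (2 * line_pairs * IFOV_RAD)
          print(f"{task:>14}: ~{range_m:.0f} m")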

  13. Opportunity Dips in to 'Berry Bowl'

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Scientists are hunting down the recipe for the 'blueberries' they've discovered on Mars. Taken with the Mars Exploration Rover Opportunity's front hazard-avoidance camera on the 45th martian day, or sol, of the rover's mission (March 10, 2004), this image shows the area dubbed 'Berry Bowl,' where many dark and mysterious spherules or 'blueberries' collected in a depression on the surface of a rock. Opportunity is studying the blueberries for clues to their chemical composition with its suite of scientific instruments. 'Berry Bowl' is located within the rock outcrop that lines the inner edge of the crater where the rover landed.

  14. An Example-Based Super-Resolution Algorithm for Selfie Images

    PubMed Central

    William, Jino Hans; Venkateswaran, N.; Narayanan, Srinath; Ramachandran, Sandeep

    2016-01-01

    A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by the front camera with limited pixel resolution, fine details are explicitly missed. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by the rear camera using an example-based super-resolution (SR) algorithm. HR images captured by the rear camera carry significant fine detail and are used as exemplars to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair prior which learns the correspondence between the LR-HR patch pairs and is effectively used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch pairs and preserves image-level information during both the learning and recovery processes. The proposed algorithm is evaluated for its efficiency and effectiveness, both qualitatively and quantitatively, against other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient, as it requires less than 3 seconds to super-resolve an LR selfie, and is effective, as it preserves sharp details without introducing any counterfeit fine details. PMID:27064500
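
    The following Python sketch illustrates the general idea of example-based patch super-resolution under simplifying assumptions: a ridge-regularized linear map is learned from exemplar LR-HR patch pairs and then applied patch by patch to a new LR image. Unlike the published MVR operator, which avoids vectorizing the patches, this sketch vectorizes them for brevity, so it is an illustration of the approach rather than the authors' exact formulation.

      import numpy as np

      # Simplified sketch of example-based patch super-resolution: learn a linear
      # map from LR patches to HR patches on an exemplar image pair, then apply it
      # to a new LR image, averaging the overlapping HR predictions.

      def extract_patches(img, size, step):
          """Collect size x size patches (flattened) on a regular grid."""
          h, w = img.shape
          patches, coords = [], []
          for y in range(0, h - size + 1, step):
              for x in range(0, w - size + 1, step):
                  patches.append(img[y:y + size, x:x + size].ravel())
                  coords.append((y, x))
          return np.asarray(patches), coords

      def learn_operator(lr_img, hr_img, size=5, scale=2, ridge=1e-3):
          """Ridge-regularized least-squares map from LR patches to co-located HR patches."""
          lr_p, coords = extract_patches(lr_img, size, step=1)
          hr_p = np.asarray([hr_img[y * scale:(y + size) * scale,
                                    x * scale:(x + size) * scale].ravel()
                             for y, x in coords])
          A = lr_p.T @ lr_p + ridge * np.eye(lr_p.shape[1])
          return np.linalg.solve(A, lr_p.T @ hr_p)   # shape: (size*size, (size*scale)**2)

      def super_resolve(lr_img, W, size=5, scale=2):
          """Apply the learned map patch by patch and average overlapping HR patches."""
          h, w = lr_img.shape
          out = np.zeros((h * scale, w * scale))
          weight = np.zeros_like(out)
          lr_p, coords = extract_patches(lr_img, size, step=1)
          hr_pred = lr_p @ W
          for patch, (y, x) in zip(hr_pred, coords):
              ys, xs = y * scale, x * scale
              out[ys:ys + size * scale, xs:xs + size * scale] += patch.reshape(size * scale, size * scale)
              weight[ys:ys + size * scale, xs:xs + size * scale] += 1.0
          return out / np.maximum(weight, 1.0)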

  15. Using a plenoptic camera to measure distortions in wavefronts affected by atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Eslami, Mohammed; Wu, Chensheng; Rzasa, John; Davis, Christopher C.

    2012-10-01

    Ideally, as planar wave fronts travel through an imaging system, all rays, or vectors pointing in the direction of the propagation of energy, are parallel, and thus the wave front is focused to a particular point. If the wave front arrives at an imaging system with energy vectors that point in different directions, each part of the wave front will be focused at a slightly different point on the sensor plane and result in a distorted image. The Hartmann test, which involves the insertion of a series of pinholes between the imaging system and the sensor plane, was developed to sample the wavefront at different locations and measure the distortion angles at different points in the wave front. An adaptive optic system, such as a deformable mirror, is then used to correct for these distortions and allow the planar wave front to focus at the point desired on the sensor plane, thereby correcting the distorted image. The apertures of a pinhole array limit the amount of light that reaches the sensor plane. By replacing the pinholes with a microlens array, each bundle of rays is focused to brighten the image. Microlens arrays are making their way into newer imaging technologies, such as "light field" or "plenoptic" cameras. In these cameras, the microlens array is used to recover the ray information of the incoming light by using post-processing techniques to focus on objects at different depths. The goal of this paper is to demonstrate the use of these plenoptic cameras to recover the distortions in wavefronts. CODE-V simulations that take advantage of the microlens array within the plenoptic camera show that its performance can provide more information than a Shack-Hartmann sensor. Using the microlens array to retrieve the ray information and then backstepping through the imaging system provides information about distortions in the arriving wavefront.
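
    The slope estimate underlying both the pinhole and microlens approaches described above can be sketched in a few lines: the local wavefront tilt over one subaperture is approximately the displacement of that lenslet's focused spot from its reference position, divided by the lenslet focal length. The names and numbers in the Python sketch below are illustrative assumptions, not parameters from the paper.

      import numpy as np

      # Hedged sketch of Shack-Hartmann-style slope estimation: the local wavefront
      # tilt over one lenslet is approximately the spot-centroid displacement from
      # its reference position divided by the lenslet focal length.

      def spot_centroid(subimage):
          """Intensity-weighted centroid (row, col) of one lenslet's sub-image."""
          total = subimage.sum()
          rows, cols = np.indices(subimage.shape)
          return np.array([(rows * subimage).sum(), (cols * subimage).sum()]) / total

      def local_slopes(subimage, reference_centroid, pixel_pitch_m, focal_length_m):
          """Local wavefront slope (radians) in y and x over one subaperture."""
          shift_px = spot_centroid(subimage) - reference_centroid
          return shift_px * pixel_pitch_m / focal_length_m

      # Illustrative use: a synthetic 16x16 sub-image whose spot is shifted ~2 pixels in y.
      yy, xx = np.mgrid[0:16, 0:16]
      sub = np.exp(-(((yy - 9.5) ** 2 + (xx - 7.5) ** 2) / 4.0))
      slopes = local_slopes(sub, reference_centroid=np.array([7.5, 7.5]),
                            pixel_pitch_m=5e-6, focal_length_m=5e-3)
      print(slopes)  # roughly [2e-3, 0] rad: a 2-pixel shift in y, none in x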

  16. Dig Hazard Assessment Using a Stereo Pair of Cameras

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Trebi-Ollennu, Ashitey

    2012-01-01

    This software evaluates the terrain within reach of a lander's robotic arm for dig hazards using a stereo pair of cameras that are part of the lander's sensor system. A relative level of risk is calculated for a set of dig sectors. There are two versions of this software; one is designed to run onboard a lander as part of the flight software, and the other runs on a PC under Linux as a ground tool that produces the same results generated on the lander, given stereo images acquired by the lander and downlinked to Earth. Onboard dig hazard assessment is accomplished by executing a workspace panorama command sequence. This sequence acquires a set of stereo pairs of images of the terrain the arm can reach, generates a set of candidate dig sectors, and assesses the dig hazard of each candidate dig sector. The 3D perimeter points of candidate dig sectors are generated using configurable parameters. A 3D reconstruction of the terrain in front of the lander is generated using a set of stereo images acquired from the mast cameras. The 3D reconstruction is used to evaluate the dig goodness of each candidate dig sector based on a set of eight metrics. The eight metrics are: 1. The maximum change in elevation in each sector, 2. The elevation standard deviation in each sector, 3. The forward tilt of each sector with respect to the payload frame, 4. The side tilt of each sector with respect to the payload frame, 5. The maximum size of missing data regions in each sector, 6. The percentage of a sector that has missing data, 7. The roughness of each sector, and 8. Monochrome intensity standard deviation of each sector. Each of the eight metrics forms a goodness image layer where the goodness value of each sector ranges from 0 to 1. Goodness values of 0 and 1 correspond to high and low risk, respectively. For each dig sector, the eight goodness values are merged by selecting the lowest one. Including the merged goodness image layer, there are nine goodness image layers for each stereo pair of mast images.
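
    The merge rule described above (eight goodness layers, each scaled from 0 for high risk to 1 for low risk, combined by keeping the lowest value per sector) can be mirrored in a few lines. The Python sketch below uses random placeholder layers purely for illustration; in the actual software the layers come from the stereo terrain reconstruction.

      import numpy as np

      # Sketch of the merge rule described above: eight per-sector goodness layers,
      # each scaled 0 (high risk) to 1 (low risk), combined by keeping the lowest
      # value for every sector. Layer contents here are random placeholders.
      rng = np.random.default_rng(0)
      n_sectors = 12
      layers = {
          name: rng.uniform(0.0, 1.0, n_sectors)
          for name in ("elev_change", "elev_stddev", "forward_tilt", "side_tilt",
                       "max_gap", "missing_pct", "roughness", "intensity_stddev")
      }

      merged = np.minimum.reduce(list(layers.values()))   # worst metric dominates each sector
      best_sector = int(np.argmax(merged))                # candidate dig sector with least risk
      print(best_sector, merged[best_sector])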

  17. Calibration Target for Curiosity Arm Camera

    NASA Image and Video Library

    2012-09-10

    This view of the calibration target for the MAHLI camera aboard NASA's Mars rover Curiosity combines two images taken by that camera on Sept. 9, 2012. Part of Curiosity's left-front and center wheels and a patch of Martian ground are also visible.

  18. Stereo depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Vonsydow, Marika

    1988-01-01

    In teleoperation, a typical application of stereo vision is to view a work space located a short distance (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors on the order of 2 cm were measured. A geometric analysis was made of the distortion of the fronto-parallel plane of divergence for stereo TV viewing, and the results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration that gives high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolutions but cause greater depth distortions. Thus, with larger intercamera distances, operators will make greater depth errors (because of the greater distortions) but will be more certain that they are not errors (because of the higher resolution).
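
    The trade-off reported above can be illustrated with the standard triangulation relation for a stereo pair. The Python sketch below uses the simpler parallel-camera geometry (depth Z = f*b/d), so it only approximates the converged-camera configuration studied in the paper; the numbers are illustrative, not the paper's data.

      def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
          """Classic parallel-camera relation: Z = f * b / d."""
          return focal_length_px * baseline_m / disparity_px

      def depth_resolution(focal_length_px, baseline_m, depth_m, disparity_step_px=1.0):
          """Depth change caused by a one-pixel disparity change at a given depth.

          dZ ~= Z**2 * delta_d / (f * b): a larger baseline b shrinks this error,
          consistent with the higher depth resolution reported above.
          """
          return depth_m ** 2 * disparity_step_px / (focal_length_px * baseline_m)

      # At 1.4 m with f = 800 px, doubling the baseline halves the per-pixel depth error.
      for b in (0.1, 0.2):
          print(b, depth_resolution(800.0, b, 1.4))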

  19. Dynamics of glacier calving at the ungrounded margin of Helheim Glacier, southeast Greenland

    PubMed Central

    Selmes, Nick; James, Timothy D.; Edwards, Stuart; Martin, Ian; O'Farrell, Timothy; Aspey, Robin; Rutt, Ian; Nettles, Meredith; Baugé, Tim

    2015-01-01

    Abstract During summer 2013 we installed a network of 19 GPS nodes at the ungrounded margin of Helheim Glacier in southeast Greenland together with three cameras to study iceberg calving mechanisms. The network collected data at rates up to every 7 s and was designed to be robust to loss of nodes as the glacier calved. Data collection covered 55 days, and many nodes survived in locations right at the glacier front to the time of iceberg calving. The observations included a number of significant calving events, and as a consequence the glacier retreated ~1.5 km. The data provide real‐time, high‐frequency observations in unprecedented proximity to the calving front. The glacier calved by a process of buoyancy‐force‐induced crevassing in which the ice downglacier of flexion zones rotates upward because it is out of buoyant equilibrium. Calving then occurs back to the flexion zone. This calving process provides a compelling and complete explanation for the data. Tracking of oblique camera images allows identification and characterisation of the flexion zones and their propagation downglacier. Interpretation of the GPS data and camera data in combination allows us to place constraints on the height of the basal cavity that forms beneath the rotating ice downglacier of the flexion zone before calving. The flexion zones are probably formed by the exploitation of basal crevasses, and theoretical considerations suggest that their propagation is strongly enhanced when the glacier base is deeper than buoyant equilibrium. Thus, this calving mechanism is likely to dominate whenever such geometry occurs and is of increasing importance in Greenland. PMID:27570721

  20. Spirit Begins Drive Around Home Plate

    NASA Technical Reports Server (NTRS)

    2009-01-01

    The hazard avoidance camera on the front of NASA's Mars Exploration Rover Spirit took this image after a drive by Spirit on the 1,829th Martian day, or sol, of Spirit's mission on the surface of Mars (Feb. 24, 2009).

    On Sol 1829, Spirit drove 6.29 meters (21 feet) northwestward, away from the northern edge of the low plateau called 'Home Plate.' The track dug by the dragged right-front wheel as the rover drove backward is visible in this image, receding toward the southeast. Rock layers of the northern slope of Home Plate are visible in the upper right portion of the image.

    In sols prior to 1829, the rover team had been trying to maneuver Spirit to climb onto the northern edge of Home Plate, ready to drive southward across the top of the plateau toward science destinations south of Home Plate. The Sol 1829 drive was the first move of a revised strategy to circle at least partway around Home Plate on the trek toward the sites south of the plateau.

  1. Opportunity Studies Bait in Shark's Cage

    NASA Technical Reports Server (NTRS)

    2004-01-01

    In its 49th sol on Mars, NASA's Opportunity had nearly concluded its scientific examination of the extreme southwestern end of the outcrop in Meridiani Planum. In the 'Shark's Cage' area of the neighborhood called 'Shoemaker's Patio,' featured in this image from the front hazard avoidance camera, Opportunity deployed its arm to study the features called 'Shark's Tooth,' 'Shark Pellets,' and 'Lamination.' 'Shark's Tooth' is a piece of the unusual red rind that appears to fill cracks in the outcrop. This rind may be some kind of chemical alteration of the rocks. 'Shark Pellets' is an area of soil that was under investigation as part of the crater soil survey. 'Lamination' is a target with very thin layers that resemble uniform pages in a book, an indication of how the sediments were deposited. A final experiment in this area will be attempted on sol 51. Opportunity's front left wheel will 'scuff' the rock called 'Carousel.' 'Scuffing' involves scraping the rock with one wheel while holding all the others still. This experiment essentially turns the rover wheels into tools, to try and determine the hardness of the target rock.

  2. Adding polarimetric imaging to depth map using improved light field camera 2.0 structure

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanzhe; Yang, Yi; Du, Shaojun; Cao, Yu

    2017-06-01

    Polarization imaging plays an important role in various fields, especially skylight navigation and target identification, where the imaging system is required to have high resolution, broad band, and a single-lens structure. This paper describes such an imaging system based on a light field 2.0 camera structure, which can calculate the polarization state and the depth from a reference plane for every object point within a single shot. This structure, comprising a modified main lens, a multi-quadrant Polaroid, a honeycomb-like micro lens array, and a high-resolution CCD, is equivalent to an "eye array" with three or more polarization-imaging "glasses" in front of each "eye." Depth can therefore be calculated by matching the relative offset of corresponding patches on neighboring "eyes," while the polarization state is obtained from their relative intensity differences, and the two resolutions are approximately equal to each other. An application to navigation under clear sky shows that this method has high accuracy and strong robustness.
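
    For the polarization part of the measurement, a common way to turn relative intensities behind analyzers at several orientations into a polarization state is the linear Stokes estimate sketched below. The paper only states that three or more analyzer orientations sit in front of each "eye," so the 0/45/90/135 degree layout and the function names here are assumptions for illustration.

      import numpy as np

      def linear_stokes_from_intensities(i0, i45, i90, i135):
          """Estimate linear Stokes parameters from four analyzer orientations (degrees)."""
          s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
          s1 = i0 - i90                        # horizontal vs. vertical component
          s2 = i45 - i135                      # +45 deg vs. -45 deg component
          dolp = np.hypot(s1, s2) / s0         # degree of linear polarization
          aop = 0.5 * np.arctan2(s2, s1)       # angle of polarization (radians)
          return s0, s1, s2, dolp, aop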

  3. Attitude identification for SCOLE using two infrared cameras

    NASA Technical Reports Server (NTRS)

    Shenhar, Joram

    1991-01-01

    An algorithm is presented that incorporates real time data from two infrared cameras and computes the attitude parameters of the Spacecraft COntrol Lab Experiment (SCOLE), a lab apparatus representing an offset feed antenna attached to the Space Shuttle by a flexible mast. The algorithm uses camera position data of three miniature light emitting diodes (LEDs), mounted on the SCOLE platform, permitting arbitrary camera placement and an on-line attitude extraction. The continuous nature of the algorithm allows identification of the placement of the two cameras with respect to some initial position of the three reference LEDs, followed by on-line six degrees of freedom attitude tracking, regardless of the attitude time history. A description is provided of the algorithm in the camera identification mode as well as the mode of target tracking. Experimental data from a reduced size SCOLE-like lab model, reflecting the performance of the camera identification and the tracking processes, are presented. Computer code for camera placement identification and SCOLE attitude tracking is listed.
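
    One standard way to turn tracked 3D positions of the three reference LEDs into an attitude estimate is the orthogonal Procrustes (Kabsch) fit sketched below, which finds the best rotation and translation mapping the LEDs' known body-frame coordinates onto their measured lab-frame coordinates. The abstract does not give the algorithm actually used, so this Python sketch is illustrative only.

      import numpy as np

      def attitude_from_points(body_pts, lab_pts):
          """Best-fit rotation and translation mapping body-frame LED positions
          to lab-frame positions, via SVD (Kabsch algorithm).

          body_pts, lab_pts : (3, 3) arrays, one LED per row.
          """
          body_pts = np.asarray(body_pts, dtype=float)
          lab_pts = np.asarray(lab_pts, dtype=float)
          b_c = body_pts - body_pts.mean(axis=0)
          l_c = lab_pts - lab_pts.mean(axis=0)
          u, _, vt = np.linalg.svd(b_c.T @ l_c)
          d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
          rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T       # rot @ body ~= lab (up to translation)
          trans = lab_pts.mean(axis=0) - rot @ body_pts.mean(axis=0)
          return rot, trans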

  4. Reactive composite compositions and mat barriers

    DOEpatents

    Langton, Christine A.; Narasimhan, Rajendran; Karraker, David G.

    2001-01-01

    A hazardous material storage area has a reactive multi-layer composite mat which lines an opening into which a reactive backfill and hazardous material are placed. A water-inhibiting cap may cover the hazardous material storage area. The reactive multi-layer composite mat has a backing onto which is placed an active layer which will neutralize or stabilize hazardous waste and a fronting layer so that the active layer is between the fronting and backing layers. The reactive backfill has a reactive agent which can stabilize or neutralize hazardous material and inhibit the movement of the hazardous material through the hazardous material storage area.

  5. Front-end multiplexing—applied to SQUID multiplexing: Athena X-IFU and QUBIC experiments

    NASA Astrophysics Data System (ADS)

    Prele, D.

    2015-08-01

    As we have seen in the digital camera market, with sensor resolution increasing to "megapixels," all scientific and high-tech imagers (whatever the wavelength, from the radio to the X-ray range) also tend to increase their pixel count, so the constraints on front-end signal transmission increase too. An almost unavoidable solution to simplify the integration of large arrays of pixels is front-end multiplexing. Moreover, "simple" and "efficient" techniques allow the integration of readout multiplexers in the focal plane itself. For instance, CCD (Charge Coupled Device) technology has boosted the number of pixels in digital cameras; indeed, it is a planar technology that integrates both the sensors and a front-end multiplexed readout. In this context, front-end multiplexing techniques are discussed for a better understanding of their advantages and their limits. Finally, the cases of astronomical instruments in the millimeter and X-ray ranges using SQUIDs (Superconducting QUantum Interference Devices) are described.

  6. 61. SOUTHEAST FRONT ELEVATION OF BUILDING 372 (HAZARDOUS STORAGE) IN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    61. SOUTHEAST FRONT ELEVATION OF BUILDING 372 (HAZARDOUS STORAGE) IN BASE SPARES AREA. - Loring Air Force Base, Weapons Storage Area, Northeastern corner of base at northern end of Maine Road, Limestone, Aroostook County, ME

  7. View of Crew Commander Henry Hartsfield Jr. loading film into IMAX camera

    NASA Image and Video Library

    1984-09-08

    41D-11-004 (8 September 1984) --- View of Crew Commander Henry Hartsfield Jr. loading film into the IMAX camera during the 41-D mission. The camera is floating in front of the middeck lockers. Above it is a sticker of the University of Kansas mascot, the Jayhawk.

  8. Analysis of edge density fluctuation measured by trial KSTAR beam emission spectroscopy systema)

    NASA Astrophysics Data System (ADS)

    Nam, Y. U.; Zoletnik, S.; Lampert, M.; Kovácsik, Á.

    2012-10-01

    A beam emission spectroscopy (BES) system based on a direct-imaging avalanche photodiode (APD) camera has been designed for the Korea Superconducting Tokamak Advanced Research (KSTAR) device, and a trial system has been constructed and installed to evaluate the feasibility of the design. The system contains two cameras: an APD camera for the BES measurement and a fast visible camera for position calibration. Two pneumatically actuated mirrors are positioned at the front and rear of the lens optics; the front mirror can switch the measurement between the edge and core regions of the plasma, and the rear mirror can switch between the APD and the visible camera. All systems worked properly, and the measured photon flux was reasonable, as expected from the simulation. While the measurement data from the trial system were limited, they revealed some interesting characteristics of KSTAR plasma, suggesting future research with the fully installed BES system. The analysis results and the development plan are presented in this paper.

  9. The stellar and solar tracking system of the Geneva Observatory gondola

    NASA Technical Reports Server (NTRS)

    Huguenin, D.

    1974-01-01

    Sun and star trackers have been added to the latest version of the Geneva Observatory gondola. They perform an image motion compensation with an accuracy of plus or minus 1 minute of arc. The structure is held in the vertical position by gravity; the azimuth is controlled by a torque motor in the suspension bearing using solar or geomagnetic references. The image motion compensation is performed by a flat mirror, located in front of the telescope, controlled by pitch and yaw servo-loops. Offset pointing is possible within the solar disc and in a 3 degree by 3 degree stellar field. A T.V. camera facilitates the star identification and acquisition.

  10. Hungry for Rocks

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This image from the Mars Exploration Rover Spirit hazard identification camera shows the rover's perspective just before its first post-egress drive on Mars. On Sunday, the 15th martian day, or sol, of Spirit's journey, engineers drove Spirit approximately 3 meters (10 feet) toward its first rock target, a football-sized, mountain-shaped rock called Adirondack (not pictured). In the foreground of this image are 'Sashimi' and 'Sushi' - two rocks that scientists considered investigating first. Ultimately, these rocks were not chosen because their rough and dusty surfaces are ill-suited for grinding.

  11. Ready to Rock and Roll

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This image from the Mars Exploration Rover Spirit hazard-identification camera shows the rover's perspective just before its first post-egress drive on Mars. On Sunday, the 15th martian day, or sol, of Spirit's journey, engineers drove Spirit approximately 3 meters (10 feet) toward its first rock target, a football-sized, mountain-shaped rock called Adirondack (not pictured). In the foreground of this image are 'Sashimi' and 'Sushi' - two rocks that scientists considered investigating first. Ultimately, these rocks were not chosen because their rough and dusty surfaces are ill-suited for grinding.

  12. Rover Takes a Sunday Drive

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This animation, made with images from the Mars Exploration Rover Spirit hazard-identification camera, shows the rover's perspective of its first post-egress drive on Mars Sunday. Engineers drove Spirit approximately 3 meters (10 feet) toward its first rock target, a football-sized, mountain-shaped rock called Adirondack. The drive took approximately 30 minutes to complete, including time stopped to take images. Spirit first made a series of arcing turns totaling approximately 1 meter (3 feet). It then turned in place and made a series of short, straightforward movements totaling approximately 2 meters (6.5 feet).

  13. SU-G-IeP4-09: Method of Human Eye Aberration Measurement Using Plenoptic Camera Over Large Field of View

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lv, Yang; Wang, Ruixing; Ma, Haotong

    Purpose: Measurement based on a Shack-Hartmann wave-front sensor (WFS), which obtains both high- and low-order wave-front aberrations simultaneously and accurately, has been applied to the detection of human eye aberrations in recent years. However, its application is limited by a small field of view (FOV): slight eye movement causes the optical beacon image to leave the lenslet array, which results in uncertain detection error. To overcome the difficulty of precise eye location, the capability of detecting eye wave-front aberrations accurately and simultaneously over a FOV much larger than that of a simple single-conjugate Hartmann WFS is needed. Methods: The plenoptic camera's lenslet array subdivides the aperture light field in the spatial-frequency domain and captures the 4-D light-field information. Data recorded by plenoptic cameras can be used to extract the wave-front phases associated with the eye's aberrations. A corresponding theoretical model and simulation system are built up in this article to discuss wave-front measurement performance when a plenoptic camera is used as the wave-front sensor. Results: The simulation results indicate that the plenoptic wave-front method can obtain both high- and low-order eye wave-front aberrations with the same accuracy as a conventional system in single-visual-angle detection, and over a FOV much larger than that of a simple single-conjugate Hartmann system. Meanwhile, the simulation results show that detection of eye wave-front aberrations at different visual angles can be achieved effectively and simultaneously by the plenoptic method, with both point and extended optical beacons from the eye. Conclusion: The plenoptic wave-front method is feasible for eye wave-front aberration detection. With a larger FOV, the method can effectively reduce the detection error caused by imprecise eye location and can simplify the eye wave-front detection system compared with one based on a Shack-Hartmann WFS. A unique advantage of the plenoptic method lies in obtaining wave fronts at different visual angles simultaneously, which provides an approach to building up a 3-D model of the eye refractor tomographically. Funded by the Key Laboratory of High Power Laser and Physics, CAS; Research Project of National University of Defense Technology No. JC13-07-01; National Natural Science Foundation of China No. 61205144.

  14. Creating and Using a Camera Obscura

    ERIC Educational Resources Information Center

    Quinnell, Justin

    2012-01-01

    The camera obscura (Latin for "darkened room") is the earliest optical device and goes back over 2500 years. The small pinhole or lens at the front of the room allows light to enter and this is then "projected" onto a screen inside the room. This differs from a camera, which projects its image onto light-sensitive material.…

  15. Development of CCD Cameras for Soft X-ray Imaging at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teruya, A. T.; Palmer, N. E.; Schneider, M. B.

    2013-09-01

    The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the "soft" channel and 3-5 keV for the "hard" channels. The original cameras utilize a large format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.

  16. Schwarzschild camera

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The fabrication procedures for the primary and secondary mirrors for a Schwarzschild camera are summarized. The achieved wave front for the telescope was 1/2 wave at .63 microns. Interferograms of the two mirrors as a system are given and the mounting procedures are outlined.

  17. Hazardous materials emergency response mobile robot

    NASA Technical Reports Server (NTRS)

    Stone, Henry W. (Inventor); Lloyd, James (Inventor); Alahuzos, George (Inventor)

    1992-01-01

    A simple or unsophisticated robot incapable of effecting straight-line motion at the end of its arm inserts a key held in its end effector or hand into a door lock with nearly straight-line motion by gently thrusting its back heels downwardly so that it pivots forwardly on its front toes while holding its arm stationary. The relatively slight arc traveled by the robot's hand is compensated by a compliant tool with which the robot hand grips the door key. A visible beam is projected through the axis of the hand or gripper on the robot arm end at an angle to the general direction in which the robot thrusts the gripper forward. As the robot hand approaches a target surface, a video camera on the robot wrist watches the beam spot on the target surface fall from a height proportional to the distance between the robot hand and the target surface until the beam spot is nearly aligned with the top of the robot hand. Holes in the front face of the hand are connected through internal passages inside the arm to an on-board chemical sensor. Full rotation of the hand or gripper about the robot arm's wrist is made possible by slip rings in the wrist which permit passage of the gases taken in through the nose holes in the front of the hand through the wrist regardless of the rotational orientation of the wrist.

  18. Status of the photomultiplier-based FlashCam camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Pühlhofer, G.; Bauer, C.; Eisenkolb, F.; Florin, D.; Föhr, C.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Koziol, J.; Lahmann, R.; Manalaysay, A.; Marszalek, A.; Rajda, P. J.; Reimer, O.; Romaszkan, W.; Rupinski, M.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Weitzel, Q.; Winiarski, K.; Zietara, K.

    2014-07-01

    The FlashCam project is preparing a camera prototype around a fully digital FADC-based readout system, for the medium sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The FlashCam design is the first fully digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components for digitization and triggering, and a high performance camera server as back end. It provides the option to easily implement different types of trigger algorithms as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. The readout of the front end modules into the camera server is Ethernet-based using standard Ethernet switches and a custom, raw Ethernet protocol. In the current implementation of the system, data transfer and back end processing rates of 3.8 GB/s and 2.4 GB/s have been achieved, respectively. Together with the dead-time-free front end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, high voltage-, control-, and monitoring systems, is a self-contained unit, mechanically detached from the front end modules. It interfaces to the digital readout system via analogue signal transmission. The horizontal integration of FlashCam is expected not only to be more cost efficient, it also allows PDPs with different types of photon detectors to be adapted to the FlashCam readout system. By now, a 144-pixel "mini-camera" setup, fully equipped with photomultipliers, PDP electronics, and digitization/trigger electronics, has been realized and extensively tested. Preparations for full-scale 1764-pixel camera mechanics and a cooling system are ongoing. The paper describes the status of the project.

  19. Imaging Dot Patterns for Measuring Gossamer Space Structures

    NASA Technical Reports Server (NTRS)

    Dorrington, A. A.; Danehy, P. M.; Jones, T. W.; Pappa, R. S.; Connell, J. W.

    2005-01-01

    A paper describes a photogrammetric method for measuring the changing shape of a gossamer (membrane) structure deployed in outer space. Such a structure is typified by a solar sail comprising a transparent polymeric membrane aluminized on its Sun-facing side and coated black on the opposite side. Unlike some prior photogrammetric methods, this method does not require an artificial light source or the attachment of retroreflectors to the gossamer structure. In a basic version of the method, the membrane contains a fluorescent dye, and the front and back coats are removed in matching patterns of dots. The dye in the dots absorbs some sunlight and fluoresces at a longer wavelength in all directions, thereby enabling acquisition of high-contrast images from almost any viewing angle. The fluorescent dots are observed by one or more electronic camera(s) on the Sun side, the shade side, or both sides. Filters that pass the fluorescent light and suppress most of the solar spectrum are placed in front of the camera(s) to increase the contrast of the dots against the background. The dot image(s) in the camera(s) are digitized, then processed by use of commercially available photogrammetric software.

  20. Applications of research from the U.S. Geological Survey program, assessment of regional earthquake hazards and risk along the Wasatch Front, Utah

    USGS Publications Warehouse

    Gori, Paula L.

    1993-01-01

    INTERACTIVE WORKSHOPS: ESSENTIAL ELEMENTS OF THE EARTHQUAKE HAZARDS RESEARCH AND REDUCTION PROGRAM IN THE WASATCH FRONT, UTAH: Interactive workshops provided the forum and stimulus necessary to foster collaboration among the participants in the multidisciplinary, 5-yr program of earthquake hazards reduction in the Wasatch Front, Utah. The workshop process validated well-documented social science theories on the importance of interpersonal interaction, including interaction between researchers and users of research to increase the probability that research will be relevant to the user's needs and, therefore, more readily used.

    REDUCING EARTHQUAKE HAZARDS IN UTAH: THE CRUCIAL CONNECTION BETWEEN RESEARCHERS AND PRACTITIONERS: Complex scientific and engineering studies must be translated for and transferred to nontechnical personnel for use in reducing earthquake hazards in Utah. The three elements needed for effective translation, likelihood of occurrence, location, and severity of potential hazards, and the three elements needed for effective transfer, delivery, assistance, and encouragement, are described and illustrated for Utah. The importance of evaluating and revising earthquake hazard reduction programs and their components is emphasized. More than 30 evaluations of various natural hazard reduction programs and techniques are introduced. This report was prepared for research managers, funding sources, and evaluators of the Utah earthquake hazard reduction program who are concerned about effectiveness. An overview of the Utah program is provided for those researchers, engineers, planners, and decisionmakers, both public and private, who are committed to reducing human casualties, property damage, and interruptions of socioeconomic systems.

    PUBLIC PERCEPTIONS OF THE IMPLEMENTATION OF EARTHQUAKE MITIGATION POLICIES ALONG THE WASATCH FRONT IN UTAH: The earthquake hazard potential along the Wasatch Front in Utah has been well defined by a number of scientific and engineering studies. Translated earthquake hazard maps have also been developed to identify areas that are particularly vulnerable to various causes of damage such as ground shaking, surface rupturing, and liquefaction. The implementation of earthquake hazard reduction plans is now under way in various communities in Utah. The results of a survey presented in this paper indicate that technical public officials (planners and building officials) have an understanding of the earthquake hazards and how to mitigate the risks. Although the survey shows that the general public has a slightly lower concern about the potential for economic losses, they recognize the potential problems and can support a number of earthquake mitigation measures. The study suggests that many community groups along the Wasatch Front, including volunteer groups, business groups, and elected and appointed officials, are ready for action-oriented educational programs. These programs could lead to a significant reduction in the risks associated with earthquake hazards.

    A DATA BASE DESIGNED FOR URBAN SEISMIC HAZARDS STUDIES: A computerized data base has been designed for use in urban seismic hazards studies conducted by the U.S. Geological Survey. The design includes file structures for 16 linked data sets, which contain geological, geophysical, and seismological data used in preparing relative ground response maps of large urban areas. The data base is organized along relational data base principles. A prototype urban hazards data base has been created for evaluation in two urban areas currently under investigation: the Wasatch Front region of Utah and the Puget Sound area of Washington. The initial implementation of the urban hazards data base was accomplished on a microcomputer using dBASE III Plus software and transferred to minicomputers and a work station.

    A MAPPING OF GROUND-SHAKING INTENSITIES FOR SALT LAKE COUNTY, UTAH: This paper documents the development of maps showing a

  1. Failure Waves in Cylindrical Glass Bars

    NASA Astrophysics Data System (ADS)

    Cazamias, James U.; Bless, Stephan J.; Marder, Michael P.

    1997-07-01

    Failure waves, a propagating front separating virgin and comminuted material, have been receiving a fair amount of attention over the last several years. While most researchers have studied failure waves in plate-impact geometries, we have conducted a series of experiments on Pyrex bars. In this paper, we present two types of photographic data from a series of tests. A streak camera was used to determine velocities of the failure front as a function of impact stress, and a Polaroid camera with a flash lamp provided detailed pictures of the actual event. Attempts were made to observe failure waves in amorphous quartz and acrylic.

  2. Combined approach to the Hubble Space Telescope wave-front distortion analysis

    NASA Astrophysics Data System (ADS)

    Roddier, Claude; Roddier, Francois

    1993-06-01

    Stellar images taken by the HST at various focus positions have been analyzed to estimate wave-front distortion. Rather than using a single algorithm, we found that better results were obtained by combining the advantages of various algorithms. For the planetary camera, the most accurate algorithms consistently gave a spherical aberration of -0.290-micron rms with a maximum deviation of 0.005 micron. Evidence was found that the spherical aberration is essentially produced by the primary mirror. The illumination in the telescope pupil plane was reconstructed and evidence was found for a slight camera misalignment.

  3. Surveillance Using Multiple Unmanned Aerial Vehicles

    DTIC Science & Technology

    2009-03-01

    The BATCAM wingspan was 21" vs. Jodeh's 9.1 ft, the BATCAM's propulsion was electric vs. Jodeh's gas engine, and its cameras were body-fixed vs. gimballed. ...

    Table 3.1: BATCAM Camera FOV Angles
      Angle              Front Camera    Side Camera
      Depression angle   49°             39°
      Horizontal FOV     48°             48°
      Vertical FOV       40°             40°

    ... by a quiet electric motor. The batteries can be recharged with a car cigarette lighter in less than an hour. Assembly of the wing airframe takes less than a minute, and ...

  4. Automatic source camera identification using the intrinsic lens radial distortion

    NASA Astrophysics Data System (ADS)

    Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.

    2006-11-01

    Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of a source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
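
    A minimal version of the classification step described above (training a support vector machine on per-image lens-distortion parameters) might look like the Python sketch below. The feature files, their two-parameter layout, and the SVM settings are assumptions for illustration; the paper's aberration-measurement front end is not reproduced here.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      # Assumed inputs (hypothetical files): one row of radial-distortion features
      # (e.g., k1, k2) per image, and the integer label of the camera that took it.
      features = np.load("distortion_features.npy")   # shape (n_images, 2)
      labels = np.load("camera_labels.npy")           # shape (n_images,)

      x_train, x_test, y_train, y_test = train_test_split(
          features, labels, test_size=0.3, random_state=0, stratify=labels)

      clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # illustrative hyperparameters
      clf.fit(x_train, y_train)
      print("identification accuracy:", clf.score(x_test, y_test))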

  5. Spirit Ascent Movie, Rover's-Eye View

    NASA Technical Reports Server (NTRS)

    2005-01-01

    A movie assembled from frames taken by the rear hazard-identification camera on NASA's Mars Exploration Rover Spirit shows the last few days of the rover's ascent to the crest of 'Husband Hill' inside Mars' Gusev Crater. The rover was going in reverse. Rover planners often drive Spirit backwards to keep wheel lubrication well distributed. The images in this clip span a timeframe from Spirit's 573rd martian day, or sol (Aug. 13, 2005), to sol 582 (Aug. 22, 2005), the day after the rover reached the crest. During that period, Spirit drove 136 meters (446 feet).

  6. Pancam Imaging of the Mars Exploration Rover Landing Sites in Gusev Crater and Meridiani Planum

    NASA Technical Reports Server (NTRS)

    Bell, J. F., III; Squyres, S. W.; Arvidson, R. E.; Arneson, H. M.; Bass, D.; Cabrol, N.; Calvin, W.; Farmer, J.; Farrand, W. H.

    2004-01-01

    The Mars Exploration Rovers carry four Panoramic Camera (Pancam) instruments (two per rover) that have obtained high resolution multispectral and stereoscopic images for studies of the geology, mineralogy, and surface and atmospheric physical properties at both rover landing sites. The Pancams are also providing significant mission support measurements for the rovers, including Sun-finding for rover navigation, hazard identification and digital terrain modeling to help guide long-term rover traverse decisions, high resolution imaging to help guide the selection of in situ sampling targets, and acquisition of education and public outreach imaging products.

  7. Spirit Leaves Telling Tracks

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Scientists have found clues about the nature of martian soil through analyzing wheel marks from the Mars Exploration Rover Spirit in this image. The image was taken by Spirit's rear hazard-identification camera just after the rover drove approximately 1 meter (3 feet) northwest off the Columbia Memorial Station (lander platform) early Thursday morning. That the wheel tracks are shallow indicates the soil has plenty of strength to support the moving rover. The well-defined track characteristics suggest the presence of very fine particles in the martian soil (along with larger particles). Scientists also think the soil may have some cohesive properties.

  8. 4. CONSTRUCTION PROGRESS VIEW OF EQUIPMENT IN FRONT PART OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. CONSTRUCTION PROGRESS VIEW OF EQUIPMENT IN FRONT PART OF CONTROL BUNKER (TRANSFORMER, HYDRAULIC TANK, PUMP, MOTOR). SHOWS UNLINED CORRUGATED METAL WALL. CAMERA FACING EAST. INEL PHOTO NUMBER 65-5433, TAKEN OCTOBER 20, 1965. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  9. Commander Brand shaves in front of forward middeck lockers

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Commander Brand, wearing shorts, shaves in front of forward middeck lockers using personal hygiene mirror assembly (assy). Open modular locker single tray assy, Field Sequential (FS) crew cabin camera, communications kit assy mini headset (HDST) and HDST interface unit (HIU), personal hygiene kit, and meal tray assemblies appear in view.

  10. Krikalev in front of flight deck windows

    NASA Image and Video Library

    2001-03-12

    STS102-E-5139 (12 March 2001) --- Cosmonaut Sergei K. Krikalev, now a member of the STS-102 crew, prepares to use a camera on Discovery's flight deck. Krikalev, representing Rosaviakosmos, had been onboard the International Space Station (ISS) since early November 2000. The photograph was taken with a digital still camera.

  11. Communities, Cameras, and Conservation

    ERIC Educational Resources Information Center

    Patterson, Barbara

    2012-01-01

    Communities, Cameras, and Conservation (CCC) is the most exciting and valuable program the author has seen in her 30 years of teaching field science courses. In this citizen science project, students and community volunteers collect data on mountain lions ("Puma concolor") at four natural areas and public parks along the Front Range of Colorado.…

  12. Hazardous materials emergency response mobile robot

    NASA Technical Reports Server (NTRS)

    Stone, Henry W. (Inventor); Lloyd, James W. (Inventor); Alahuzos, George A. (Inventor)

    1995-01-01

    A simple or unsophisticated robot incapable of effecting straight-line motion at the end of its arm is presented. This robot inserts a key held in its end effector or hand into a door lock with nearly straight-line motion by gently thrusting its back heels downwardly so that it pivots forwardly on its front toes while holding its arm stationary. The relatively slight arc traveled by the robot's hand is compensated by a compliant tool with which the robot hand grips the door key. A visible beam is projected through the axis of the hand or gripper on the robot arm end at an angle to the general direction in which the robot thrusts the gripper forward. As the robot hand approaches a target surface, a video camera on the robot wrist watches the beam spot on the target surface fall from a height proportional to the distance between the robot hand and the target surface until the beam spot is nearly aligned with the top of the robot hand. Holes in the front face of the hand are connected through internal passages inside the arm to an on-board chemical sensor. Full rotation of the hand or gripper about the robot arm's wrist is made possible by slip rings in the wrist which permit passage of the gases taken in through the nose holes in the front of the hand through the wrist regardless of the rotational orientation of the wrist.

  13. Micro-satellite constellations for monitoring cryospheric processes and related natural hazards

    NASA Astrophysics Data System (ADS)

    Kaeaeb, A.; Altena, B.; Mascaro, J.

    2016-12-01

    Currently, several micro-satellite constellations for Earth observation are planned or under construction. Here, we assess the potential of the well-advanced Planet satellite constellation for investigating cryospheric processes. In its final stage, the Planet constellation will consist of 150 free-flying micro-satellites in near-polar and ISS orbits. The instruments carry RGB+NIR frame cameras that image the Earth's surface in the nadir direction with resolutions of 3-5 m, covering 20 x 13 km per image. In its final set-up, the constellation will be able to image (almost) the entire land surface at least once per day, under the limitation of cloud cover. Here, we explore the new possibilities for insight into cryospheric processes that this very high repeat cycle combined with high image resolution offers. Based on repeat Planet imagery we derive repeat glacier velocity fields for example glaciers in the northern and southern hemispheres. We find it especially useful to monitor ice velocities near calving fronts and simultaneously detect changes of the front, pointing to calving events. We also explore deformation fields over creeping mountain permafrost, so-called rock glaciers. As a second, very promising cryospheric application, we suggest monitoring of glacier- and permafrost-related natural hazards. In cases such as temporary lakes, lake outbursts, landslides, and rock avalanches, visual information over remote areas at high frequency is crucial for hazard assessment, early warning, and disaster management. Based on several examples, we demonstrate that massive micro-satellite constellations such as Planet's are able to provide exactly this type of information. As a third promising example, we show how such high-repeat optical satellite data are useful for monitoring river ice and related jams and flooding. At certain latitudes, the repeat frequency of the data is even high enough to track river ice floes and thus water velocities.
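
    Glacier velocity fields like those mentioned above are typically derived by offset tracking between repeat images: a small reference patch from the first acquisition is located within a search window of the second by maximizing normalized cross-correlation, and the pixel offset divided by the time separation (times the ground pixel size) gives a velocity. The brute-force Python sketch below illustrates the idea only; it is not the authors' processing chain, and the patch sizes and names are assumptions.

      import numpy as np

      def track_offset(ref_patch, search_window):
          """Pixel offset of ref_patch inside a larger search_window that maximizes
          normalized cross-correlation (a simple offset-tracking sketch)."""
          ref_patch = np.asarray(ref_patch, dtype=float)
          search_window = np.asarray(search_window, dtype=float)
          ph, pw = ref_patch.shape
          sh, sw = search_window.shape
          ref = ref_patch - ref_patch.mean()
          best_score, best_dy, best_dx = -np.inf, 0, 0
          for dy in range(sh - ph + 1):
              for dx in range(sw - pw + 1):
                  cand = search_window[dy:dy + ph, dx:dx + pw]
                  c = cand - cand.mean()
                  denom = np.linalg.norm(ref) * np.linalg.norm(c)
                  score = float(np.sum(ref * c) / denom) if denom else -np.inf
                  if score > best_score:
                      best_score, best_dy, best_dx = score, dy, dx
          return best_dy, best_dx, best_score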

  14. Applying face identification to detecting hijacking of airplane

    NASA Astrophysics Data System (ADS)

    Luo, Xuanwen; Cheng, Qiang

    2004-09-01

    The hijacking of airplanes that were crashed into the World Trade Center was a disaster for civilization, and preventing hijackings is critical to homeland security. Reporting a hijacking in time, limiting the terrorists' ability to operate the plane if one happens, and landing the plane at the nearest airport could be efficient ways to avoid such a catastrophe. Image processing techniques for human face recognition or identification could be used for this task. Before the plane takes off, the face images of the pilots are input into a face identification system installed in the airplane. A camera in front of the pilot's seat keeps capturing the pilot's face during the flight and comparing it with the pre-input pilot face images. If a different face is detected, a warning signal is sent to the ground automatically. At the same time, the automatic cruise system is started, or the plane is controlled from the ground. The terrorists will have no control over the plane, which will be landed at the nearest or most appropriate airport under the control of the ground or the cruise system. This technique could also be used in the automobile industry as an image key to prevent car theft.
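
    A minimal sketch of the in-flight comparison described above is given below: each frame's face is reduced to a feature vector by some face-recognition model (left hypothetical here) and compared against the enrolled pilots' vectors with a similarity threshold. The threshold and function names are illustrative assumptions, not the paper's method.

      import numpy as np

      def cosine_similarity(a, b):
          """Cosine similarity between two face feature vectors."""
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      def frame_matches_enrolled(frame_embedding, enrolled_embeddings, threshold=0.6):
          """True if the face in the current frame matches any enrolled pilot.

          frame_embedding and enrolled_embeddings are feature vectors produced by
          some face-recognition model (hypothetical); threshold is illustrative.
          A False result would trigger the ground-alert / autopilot handover logic.
          """
          best = max(cosine_similarity(frame_embedding, e) for e in enrolled_embeddings)
          return best >= threshold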

  15. Optical pin apparatus for measuring the arrival time and velocity of shock waves and particles

    DOEpatents

    Benjamin, R.F.

    1983-10-18

    An apparatus for the detection of the arrival and for the determination of the velocity of disturbances such as shock-wave fronts and/or projectiles. Optical pins using fluid-filled microballoons as the light source and an optical fiber as a link to a photodetector have been used to investigate shock-waves and projectiles. A microballoon filled with a noble gas is affixed to one end of a fiber-optic cable, and the other end of the cable is attached to a high-speed streak camera. As the shock-front or projectile compresses the microballoon, the gas inside is heated and compressed producing a bright flash of light. The flash of light is transmitted via the optic cable to the streak camera where it is recorded. One image-converter streak camera is capable of recording information from more than 100 microballoon-cable combinations simultaneously.
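
    Given the flash arrival times recorded on the streak camera and the known standoff positions of the pins, the average velocity of the front follows from a straight-line fit of position against time, as in the minimal Python sketch below. The pin spacing and times are illustrative numbers, not data from the patent.

      import numpy as np

      def front_velocity(positions_mm, arrival_times_us):
          """Average shock/projectile velocity from pin positions and flash times.

          Fits position = v * t + x0 and returns v in mm/us (numerically equal to km/s).
          """
          v, _x0 = np.polyfit(arrival_times_us, positions_mm, deg=1)
          return v

      # Illustrative numbers only: three pins spaced 5 mm apart.
      print(front_velocity([0.0, 5.0, 10.0], [0.00, 1.02, 2.01]))  # about 5 mm/us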

  16. Cryogenic solid Schmidt camera as a base for future wide-field IR systems

    NASA Astrophysics Data System (ADS)

    Yudin, Alexey N.

    2011-11-01

    Work is focused on study of capability of solid Schmidt camera to serve as a wide-field infrared lens for aircraft system with whole sphere coverage, working in 8-14 um spectral range, coupled with spherical focal array of megapixel class. Designs of 16 mm f/0.2 lens with 60 and 90 degrees sensor diagonal are presented, their image quality is compared with conventional solid design. Achromatic design with significantly improved performance, containing enclosed soft correcting lens behind protective front lens is proposed. One of the main goals of the work is to estimate benefits from curved detector arrays in 8-14 um spectral range wide-field systems. Coupling of photodetector with solid Schmidt camera by means of frustrated total internal reflection is considered, with corresponding tolerance analysis. The whole lens, except front element, is considered to be cryogenic, with solid Schmidt unit to be flown by hydrogen for improvement of bulk transmission.

  17. Optical pin apparatus for measuring the arrival time and velocity of shock waves and particles

    DOEpatents

    Benjamin, Robert F.

    1987-01-01

    An apparatus for the detection of the arrival and for the determination of the velocity of disturbances such as shock-wave fronts and/or projectiles. Optical pins using fluid-filled microballoons as the light source and an optical fiber as a link to a photodetector have been used to investigate shock-waves and projectiles. A microballoon filled with a noble gas is affixed to one end of a fiber-optic cable, and the other end of the cable is attached to a high-speed streak camera. As the shock-front or projectile compresses the microballoon, the gas inside is heated and compressed producing a bright flash of light. The flash of light is transmitted via the optic cable to the streak camera where it is recorded. One image-converter streak camera is capable of recording information from more than 100 microballoon-cable combinations simultaneously.

  18. Optical pin apparatus for measuring the arrival time and velocity of shock waves and particles

    DOEpatents

    Benjamin, R.F.

    1987-03-10

    An apparatus is disclosed for the detection of the arrival and for the determination of the velocity of disturbances such as shock-wave fronts and/or projectiles. Optical pins using fluid-filled microballoons as the light source and an optical fiber as a link to a photodetector have been used to investigate shock-waves and projectiles. A microballoon filled with a noble gas is affixed to one end of a fiber-optic cable, and the other end of the cable is attached to a high-speed streak camera. As the shock-front or projectile compresses the microballoon, the gas inside is heated and compressed producing a bright flash of light. The flash of light is transmitted via the optic cable to the streak camera where it is recorded. One image-converter streak camera is capable of recording information from more than 100 microballoon-cable combinations simultaneously. 3 figs.

  19. 1. NORTHWEST FRONT AND SOUTHWEST SIDE, SHOWING LOCATION OF BUILDING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. NORTHWEST FRONT AND SOUTHWEST SIDE, SHOWING LOCATION OF BUILDING 0520 WEST OF FIRING CONTROL BLOCK HOUSE (BLDG. 0545), BETWEEN SLED TRACK AND CAMERA ACCESS ROAD. - Edwards Air Force Base, South Base Sled Track, Observation Block House, Station "O" area, east end of Sled Track, Lancaster, Los Angeles County, CA

  20. Single lens 3D-camera with extended depth-of-field

    NASA Astrophysics Data System (ADS)

    Perwaß, Christian; Wietzke, Lennart

    2012-03-01

    Placing a micro lens array in front of an image sensor transforms a normal camera into a single lens 3D camera, which also allows the user to change the focus and the point of view after a picture has been taken. While the concept of such plenoptic cameras has been known since 1908, only recently have the increased computing power of low-cost hardware and the advances in micro lens array production made the application of plenoptic cameras feasible. This text presents a detailed analysis of plenoptic cameras as well as introducing a new type of plenoptic camera with an extended depth of field and a maximal effective resolution of up to a quarter of the sensor resolution.

  1. Savoring Neopolitan

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [Figure 1 removed for brevity; see original site]

    This image from the Mars Exploration Rover Opportunity's front hazard-avoidance camera shows the rover at its Sol 53 (March 17, 2004) location within the 'Eagle Crater' landing site. Dubbed 'Neopolitan,' this location has three different soil patches: a very light unit, a dark unit, and an airbag bounce mark. Scientists are imaging each of these units as part of a crater soil survey. They hope to better understand the origin of the soils they see in the crater and the relationship of the soils to the rocks in Opportunity ledge. This image was taken on sol 52 of Opportunity's journey (March 16, 2004).

    The Ice Cream Trio: In Figure 1 above, the light soil unit, seen on the left, is a microscopic imager target dubbed 'Vanilla.' The dark soil unit on the right is a target dubbed 'Cookies 'n' Cream.'

  2. Image quality assessment for selfies with and without super resolution

    NASA Astrophysics Data System (ADS)

    Kubota, Aya; Gohshi, Seiichi

    2018-04-01

    With the advent of cellphone cameras, in particular on smartphones, many people now take photos of themselves alone and with others in the frame; such photos are popularly known as "selfies." Most smartphones are equipped with two cameras: the camera located on the back of the smartphone is referred to as the "out-camera," whereas the one located on the front is called the "in-camera." In-cameras are mainly used for selfies. Some smartphones feature high-resolution cameras; however, the original image quality cannot be obtained because smartphone cameras often have low-performance lenses. Super resolution (SR) is one of the recent technological advancements that has increased image resolution, and we developed a new SR technology that can be processed on smartphones. Smartphones with the new SR technology are currently available in the market and have already registered sales. However, the effective use of the new SR technology has not yet been verified, and comparing the image quality with and without SR on a smartphone display is necessary to confirm the usefulness of this new technology. Methods based on objective and subjective assessments are required to quantitatively measure image quality. It is known that typical objective assessment values, such as the Peak Signal-to-Noise Ratio (PSNR), do not always agree with how we feel when we assess images or video. When digital broadcasting started, the standard was determined using subjective assessment. Although subjective assessment usually comes at high cost because of personnel expenses for observers, the results are highly reproducible when the assessments are conducted under the right conditions and with statistical analysis. In this study, the subjective assessment results for selfie images are reported.
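
    For reference, the objective metric mentioned above (PSNR) is computed as in the short Python sketch below for 8-bit images; its simplicity is one reason it can disagree with subjective scores.

      import numpy as np

      def psnr(reference, test, max_value=255.0):
          """Peak signal-to-noise ratio between two same-sized 8-bit images, in dB."""
          reference = np.asarray(reference, dtype=float)
          test = np.asarray(test, dtype=float)
          mse = np.mean((reference - test) ** 2)
          if mse == 0:
              return float("inf")          # identical images
          return 10.0 * np.log10(max_value ** 2 / mse)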

  3. Usage of cornea and sclera back reflected images captured in security cameras for forensic and card games applications

    NASA Astrophysics Data System (ADS)

    Zalevsky, Zeev; Ilovitsh, Asaf; Beiderman, Yevgeny

    2013-10-01

    We present an approach that allows seeing objects that are hidden and not positioned in the direct line of sight of security inspection cameras. The approach is based on inspecting the back reflections obtained from the cornea and the sclera of the eyes of people attending the inspected scene who are positioned in front of the hidden objects we aim to image, after performing proper calibration with a point light source (e.g., an LED). The scene can be a forensic scene or, for instance, a casino in which the application is to see the cards of poker players sitting in front of you.

  4. Validation of Viewing Reports: Exploration of a Photographic Method.

    ERIC Educational Resources Information Center

    Fletcher, James E.; Chen, Charles Chao-Ping

    A time lapse camera loaded with Super 8 film was employed to photographically record the area in front of a conventional television receiver in selected homes. The camera took one picture each minute for three days, including in the same frame the face of the television receiver. Family members kept a conventional viewing diary of their viewing…

  5. 41 CFR 102-34.100 - Where is motor vehicle identification displayed?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Government motor vehicles may display motor vehicle identification on a decal in the rear window, or centered on both front doors if the vehicle is without a rear window, or where identification on the rear window would not be easily seen. (b) For trailers, on both sides of the front quarter of the trailer in a...

  6. Spacecraft hazard avoidance utilizing structured light

    NASA Technical Reports Server (NTRS)

    Liebe, Carl Christian; Padgett, Curtis; Chapsky, Jacob; Wilson, Daniel; Brown, Kenneth; Jerebets, Sergei; Goldberg, Hannah; Schroeder, Jeffrey

    2006-01-01

    At JPL, a <5 kg free-flying micro-inspector spacecraft is being designed for host-vehicle inspection. The spacecraft includes a hazard avoidance sensor to navigate relative to the vehicle being inspected. Structured light was selected for hazard avoidance because of its low mass and cost. Structured light is a method of remotely sensing the 3-dimensional structure of nearby objects utilizing a laser, a grating, and a single regular APS camera. The laser beam is split into 400 different beams by a grating to form a regularly spaced grid of laser beams that are projected into the field of view of an APS camera. The laser source and the APS camera are separated, forming the base of a triangle, and the distances to all beam intersections with the host are calculated by triangulation.
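
    The triangulation step described above can be sketched with simplified planar geometry: the laser and camera sit at the ends of a known baseline, each projected beam leaves at a known angle, and the camera measures the angle to the observed spot. The Python formula and numbers below are an illustrative sketch, not the flight sensor's calibration model.

      import math

      def range_from_triangulation(baseline_m, laser_angle_rad, camera_angle_rad):
          """Perpendicular distance of a laser spot from the laser-camera baseline.

          laser_angle_rad and camera_angle_rad are the angles between the baseline
          and, respectively, the projected beam and the camera's line of sight to
          the observed spot (simplified planar geometry).
          """
          return (baseline_m
                  * math.tan(laser_angle_rad) * math.tan(camera_angle_rad)
                  / (math.tan(laser_angle_rad) + math.tan(camera_angle_rad)))

      # Example: 0.2 m baseline, beam at 80 deg, spot seen at 75 deg -> about 0.45 m.
      print(range_from_triangulation(0.2, math.radians(80), math.radians(75)))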

  7. 40 CFR 63.487 - Batch front-end process vents-reference control technology.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-reference control technology. 63.487 Section 63.487 Protection of Environment ENVIRONMENTAL PROTECTION... SOURCE CATEGORIES National Emission Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.487 Batch front-end process vents—reference control technology. (a) Batch front-end process...

  8. 40 CFR 63.491 - Batch front-end process vents-recordkeeping requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... CATEGORIES National Emission Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.491 Batch front-end process vents—recordkeeping requirements. (a) Group determination records for...) through (a)(6) of this section for each batch front-end process vent subject to the group determination...

  9. 40 CFR 63.487 - Batch front-end process vents-reference control technology.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... control technology. 63.487 Section 63.487 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... CATEGORIES National Emission Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.487 Batch front-end process vents—reference control technology. (a) Batch front-end process vents...

  10. 40 CFR 63.491 - Batch front-end process vents-recordkeeping requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... CATEGORIES National Emission Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.491 Batch front-end process vents—recordkeeping requirements. (a) Group determination records for...) through (a)(6) of this section for each batch front-end process vent subject to the group determination...

  11. 40 CFR 63.487 - Batch front-end process vents-reference control technology.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-reference control technology. 63.487 Section 63.487 Protection of Environment ENVIRONMENTAL PROTECTION... SOURCE CATEGORIES National Emission Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.487 Batch front-end process vents—reference control technology. (a) Batch front-end process...

  12. 40 CFR 63.487 - Batch front-end process vents-reference control technology.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-reference control technology. 63.487 Section 63.487 Protection of Environment ENVIRONMENTAL PROTECTION... SOURCE CATEGORIES National Emission Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.487 Batch front-end process vents—reference control technology. (a) Batch front-end process...

  13. 40 CFR 63.491 - Batch front-end process vents-recordkeeping requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... CATEGORIES National Emission Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.491 Batch front-end process vents—recordkeeping requirements. (a) Group determination records for...) through (a)(6) of this section for each batch front-end process vent subject to the group determination...

  14. 40 CFR 63.491 - Batch front-end process vents-recordkeeping requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... CATEGORIES National Emission Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.491 Batch front-end process vents—recordkeeping requirements. (a) Group determination records for...) through (a)(6) of this section for each batch front-end process vent subject to the group determination...

  15. Forensics for flatbed scanners

    NASA Astrophysics Data System (ADS)

    Gloe, Thomas; Franz, Elke; Winkler, Antje

    2007-02-01

    Within this article, we investigate possibilities for identifying the origin of images acquired with flatbed scanners. A current method for the identification of digital cameras takes advantage of image sensor noise, strictly speaking, the spatial noise. Since flatbed scanners and digital cameras use similar technologies, the utilization of image sensor noise for identifying the origin of scanned images seems to be possible. To characterize flatbed scanner noise, we considered array reference patterns and sensor-line reference patterns. However, there are particularities of flatbed scanners which we expect to influence the identification. This was confirmed by extensive tests: identification was possible to a certain degree, but less reliable than digital camera identification. In additional tests, we simulated the influence of flat-fielding and downscaling, as examples of such particularities of flatbed scanners, on digital camera identification. One can conclude from the results achieved so far that identifying flatbed scanners is possible. However, since the analyzed methods are not able to determine the image origin in all cases, further investigations are necessary.
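
    For orientation, the sensor-noise approach amounts to correlating an image's noise residual against a per-device reference pattern. The sketch below is a simplified, assumed illustration of that idea; the Gaussian denoiser and normalized correlation are generic choices, not the authors' exact procedure.

      # Simplified sensor-noise fingerprinting sketch (illustrative assumptions).
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def noise_residual(img):
          # Residual = image minus a smoothed (denoised) version of itself.
          img = np.asarray(img, dtype=float)
          return img - gaussian_filter(img, sigma=2.0)

      def reference_pattern(images):
          # Average the residuals of several images from the same device.
          return np.mean([noise_residual(i) for i in images], axis=0)

      def correlation(residual, pattern):
          # Normalized cross-correlation used as the identification score.
          a = residual - residual.mean()
          b = pattern - pattern.mean()
          return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))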

  16. SU-D-201-05: On the Automatic Recognition of Patient Safety Hazards in a Radiotherapy Setup Using a Novel 3D Camera System and a Deep Learning Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santhanam, A; Min, Y; Beron, P

    Purpose: Patient safety hazards such as a wrong patient/site getting treated can lead to catastrophic results. The purpose of this project is to automatically detect potential patient safety hazards during the radiotherapy setup and alert the therapist before the treatment is initiated. Methods: We employed a set of co-located and co-registered 3D cameras placed inside the treatment room. Each camera provided a point cloud of fraxels (fragment pixels with 3D depth information). Each camera was calibrated using a custom-built calibration target to provide 3D information with less than 2 mm error in the 500 mm neighborhood around the isocenter. To identify potential patient safety hazards, the treatment room components and the patient’s body needed to be identified and tracked in real-time. For feature recognition purposes, we used graph-cut based feature recognition with principal component analysis (PCA) based feature-to-object correlation to segment the objects in real-time. Changes in an object’s position were tracked using the CamShift algorithm. The 3D object information was then stored for each classified object (e.g., gantry, couch). A deep learning framework was then used to analyze all the classified objects in both 2D and 3D and to fine-tune a convolutional network for object recognition. The number of network layers was optimized to identify the tracked objects with >95% accuracy. Results: Our systematic analyses showed that the system was able to effectively recognize wrong patient setups and wrong patient accessories. The combined usage of 2D camera information (color + depth) enabled a topology-preserving approach to verify patient safety hazards in an automatic manner, even in scenarios where the depth information is partially available. Conclusion: By utilizing the 3D cameras inside the treatment room and deep learning based image classification, potential patient safety hazards can be effectively avoided.

  17. Polarized fluorescence for skin cancer diagnostic with a multi-aperture camera

    NASA Astrophysics Data System (ADS)

    Kandimalla, Haripriya; Ramella-Roman, Jessica C.

    2008-02-01

    Polarized fluorescence has shown some promising results in the assessment of skin cancer margins. Researchers have used tetracycline and cross-polarization imaging for nonmelanoma skin cancer demarcation as well as investigating endogenous skin polarized fluorescence. In this paper we present a new instrument for polarized fluorescence imaging, able to calculate the full fluorescence Stokes vector in one snapshot. The core of our system is a multi-aperture camera constructed with a two by two lenslet array. Three of the lenses have polarizing elements in front of them, oriented at 0°, +45°, and 90° with respect to the light source polarization. A flash lamp combined with a polarizer parallel to the source-camera-sample plane and a UV filter is used as an excitation source. A blue filter in front of the camera system is used to collect only the fluorescent emission of interest and filter out the incident light. In-vitro tests of endogenous and exogenous polarized fluorescence on collagen-rich material such as bovine tendon were performed, and the Stokes vector of the polarized fluorescence was calculated. The system has the advantage of eliminating motion artifacts by collecting the different polarization states and the Stokes vector in a single snapshot.
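
    As a reminder of how the linear Stokes parameters follow from the three polarizer-filtered intensities, the sketch below uses the standard 0°/45°/90° relations; it is a generic illustration, not the instrument's calibration pipeline.

      # Linear Stokes parameters from intensities behind 0, 45, and 90 degree polarizers.
      import numpy as np

      def linear_stokes(i0, i45, i90):
          s0 = i0 + i90                 # total intensity
          s1 = i0 - i90                 # 0/90 degree preference
          s2 = 2.0 * i45 - s0           # +45/-45 degree preference
          dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree of linear polarization
          return s0, s1, s2, dolp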

  18. Attempt of Serendipitous Science During the Mojave Volatile Prospector Field Expedition

    NASA Technical Reports Server (NTRS)

    Roush, T. L.; Colaprete, A.; Heldmann, J.; Lim, D. S. S.; Cook, A.; Elphic, R.; Deans, M.; Fluckiger, L.; Fritzler, E.; Hunt, David

    2015-01-01

    On 23 October a partial solar eclipse occurred across parts of the southwest United States between approximately 21:09 and 23:40 (UT), with maximum obscuration, 36%, occurring at 22:29 (UT). During 21-26 October 2014 the Mojave Volatile Prospector (MVP) field expedition deployed and operated the NASA Ames Krex2 rover in the Mojave desert west of Baker, California (Fig. 1, bottom). The MVP field expedition's primary goal was to characterize the surface and sub-surface soil moisture properties within desert alluvial fans, with a secondary goal of providing mission operations simulations of the Resource Prospector (RP) mission to a lunar pole. The partial solar eclipse provided an opportunity during MVP operations to address serendipitous science. Science instruments on Krex2 included a neutron spectrometer, a near-infrared spectrometer with associated imaging camera, and an independent camera coupled with software to characterize the surface textures of the areas encountered. All of these devices are focused upon the surface and as a result are downward looking. In addition to these science instruments, two hazard cameras are mounted on Krex2. The chief device used to monitor the partial solar eclipse was the engineering development unit of the Near-Infrared Volatile Spectrometer System (NIRVSS) near-infrared spectrometer. This device uses two separate fiber-optic fed Hadamard transform spectrometers. The short-wave and long-wave spectrometers measure the 1600-2400 and 2300-3400 nm wavelength regions with resolutions of 10 and 13 nm, respectively. Data are obtained approximately every 8 seconds. The NIRVSS stares in the direction opposite the front of Krex2.

  19. Optical observation of shock waves and cavitation bubbles in high intensity laser-induced shock processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marti-Lopez, L.; Ocana, R.; Porro, J. A.

    2009-07-01

    We report an experimental study of the temporal and spatial dynamics of shock waves, cavitation bubbles, and sound waves generated in water during laser shock processing by single Nd:YAG laser pulses of nanosecond duration. A fast ICCD camera (2 ns gate time) was employed to record false schlieren photographs, schlieren photographs, and Mach-Zehnder interferograms of the zone surrounding the laser spot site on the target, an aluminum alloy sample. We recorded hemispherical shock fronts, cylindrical shock fronts, plane shock fronts, cavitation bubbles, and phase disturbance tracks.

  20. A Framework for People Re-Identification in Multi-Camera Surveillance Systems

    ERIC Educational Resources Information Center

    Ammar, Sirine; Zaghden, Nizar; Neji, Mahmoud

    2017-01-01

    People re-identification has been a very active research topic recently in computer vision. It is an important application in surveillance system with disjoint cameras. This paper is focused on the implementation of a human re-identification system. First the face of detected people is divided into three parts and some soft-biometric traits are…

  1. Spectral imaging of chemical compounds using multivariate optically enhanced filters integrated with InGaAs VGA cameras

    NASA Astrophysics Data System (ADS)

    Priore, Ryan J.; Jacksen, Niels

    2016-05-01

    Infrared hyperspectral imagers (HSI) have been fielded for the detection of hazardous chemical and biological compounds, tag detection (friend-versus-foe detection), and other defense-critical sensing missions over the last two decades. Low size, weight, power, and cost (SWaP-C) methods for spectroscopic identification of chemical compounds have been a long-term goal for handheld applications. We describe a new HSI concept for low-cost, high-performance InGaAs SWIR camera chemical identification for military, security, industrial, and commercial end-user applications. Multivariate Optical Elements (MOEs) are thin-film devices that encode a broadband spectroscopic pattern, allowing a simple broadband detector to generate a highly sensitive and specific detection for a target analyte. MOEs can be matched 1:1 to a discrete analyte or class prediction. Additionally, MOE filter sets are capable of sensing an orthogonal projection of the original sparse spectroscopic space, enabling a small set of MOEs to discriminate a multitude of target analytes. This paper identifies algorithms and broadband optical filter designs that have been demonstrated to identify chemical compounds using high-performance InGaAs VGA detectors. It shows how some of the initial models have been reduced to simple spectral designs and tested to produce positive identification of such chemicals. We are also developing pixelated MOE compressed detection sensors for the detection of a multitude of chemical targets in challenging backgrounds/environments for both commercial and defense/security applications. This MOE-based, real-time HSI sensor will exhibit superior sensitivity and specificity compared with currently fielded HSI systems.
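
    Conceptually, an MOE acts as an optically encoded regression vector: the broadband detector reading approximates the inner product of the scene spectrum with the filter's transmission curve. The toy sketch below illustrates that scoring idea with made-up spectra; it is not one of the paper's filter designs.

      # Toy multivariate-optical-element scoring sketch (illustrative values only).
      import numpy as np

      def moe_score(spectrum, moe_transmission):
          """Simulated single-pixel detector output behind one MOE filter."""
          return float(np.sum(spectrum * moe_transmission))

      # Hypothetical example: a target band near channel 40 scored against an MOE that weights it.
      wavelengths = np.arange(100)
      spectrum = np.exp(-0.5 * ((wavelengths - 40) / 3.0) ** 2)   # toy analyte band
      moe = np.where(np.abs(wavelengths - 40) < 5, 1.0, 0.1)      # toy transmission curve
      print(moe_score(spectrum, moe))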

  2. 16. ARAII Administration building ARA613. South (front) and east sides. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    16. ARA-II Administration building ARA-613. South (front) and east sides. Camera facing northwest. Sign at left corner of building says, "Fuels and materials division, materials joining research and development laboratory." Part of south wall already has been demolished. Sign on roof railing says, "Danger--Asbestos." INEEL photo no. 2-3. - Idaho National Engineering Laboratory, Army Reactors Experimental Area, Scoville, Butte County, ID

  3. Slow Progress in Dune (Left Rear Wheel)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The left rear wheel of NASA's Mars Exploration Rover Opportunity makes slow but steady progress through soft dune material in this movie clip of frames taken by the rover's rear hazard identification camera over a period of several days. The sequence starts on Opportunity's 460th martian day, or sol (May 10, 2005) and ends 11 days later. In eight drives during that period, Opportunity advanced a total of 26 centimeters (10 inches) while spinning its wheels enough to have driven 46 meters (151 feet) if there were no slippage. The motion appears to speed up near the end of the clip, but that is an artifact of individual frames being taken less frequently.

  4. Flame front propagation in a channel with porous walls

    NASA Astrophysics Data System (ADS)

    Golovastov, S. V.; Bivol, G. Yu

    2016-11-01

    Propagation of the detonation front in hydrogen-air mixture was investigated in rectangular cross-section channels with sound-absorbing boundaries. The front of luminescence was detected in a channel with acoustically absorbing walls as opposed to a channel with solid walls. Flame dynamics was recorded using a high-speed camera. The flame was observed to have a V-shaped profile in the acoustically absorbing section. The possible reason for the formation of the V-shaped flame front is friction under the surface due to open pores. In these shear flows, the kinetic energy of the flow on the surface can be easily converted into heat. A relatively small disturbance may eventually lead to significant local stretching of the flame front surface. Trajectories of the flame front along the axis and the boundary are presented for solid and porous surfaces.

  5. 40 CFR 63.486 - Batch front-end process vent provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... National Emission Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.486... paragraph (b) of this section, owners and operators of new and existing affected sources with batch front...

  6. Opportunity Rover Views Ground Texture 'Perseverance Valley'

    NASA Image and Video Library

    2018-02-15

    This late-afternoon view from the front Hazard Avoidance Camera on NASA's Mars Exploration Rover Opportunity shows a pattern of rock stripes on the ground, a surprise to scientists on the rover team. Approaching the 5,000th Martian day, or sol, of what was planned as a 90-sol mission, Opportunity is still providing new discoveries. This image was taken inside "Perseverance Valley," on the inboard slope of the western rim of Endeavour Crater, on Sol 4958 (Jan. 4, 2018). Both this view and one taken the same sol by the rover's Navigation Camera look downhill toward the northeast from about one-third of the way down the valley, which extends about the length of two football fields from the crest of the rim toward the crater floor. The lighting, with the Sun at a low angle, emphasizes the ground texture, shaped into stripes defined by rock fragments. The stripes are aligned with the downhill direction. The rock to the upper right of the rover's robotic arm is about 2 inches (5 centimeters) wide and about 3 feet (1 meter) from the centerline of the rover's two front wheels. This striped pattern resembles features seen on Earth, including on Hawaii's Mauna Kea, that are formed by cycles of freezing and thawing of ground moistened by melting ice or snow. There, the fine-grained fraction of the soil expands as it freezes, lifting the rock fragments up and to the sides. If such a process formed this pattern in Perseverance Valley, those conditions might have been present locally during a period within the past few million years when Mars' spin axis was at a greater tilt than it is now, and some of the water ice now at the poles was redistributed to lower latitudes. Other hypotheses for how these features formed are also under consideration, including high-velocity slope winds. https://photojournal.jpl.nasa.gov/catalog/PIA22218

  7. Flash LIDAR Emulator for HIL Simulation

    NASA Technical Reports Server (NTRS)

    Brewster, Paul F.

    2010-01-01

    NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project is building a system for detecting hazards and automatically landing controlled vehicles safely anywhere on the Moon. The Flash Light Detection And Ranging (LIDAR) sensor is used to create on-the-fly a 3D map of the unknown terrain for hazard detection. As part of the ALHAT project, a hardware-in-the-loop (HIL) simulation testbed was developed to test the data processing, guidance, and navigation algorithms in real-time to prove their feasibility for flight. Replacing the Flash LIDAR camera with an emulator in the testbed provided a cheaper, safer, more feasible way to test the algorithms in a controlled environment. This emulator must have the same hardware interfaces as the LIDAR camera, have the same performance characteristics, and produce images similar in quality to the camera. This presentation describes the issues involved and the techniques used to create a real-time flash LIDAR emulator to support HIL simulation.

  8. Oblique Wing Research Aircraft on ramp

    NASA Image and Video Library

    1976-08-02

    This 1976 photograph of the Oblique Wing Research Aircraft was taken in front of the NASA Flight Research Center hangar, located at Edwards Air Force Base, California. In the photograph the noseboom, pitot-static probe, and angle-of-attack and sideslip flow vanes (covered up) are attached to the front of the vehicle. The clear nose dome for the television camera and the shrouded propeller for the 90-horsepower engine are clearly seen.

  9. Retinal fundus imaging with a plenoptic sensor

    NASA Astrophysics Data System (ADS)

    Thurin, Brice; Bloch, Edward; Nousias, Sotiris; Ourselin, Sebastien; Keane, Pearse; Bergeles, Christos

    2018-02-01

    Vitreoretinal surgery is moving towards 3D visualization of the surgical field. This requires an acquisition system capable of recording such 3D information. We propose a proof-of-concept imaging system based on a light-field camera, where an array of micro-lenses is placed in front of a conventional sensor. With a single snapshot, a stack of images focused at different depths is produced on the fly, providing enhanced depth perception for the surgeon. Difficulty in depth localization of features and frequent focus changes during surgery make current vitreoretinal heads-up surgical imaging systems cumbersome to use. To improve the depth perception and eliminate the need to manually refocus on the instruments during surgery, we designed and implemented a proof-of-concept ophthalmoscope equipped with a commercial light-field camera. The sensor of our camera is composed of an array of micro-lenses that projects an array of overlapping micro-images. We show that with a single light-field snapshot we can digitally refocus between the retina and a tool located in front of the retina, or display an extended depth-of-field image where everything is in focus. The design and system performance of the plenoptic fundus camera are detailed. We conclude by showing in vivo data recorded with our device.
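
    Digital refocusing of a light field is commonly done by shift-and-add over the sub-aperture views. The sketch below shows that generic operation under assumed integer-pixel shifts; it illustrates the principle only and is not the commercial camera's processing pipeline.

      # Generic shift-and-add light-field refocusing sketch (illustrative assumptions).
      import numpy as np

      def refocus(subviews, alpha):
          """subviews: dict mapping (u, v) lenslet offsets to 2D images; alpha: depth parameter."""
          acc = None
          for (u, v), img in subviews.items():
              # Shift each sub-aperture view in proportion to its offset, then average.
              shifted = np.roll(img, (int(round(alpha * v)), int(round(alpha * u))), axis=(0, 1))
              acc = shifted if acc is None else acc + shifted
          return acc / len(subviews)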

  10. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.
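
    The point about noise limiting lossless ratios can be demonstrated with any generic lossless coder. The toy example below, which uses zlib on a synthetic ramp image (an assumption unrelated to the actual camera data path), shows the compression ratio dropping once pseudo-random noise is added.

      # Toy demonstration: sensor noise raises entropy, so lossless compression ratios drop.
      import zlib
      import numpy as np

      rng = np.random.default_rng(0)
      clean = np.tile(np.arange(256, dtype=np.uint8), (256, 1))          # smooth ramp image
      noisy = clean + rng.integers(0, 8, clean.shape).astype(np.uint8)   # add front-end noise

      for name, img in [("clean", clean), ("noisy", noisy)]:
          ratio = img.nbytes / len(zlib.compress(img.tobytes()))
          print(name, round(ratio, 2))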

  11. [Case of positive identification by digital superimposed comparison between photograph of the thoracic vertebrae front and thorax roentgenograph].

    PubMed

    Watanabe, Satoshi; Terazawa, Koichi

    2004-09-01

    We report an autopsy case in which an antemortem thorax roentgenograph and a postmortem frontal photograph of the thoracic vertebrae were available for digital superimposed comparison of the contour of the vertebral column, providing a positive identification from the characteristic osteophyte formation. In the elderly, the thorax roentgenograph is often stored at a medical institution. Osteophyte formation of the vertebral column develops individual features with aging and forms a characteristic profile of the vertebral column. Photographing a cadaver's thoracic vertebrae from the front, after removal of the thoracic and abdominal organs, should be carried out to provide material for future comparison in personal identification.

  12. An array of virtual Frisch-grid CdZnTe detectors and a front-end application-specific integrated circuit for large-area position-sensitive gamma-ray cameras.

    PubMed

    Bolotnikov, A E; Ackley, K; Camarda, G S; Cherches, C; Cui, Y; De Geronimo, G; Fried, J; Hodges, D; Hossain, A; Lee, W; Mahler, G; Maritato, M; Petryk, M; Roy, U; Salwen, C; Vernon, E; Yang, G; James, R B

    2015-07-01

    We developed a robust and low-cost array of virtual Frisch-grid CdZnTe detectors coupled to a front-end readout application-specific integrated circuit (ASIC) for spectroscopy and imaging of gamma rays. The array operates as a self-reliant detector module. It is comprised of 36 close-packed 6 × 6 × 15 mm³ detectors grouped into 3 × 3 sub-arrays of 2 × 2 detectors with common cathodes. The front-end analog ASIC accommodates up to 36 anode and 9 cathode inputs. Several detector modules can be integrated into a single- or multi-layer unit operating as a Compton or a coded-aperture camera. We present the results from testing two fully assembled modules and readout electronics. The further enhancement of the arrays' performance and reduction of their cost are possible by using position-sensitive virtual Frisch-grid detectors, which allow for accurate corrections of the response to material non-uniformities caused by crystal defects.

  13. An array of virtual Frisch-grid CdZnTe detectors and a front-end application-specific integrated circuit for large-area position-sensitive gamma-ray cameras

    DOE PAGES

    Bolotnikov, A. E.; Ackley, K.; Camarda, G. S.; ...

    2015-07-28

    We developed a robust and low-cost array of virtual Frisch-grid CdZnTe (CZT) detectors coupled to a front-end readout ASIC for spectroscopy and imaging of gamma rays. The array operates as a self-reliant detector module. It is comprised of 36 close-packed 6 × 6 × 15 mm³ detectors grouped into 3 × 3 sub-arrays of 2 × 2 detectors with common cathodes. The front-end analog ASIC accommodates up to 36 anode and 9 cathode inputs. Several detector modules can be integrated into a single- or multi-layer unit operating as a Compton or a coded-aperture camera. We present the results from testing two fully assembled modules and readout electronics. The further enhancement of the arrays' performance and reduction of their cost are made possible by using position-sensitive virtual Frisch-grid detectors, which allow for accurate corrections of the response to material non-uniformities caused by crystal defects.

  14. Visual EKF-SLAM from Heterogeneous Landmarks †

    PubMed Central

    Esparza-Jiménez, Jorge Othón; Devy, Michel; Gordillo, José L.

    2016-01-01

    Many applications require the localization of a moving object, e.g., a robot, using sensory data acquired from embedded devices. Simultaneous localization and mapping (SLAM) from vision performs both the spatial and temporal fusion of these data on a map when a camera moves in an unknown environment. Such a SLAM process executes two interleaved functions: the front-end detects and tracks features from images, while the back-end interprets features as landmark observations and estimates both the landmarks and the robot positions with respect to a selected reference frame. This paper describes a complete visual SLAM solution, combining both point and line landmarks on a single map. The proposed method has an impact on both the back-end and the front-end. The contributions comprise the use of heterogeneous landmark-based EKF-SLAM (the management of a map composed of both point and line landmarks), including a comparison between landmark parametrizations and an evaluation of how the heterogeneity improves the accuracy of camera localization; the development of a front-end active-search process for linear landmarks integrated into SLAM; and the experimentation methodology. PMID:27070602
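
    For readers unfamiliar with the back-end, the core of an EKF-SLAM correction is the standard extended Kalman filter update over a stacked pose-plus-landmarks state. The sketch below is the generic textbook form, shown only for orientation; it is not the authors' implementation, and the variable names are assumptions.

      # Generic EKF correction step (textbook form, illustrative only).
      import numpy as np

      def ekf_update(x, P, z, h, H, R):
          """One EKF update.
          x, P : state mean and covariance (pose + landmark parameters)
          z    : measurement vector; h : predicted measurement h(x)
          H    : measurement Jacobian at x; R : measurement noise covariance
          """
          y = z - h                                  # innovation
          S = H @ P @ H.T + R                        # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
          x_new = x + K @ y
          P_new = (np.eye(len(x)) - K @ H) @ P
          return x_new, P_new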

  15. 21 CFR 886.1120 - Opthalmic camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Opthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding area...

  16. Camera Image Transformation and Registration for Safe Spacecraft Landing and Hazard Avoidance

    NASA Technical Reports Server (NTRS)

    Jones, Brandon M.

    2005-01-01

    Inherent geographical hazards of Martian terrain may impede a safe landing for science exploration spacecraft. Surface visualization software for hazard detection and avoidance may accordingly be applied in vehicles such as the Mars Exploration Rover (MER) to enable an autonomous and intelligent descent upon entering the planetary atmosphere. The focus of this project is to develop an image transformation algorithm for coordinate-system matching between consecutive frames of terrain imagery taken throughout descent. The methodology involves integrating computer vision and graphics techniques, including affine transformation and projective geometry of an object, with the intrinsic parameters governing spacecraft dynamic motion and camera calibration.
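
    As a concrete illustration of coordinate-system matching between frames, a projective (homography) mapping can register pixel coordinates from one descent frame into the next. The snippet below is a generic sketch under that assumption, not the project's actual algorithm.

      # Apply a 3x3 projective transform (homography) to pixel coordinates (illustrative only).
      import numpy as np

      def warp_points(H, pts):
          """H: 3x3 homography; pts: (N, 2) pixel coordinates. Returns (N, 2) warped points."""
          homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coordinates
          mapped = homog @ H.T
          return mapped[:, :2] / mapped[:, 2:3]              # back to Cartesian coordinates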

  17. More About Hazard-Response Robot For Combustible Atmospheres

    NASA Technical Reports Server (NTRS)

    Stone, Henry W.; Ohm, Timothy R.

    1995-01-01

    Report presents additional information about design and capabilities of mobile hazard-response robot called "Hazbot III." Designed to operate safely in combustible and/or toxic atmospheres. Includes cameras and chemical sensors that help human technicians determine the location and nature of a hazard so the human emergency team can decide how to eliminate the hazard without approaching it themselves.

  18. Autonomous pedestrian localization technique using CMOS camera sensors

    NASA Astrophysics Data System (ADS)

    Chun, Chanwoo

    2014-09-01

    We present a pedestrian localization technique that does not need infrastructure. The proposed angle-only measurement method needs specially manufactured shoes. Each shoe has two CMOS cameras and two markers, such as LEDs, attached on the inward side. The line-of-sight (LOS) angles towards the two markers on the forward shoe are measured using the two cameras on the rear shoe. Our simulation results show that a pedestrian wearing this device while walking through a shopping mall can be accurately guided to the front of a destination store located 100 m away, if the floor plan of the mall is available.

  19. Concept design of an 80-dual polarization element cryogenic phased array camera for the Arecibo Radio Telescope

    NASA Astrophysics Data System (ADS)

    Cortes-Medellin, German; Parshley, Stephen; Campbell, Donald B.; Warnick, Karl F.; Jeffs, Brian D.; Ganesh, Rajagopalan

    2016-08-01

    This paper presents the current concept design for ALPACA (Advanced L-Band Phased Array Camera for Arecibo) an L-Band cryo-phased array instrument proposed for the 305 m radio telescope of Arecibo. It includes the cryogenically cooled front-end with 160 low noise amplifiers, a RF-over-fiber signal transport and a digital beam former with an instantaneous bandwidth of 312.5 MHz per channel. The camera will digitally form 40 simultaneous beams inside the available field of view of the Arecibo telescope optics, with an expected system temperature goal of 30 K.
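
    Digital beam forming of this kind amounts to weighting and summing the element signals with phase factors that steer the array response. The following narrowband delay-and-sum sketch is a generic illustration under an assumed half-wavelength element spacing; it is not the ALPACA beamformer design.

      # Generic narrowband delay-and-sum beamforming sketch (illustrative only).
      import numpy as np

      def steering_vector(n_elements, spacing_wavelengths, theta_rad):
          n = np.arange(n_elements)
          return np.exp(2j * np.pi * spacing_wavelengths * n * np.sin(theta_rad))

      def form_beam(element_samples, theta_rad, spacing_wavelengths=0.5):
          """element_samples: (n_elements, n_time) complex baseband snapshots."""
          w = steering_vector(element_samples.shape[0], spacing_wavelengths, theta_rad)
          return (np.conj(w)[:, None] * element_samples).sum(axis=0)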

  20. From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth

    NASA Image and Video Library

    2015-08-05

    This animation still image shows the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth - one million miles away. Credits: NASA/NOAA A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).

  1. RCRA, superfund and EPCRA hotline training module. Introduction to: Hazardous waste identification (40 cfr part 261) updated July 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-07-01

    The module introduces a specific hazardous waste identification process, which involves asking and analyzing a series of questions about any waste being evaluated. It analyzes in detail the Resource Conservation and Recovery Act (RCRA) definition of "hazardous waste." It explains concepts that are essential to identifying a RCRA hazardous waste: hazardous waste listing, hazardous waste characteristics, the "mixture" and "derived-from" rules, the "contained-in" policy, and the hazardous waste identification rules (HWIR).

  2. 30 CFR 46.5 - New miner training.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the recognition and avoidance of electrical hazards and other hazards present at the mine, such as traffic patterns and control, mobile equipment (e.g., haul trucks and front-end loaders), and loose or... aspects of an assigned task in paragraph (b)(4) of this section, if hazard recognition training specific...

  3. 30 CFR 46.5 - New miner training.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the recognition and avoidance of electrical hazards and other hazards present at the mine, such as traffic patterns and control, mobile equipment (e.g., haul trucks and front-end loaders), and loose or... aspects of an assigned task in paragraph (b)(4) of this section, if hazard recognition training specific...

  4. Turbulent transport model of wind shear in thunderstorm gust fronts and warm fronts

    NASA Technical Reports Server (NTRS)

    Lewellen, W. S.; Teske, M. E.; Segur, H. C. O.

    1978-01-01

    A model of turbulent flow in the atmospheric boundary layer was used to simulate the low-level wind and turbulence profiles associated with both local thunderstorm gust fronts and synoptic-scale warm fronts. Dimensional analyses of both types of fronts provided the physical scaling necessary to permit normalized simulations to represent fronts for any temperature jump. The sensitivity of the thunderstorm gust front to five different dimensionless parameters, as well as to a change from axisymmetric to planar geometry, was examined. The sensitivity of the warm front to variations in the Rossby number was examined. Results of the simulations are discussed in terms of the conditions which lead to wind shears that are likely to be most hazardous for aircraft operations.

  5. Martian Eclipses: Deimos and Phobos

    NASA Image and Video Library

    2004-03-08

    The panoramic camera on NASA's Opportunity rover combines the first photographs of solar eclipses by Mars' two moons, Deimos and Phobos. Deimos appears as a speck in front of the Sun, and Phobos grazes its edge.

  6. Comparison of Unmanned Aerial Vehicle Technology Versus Standard Practice in Identification of Hazards at a Mass Casualty Incident Scenario by Primary Care Paramedic Students.

    PubMed

    Jain, Trevor; Sibley, Aaron; Stryhn, Henrik; Hubloue, Ives

    2018-01-31

    Introduction: The proliferation of unmanned aerial vehicles (UAVs) has the potential to change the situational awareness of incident commanders, allowing greater scene safety. The aim of this study was to compare UAV technology with standard practice (SP) for hazard identification during a simulated multi-vehicle motor collision (MVC) in terms of time to identification, accuracy, and the order of hazard identification. A prospective observational cohort study was conducted with 21 students randomized into UAV or SP groups, based on an MVC with 7 hazards. The UAV group remained at the UAV ground station while the SP group approached the scene. After identifying the hazards, the time and order were recorded. The mean times (SD, range) to identify the hazards were 3 minutes 41 seconds (1 minute 37 seconds, 1 minute 48 seconds-6 minutes 51 seconds) in the UAV group and 2 minutes 43 seconds (55 seconds, 1 minute 43 seconds-4 minutes 38 seconds) in the SP group, a mean difference of 58 seconds (P=0.11). A non-parametric permutation test showed a significant (P=0.04) difference in identification order. Both groups had 100% accuracy in hazard identification, with no statistical difference in time for hazard identification. A difference was found in the identification order of hazards. (Disaster Med Public Health Preparedness. 2018; page 1 of 4).
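
    The non-parametric permutation test reported here compares groups by reshuffling labels rather than assuming a distribution. The sketch below is a generic two-sample permutation test on mean times, included for illustration only; it is not the study's analysis code, and the data and iteration count are placeholders.

      # Generic two-sample permutation test on the difference of means (illustrative only).
      import numpy as np

      def permutation_test(a, b, n_perm=10000, seed=0):
          rng = np.random.default_rng(seed)
          observed = np.mean(a) - np.mean(b)
          pooled = np.concatenate([a, b])
          count = 0
          for _ in range(n_perm):
              rng.shuffle(pooled)                      # reassign group labels at random
              diff = np.mean(pooled[:len(a)]) - np.mean(pooled[len(a):])
              if abs(diff) >= abs(observed):
                  count += 1
          return observed, count / n_perm              # observed difference, two-sided p-value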

  7. Front-Line Resilience Perspectives: The Electric Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finster, M.; Phillips, J.; Wallace, K.

    2016-11-01

    This report seeks to summarize how states and local utility companies are approaching all-hazards resilience in planning, construction, operations, and maintenance of the electric system, as well as challenges faced when addressing all-hazards resilience.

  8. The LSST Camera 500 watt -130 degC Mixed Refrigerant Cooling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowden, Gordon B.; Langton, Brian J.; /SLAC

    2014-05-28

    The LSST Camera has a higher cryogenic heat load than previous CCD telescope cameras due to its large size (634 mm diameter focal plane, 3.2 gigapixels) and its close coupled front-end electronics operating at low temperature inside the cryostat. Various refrigeration technologies are considered for this telescope/camera environment. MMR-Technology’s Mixed Refrigerant technology was chosen. A collaboration with that company was started in 2009. The system, based on a cluster of Joule-Thomson refrigerators running a special blend of mixed refrigerants, is described. Both the advantages and problems of applying this technology to telescope camera refrigeration are discussed. Test results from a prototype refrigerator running in a realistic telescope configuration are reported. Current and future stages of the development program are described.

  9. Cross-Correlation-Based Structural System Identification Using Unmanned Aerial Vehicles

    PubMed Central

    Yoon, Hyungchul; Hoskere, Vedhus; Park, Jong-Woong; Spencer, Billie F.

    2017-01-01

    Computer vision techniques have been employed to characterize dynamic properties of structures, as well as to capture structural motion for system identification purposes. All of these methods leverage image-processing techniques using a stationary camera. This requirement makes finding an effective location for camera installation difficult, because civil infrastructure (i.e., bridges, buildings, etc.) are often difficult to access, being constructed over rivers, roads, or other obstacles. This paper seeks to use video from Unmanned Aerial Vehicles (UAVs) to address this problem. As opposed to the traditional way of using stationary cameras, the use of UAVs brings the issue of the camera itself moving; thus, the displacements of the structure obtained by processing UAV video are relative to the UAV camera. Some efforts have been reported to compensate for the camera motion, but they require certain assumptions that may be difficult to satisfy. This paper proposes a new method for structural system identification using the UAV video directly. Several challenges are addressed, including: (1) estimation of an appropriate scale factor; and (2) compensation for the rolling shutter effect. Experimental validation is carried out to validate the proposed approach. The experimental results demonstrate the efficacy and significant potential of the proposed approach. PMID:28891985

  10. Optical designs for the Mars '03 rover cameras

    NASA Astrophysics Data System (ADS)

    Smith, Gregory H.; Hagerott, Edward C.; Scherr, Lawrence M.; Herkenhoff, Kenneth E.; Bell, James F.

    2001-12-01

    In 2003, NASA is planning to send two robotic rover vehicles to explore the surface of Mars. The spacecraft will land on airbags in different, carefully chosen locations. The search for evidence indicating conditions favorable for past or present life will be a high priority. Each rover will carry a total of ten cameras of five various types. There will be a stereo pair of color panoramic cameras, a stereo pair of wide- field navigation cameras, one close-up camera on a movable arm, two stereo pairs of fisheye cameras for hazard avoidance, and one Sun sensor camera. This paper discusses the lenses for these cameras. Included are the specifications, design approaches, expected optical performances, prescriptions, and tolerances.

  11. RCRA/UST, superfund and EPCRA hotline training module. Introduction to: Hazardous waste identification (40 CFR part 261) updated as of July 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-11-01

    This module introduces a specific hazardous waste identification process, which involves asking and analyzing a series of questions about any waste being evaluated. It analyzes in detail the Resource Conservation and Recovery Act (RCRA) definition of hazardous waste. It explains the following concepts that are essential to identifying a RCRA hazardous waste: hazardous waste listing, hazardous waste characteristics, the mixture and derived-from rules, the contained-in policy, and the Hazardous Waste Identification Rule (HWIR).

  12. Ripples in The Soil

    NASA Image and Video Library

    2004-02-10

    This is a three-dimensional stereo anaglyph of an image taken by the front navigation camera onboard NASA Mars Exploration Rover Spirit, showing an interesting patch of rippled soil. 3D glasses are necessary to view this image.

  13. High frequency modal identification on noisy high-speed camera data

    NASA Astrophysics Data System (ADS)

    Javh, Jaka; Slavič, Janko; Boltežar, Miha

    2018-01-01

    Vibration measurements using optical full-field systems based on high-speed footage are typically heavily burdened by noise, as the displacement amplitudes of the vibrating structures are often very small (in the range of micrometers, depending on the structure). The modal information is troublesome to measure as the structure's response is close to, or below, the noise level of the camera-based measurement system. This paper demonstrates modal parameter identification for such noisy measurements. It is shown that by using the Least-Squares Complex-Frequency method combined with the Least-Squares Frequency-Domain method, identification at high frequencies is still possible. By additionally incorporating a more precise sensor to identify the eigenvalues, hybrid accelerometer/high-speed camera mode-shape identification is possible even below the noise floor. An accelerometer measurement is used to identify the eigenvalues, while the camera measurement is used to produce the full-field mode shapes close to 10 kHz. The identified modal parameters improve the quality of the measured modal data and serve as a reduced model of the structure's dynamics.

  14. 30 CFR 46.6 - Newly hired experienced miner training.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Instruction on the recognition and avoidance of electrical hazards and other hazards present at the mine, such as traffic patterns and control, mobile equipment (e.g., haul trucks and front-end loaders), and... health and safety aspects of an assigned task in paragraph (b)(4) of this section, if hazard recognition...

  15. 30 CFR 46.6 - Newly hired experienced miner training.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Instruction on the recognition and avoidance of electrical hazards and other hazards present at the mine, such as traffic patterns and control, mobile equipment (e.g., haul trucks and front-end loaders), and... health and safety aspects of an assigned task in paragraph (b)(4) of this section, if hazard recognition...

  16. Hazardous Materials on Board. Second Edition. Marine Advisory Bulletin No. 43.

    ERIC Educational Resources Information Center

    Hild, Carl

    Intended for boat captains, this illustrated book describes hazards, activities at risk, precautions to take, and procedures for spills. The inside front and back covers provide general rules for treatment of poisonings and emergency phone numbers. Chapter 1 focuses on recognizing the risk and causes of shipboard hazards and describes hazardous…

  17. Monitoring Kilauea Volcano Using Non-Telemetered Time-Lapse Camera Systems

    NASA Astrophysics Data System (ADS)

    Orr, T. R.; Hoblitt, R. P.

    2006-12-01

    Systematic visual observations are an essential component of monitoring volcanic activity. At the Hawaiian Volcano Observatory, the development and deployment of a new generation of high-resolution, non-telemetered, time-lapse camera systems provide periodic visual observations in inaccessible and hazardous environments. The camera systems combine a hand-held digital camera, programmable shutter-release, and other off-the-shelf components in a package that is inexpensive, easy to deploy, and ideal for situations in which the probability of equipment loss due to volcanic activity or theft is substantial. The camera systems have proven invaluable in correlating eruptive activity with deformation and seismic data streams. For example, in late 2005 and much of 2006, Pu`u `O`o, the active vent on Kilauea Volcano's East Rift Zone, experienced 10--20-hour cycles of inflation and deflation that correlated with increases in seismic energy release. A time-lapse camera looking into a skylight above the main lava tube about 1 km south of the vent showed an increase in lava level---an indicator of increased lava flux---during periods of deflation, and a decrease in lava level during periods of inflation. A second time-lapse camera, with a broad view of the upper part of the active flow field, allowed us to correlate the same cyclic tilt and seismicity with lava breakouts from the tube. The breakouts were accompanied by rapid uplift and subsidence of shatter rings over the tube. The shatter rings---concentric rings of broken rock---rose and subsided by as much as 6 m in less than an hour during periods of varying flux. Time-lapse imagery also permits improved assessment of volcanic hazards, and is invaluable in illustrating the hazards to the public. In collaboration with Hawaii Volcanoes National Park, camera systems have been used to monitor the growth of lava deltas at the entry point of lava into the ocean to determine the potential for catastrophic collapse.

  18. Minimum Requirements for Taxicab Security Cameras.

    PubMed

    Zeng, Shengke; Amandus, Harlan E; Amendola, Alfred A; Newbraugh, Bradley H; Cantis, Douglas M; Weaver, Darlene

    2014-07-01

    The homicide rate in the taxicab industry is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab security cameras experienced a significant reduction in taxicab driver homicides. Minimum technical requirements and a standard test protocol for taxicab security cameras for effective facial identification were determined. The study took more than 10,000 photographs of human-face charts in a simulated taxicab with various photographic resolutions, dynamic ranges, lens distortions, and motion blurs under various lighting and cab-seat conditions. Thirteen volunteer photograph evaluators assessed these face photographs and voted on the minimum technical requirements for taxicab security cameras. Five worst-case-scenario photographic image quality thresholds were suggested: XGA-format resolution, a highlight dynamic range of 1 EV, a twilight dynamic range of 3.3 EV, lens distortion of 30%, and a shutter speed of 1/30 second. These minimum requirements will help taxicab regulators and fleets identify effective taxicab security cameras, and help camera manufacturers improve facial identification capability.

  19. 76 FR 55846 - Hazardous Waste Management System: Identification and Listing of Hazardous Waste: Carbon Dioxide...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-09

    ... 2050-AG60 Hazardous Waste Management System: Identification and Listing of Hazardous Waste: Carbon... hazardous waste management under the Resource Conservation and Recovery Act (RCRA) to conditionally exclude... and recordkeeping requirements. 40 CFR Part 261 Environmental protection, Hazardous waste, Solid waste...

  20. Game of thrown bombs in 3D: using high speed cameras and photogrammetry techniques to reconstruct bomb trajectories at Stromboli (Italy)

    NASA Astrophysics Data System (ADS)

    Gaudin, D.; Taddeucci, J.; Scarlato, P.; Del Bello, E.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Kueppers, U.

    2015-12-01

    Large juvenile bombs and lithic clasts, produced and ejected during explosive volcanic eruptions, follow ballistic trajectories. Of particular interest are: 1) the determination of ejection velocity and launch angle, which give insights into shallow conduit conditions and geometry; 2) particle trajectories, with an eye on trajectory evolution caused by collisions between bombs, as well as the interaction between bombs and ash/gas plumes; and 3) the computation of the final emplacement of bomb-sized clasts, which is important for hazard assessment and risk management. Ground-based imagery from a single camera only allows the reconstruction of bomb trajectories in a plane perpendicular to the line of sight, which may lead to underestimation of bomb velocities and does not allow the directionality of the ejections to be studied. To overcome this limitation, we adapted photogrammetry techniques to reconstruct 3D bomb trajectories from two or three synchronized high-speed video cameras. In particular, we modified existing algorithms to consider the errors that may arise from the very high velocity of the particles and the impossibility of measuring tie points close to the scene. Our method was tested during two field campaigns at Stromboli. In 2014, two high-speed cameras with a 500 Hz frame rate and a ~2 cm resolution were set up ~350 m from the crater, 10° apart and synchronized. The experiment was repeated with similar parameters in 2015, but using three high-speed cameras in order to significantly reduce uncertainties and allow their estimation. Trajectory analyses for tens of bombs at various times allowed for the identification of shifts in the mean directivity and dispersal angle of the jets during the explosions. These time evolutions are also visible on the permanent video-camera monitoring system, demonstrating the applicability of our method to all kinds of explosive volcanoes.

  1. The Wide Angle Camera of the ROSETTA Mission

    NASA Astrophysics Data System (ADS)

    Barbieri, C.; Fornasier, S.; Verani, S.; Bertini, I.; Lazzarin, M.; Rampazzi, F.; Cremonese, G.; Ragazzoni, R.; Marzari, F.; Angrilli, F.; Bianchini, G. A.; Debei, S.; Dececco, M.; Guizzo, G.; Parzianello, G.; Ramous, P.; Saggin, B.; Zaccariotto, M.; Da Deppo, V.; Naletto, G.; Nicolosi, G.; Pelizzo, M. G.; Tondello, G.; Brunello, P.; Peron, F.

    This paper aims to give a brief description of the Wide Angle Camera (WAC), built by the Centro Servizi e Attività Spaziali (CISAS) of the University of Padova for the ESA ROSETTA Mission to comet 46P/Wirtanen and asteroids 4979 Otawara and 140 Siwa. The WAC is part of the OSIRIS imaging system, which also comprises a Narrow Angle Camera (NAC) built by the Laboratoire d'Astrophysique Spatiale (LAS) of Marseille. CISAS also had the responsibility to build the shutter and the front cover mechanism for the NAC. The flight model of the WAC was delivered in December 2001 and has already been integrated on ROSETTA.

  2. Emission computerized axial tomography from multiple gamma-camera views using frequency filtering.

    PubMed

    Pelletier, J L; Milan, C; Touzery, C; Coitoux, P; Gailliard, P; Budinger, T F

    1980-01-01

    Emission computerized axial tomography is achievable in any nuclear medicine department from multiple gamma camera views. Data are collected by rotating the patient in front of the camera. A simple fast algorithm is implemented, known as the convolution technique: first the projection data are Fourier transformed and then an original filter designed for optimizing resolution and noise suppression is applied; finally the inverse transform of the latter operation is back-projected. This program, which can also take into account the attenuation for single photon events, was executed with good results on phantoms and patients. We think that it can be easily implemented for specific diagnostic problems.
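
    For orientation, the convolution technique described above can be sketched as: Fourier transform each projection, apply a frequency filter, inverse transform, and back-project. The following is a generic textbook sketch with a plain ramp filter and no attenuation correction; it is an assumed simplification, not the authors' clinical program or their optimized filter.

      # Generic filtered back-projection sketch (textbook form, illustrative only).
      import numpy as np

      def filtered_back_projection(sinogram, angles_deg):
          """sinogram: (n_angles, n_detectors) projection data; returns a square image."""
          n_det = sinogram.shape[1]
          ramp = np.abs(np.fft.fftfreq(n_det))              # simple ramp frequency filter
          xs = np.arange(n_det) - n_det / 2
          X, Y = np.meshgrid(xs, xs)
          recon = np.zeros((n_det, n_det))
          for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
              filtered = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))
              # Detector coordinate of every pixel for this view angle.
              t = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2
              recon += np.interp(t.ravel(), np.arange(n_det), filtered,
                                 left=0.0, right=0.0).reshape(n_det, n_det)
          return recon * np.pi / (2 * len(angles_deg))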

  3. Commander Truly on aft flight deck holding communication kit assembly (ASSY)

    NASA Technical Reports Server (NTRS)

    1983-01-01

    On aft flight deck, Commander Truly holds communication kit assembly (ASSY) headset (HDST) interface unit (HIU) and mini-HDST in front of the onorbit station. HASSELBLAD camera is positioned on overhead window W8.

  4. Radar research on thunderstorms and lightning

    NASA Technical Reports Server (NTRS)

    Rust, W. D.; Doviak, R. J.

    1982-01-01

    Applications of Doppler radar to detection of storm hazards are reviewed. Normal radar sweeps reveal data on reflectivity fields of rain drops, ionized lightning paths, and irregularities in humidity and temperature. Doppler radar permits identification of the targets' speed toward or away from the transmitter through interpretation of the shifts in the microwave frequency. Wind velocity fields can be characterized in three dimensions by the use of two radar units, with a Nyquist limit on the highest wind speeds that may be recorded. Comparisons with models numerically derived from Doppler radar data show substantial agreement in storm formation predictions based on information gathered before the storm. Examples are provided of tornado observations with expanded Nyquist limits, gust fronts, turbulence, lightning and storm structures. Obtaining vertical velocities from reflectivity spectra is discussed.

  5. Identification of Potential Hazard using Hazard Identification and Risk Assessment

    NASA Astrophysics Data System (ADS)

    Sari, R. M.; Syahputri, K.; Rizkya, I.; Siregar, I.

    2017-03-01

    This research was conducted at a paper production company whose paper products are used as cigarette paper. Throughout the production process, the company provides machines and equipment operated by workers. During operations, all workers may potentially be injured; this is known as a potential hazard. Hazard identification and risk assessment is one part of a safety and health program in the risk management stage. It is very important as part of efforts to prevent occupational injuries and diseases resulting from work. The problem addressed by this research is that the potential hazards and risks faced by workers during the production process had not been identified. The purpose of this study was to identify the potential hazards by using hazard identification and risk assessment methods. Risk assessment is done using severity criteria and the probability of an accident. According to the research, there are 23 potential hazards with varying severity and probability. A Risk Assessment Code (RAC) was then determined for each potential hazard, yielding 3 extreme risks, 10 high risks, 6 medium risks, and 3 low risks. We have successfully identified the potential hazards using the RAC.
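
    In RAC-style assessment, each hazard's severity and probability ratings are mapped through a risk matrix to a category. The sketch below uses a hypothetical 4 x 4 matrix purely for illustration; the paper's actual criteria and cut-offs are not reproduced here.

      # Hypothetical severity x probability risk matrix (illustrative only).
      RISK_MATRIX = {
          # (severity, probability) -> risk category; 1 = most severe / most likely
          (1, 1): "extreme", (1, 2): "extreme", (1, 3): "high",   (1, 4): "medium",
          (2, 1): "extreme", (2, 2): "high",    (2, 3): "medium", (2, 4): "low",
          (3, 1): "high",    (3, 2): "medium",  (3, 3): "medium", (3, 4): "low",
          (4, 1): "medium",  (4, 2): "low",     (4, 3): "low",    (4, 4): "low",
      }

      def risk_assessment_code(severity, probability):
          """Map severity (1-4) and probability (1-4) ratings to a risk category."""
          return RISK_MATRIX[(severity, probability)]

      print(risk_assessment_code(1, 2))  # -> "extreme"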

  6. Method for 3D noncontact measurements of cut trees package area

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Vizilter, Yuri V.

    2001-02-01

    Progress in imaging sensors and computers has created the background for numerous 3D imaging applications across a wide variety of manufacturing activities. Many demands for automated precise measurement arise in the wood industry. One of them is accurate volume determination for cut trees carried on a truck. The key point for volume estimation is determination of the front area of the cut-tree package. To eliminate the slow and inaccurate manual measurements now in practice, an experimental system for automated non-contact wood measurement has been developed. The system includes two non-metric CCD video cameras, a PC as the central processing unit, frame grabbers, and original software for image processing and 3D measurement. The proposed method of measurement is based on capturing a stereo pair of the front of the tree package and performing an image orthotransformation into the front plane. This technique allows the transformed image to be processed for circle-shape recognition and calculation of their area. The metric characteristics of the system are provided by a special camera calibration procedure. The paper presents the developed method of 3D measurement, describes the hardware used for image acquisition and the software implementing the developed algorithms, and gives the productivity and precision characteristics of the system.
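
    One plausible way to implement the circle-shape recognition step is a Hough circle transform on the rectified front-plane image, summing the detected log-face areas. The sketch below assumes OpenCV and illustrative parameter values; it is not the authors' original software.

      # Assumed OpenCV-based circle detection and area summation (illustrative only).
      import math
      import cv2
      import numpy as np

      def log_face_area(rectified_gray, metres_per_pixel):
          """rectified_gray: 8-bit single-channel front-plane image; returns total area in m^2."""
          blurred = cv2.medianBlur(rectified_gray, 5)
          circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                                     param1=100, param2=30, minRadius=5, maxRadius=80)
          if circles is None:
              return 0.0
          radii_m = circles[0, :, 2] * metres_per_pixel
          return float(np.sum(math.pi * radii_m ** 2))   # total front area of detected log ends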

  7. Visualization of explosion phenomena using a high-speed video camera with an uncoupled objective lens by fiber-optic

    NASA Astrophysics Data System (ADS)

    Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Kondo, Yasushi

    2008-11-01

    Visualization of explosion phenomena is very important and essential to evaluate the performance of explosive effects. The phenomena, however, generate blast waves and fragments from cases. We must protect our visualizing equipment from any form of impact. In the tests described here, the front lens was separated from the camera head by means of a fiber-optic cable in order to be able to use the camera, a Shimadzu Hypervision HPV-1, for tests in severe blast environments, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to the images taken by the camera with the lens directly coupled to the camera head. It could be confirmed that this system is very useful for the visualization of dangerous events, e.g., at an explosion site, and for visualizations at angles that would be unachievable under normal circumstances.

  8. NECTAr: New electronics for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Vorobiov, S.; Bolmont, J.; Corona, P.; Delagnes, E.; Feinstein, F.; Gascón, D.; Glicenstein, J.-F.; Naumann, C. L.; Nayman, P.; Sanuy, A.; Toussenel, F.; Vincent, P.

    2011-05-01

    The European astroparticle physics community aims to design and build the next generation array of Imaging Atmospheric Cherenkov Telescopes (IACTs), which will benefit from the experience of the existing H.E.S.S. and MAGIC detectors and further expand the very-high-energy astronomy domain. In order to gain an order of magnitude in sensitivity in the 10 GeV to >100 TeV range, the Cherenkov Telescope Array (CTA) will employ 50-100 mirrors of various sizes equipped with 1000-4000 channels per camera, to be compared with the 6000 channels of the final H.E.S.S. array. A 3-year program, started in 2009, aims to build and test a demonstrator module of a generic CTA camera. We present here the NECTAr design of front-end electronics for the CTA, adapted to the trigger and data acquisition of a large IACT array, with simple production and maintenance. Cost and camera performance are optimized by maximizing integration of the front-end electronics (amplifiers, fast analog samplers, ADCs) in an ASIC, achieving several GS/s and a few μs readout dead-time. We present preliminary results and extrapolated performances from Monte Carlo simulations.

  9. A temporal PIV study of flame/obstacle generated vortex interactions within a semi-confined combustion chamber

    NASA Astrophysics Data System (ADS)

    Jarvis, S.; Hargrave, G. K.

    2006-01-01

    Experimental data obtained using a new multiple-camera digital particle image velocimetry (PIV) technique are presented for the interaction between a propagating flame and the turbulent recirculating velocity field generated during flame-solid obstacle interaction. The interaction between the gas movement and the obstacle creates turbulence by vortex shedding and local wake recirculations. The presence of turbulence in a flammable gas mixture can wrinkle a flame front, increasing the flame surface area and enhancing the burning rate. To investigate propagating flame/turbulence interaction, a novel multiple-camera digital PIV technique was used to provide high spatial and temporal characterization of the phenomenon for the turbulent flow field in the wake of three sequential obstacles. The technique allowed the quantification of the local flame speed and local flow velocity. Due to the accelerating nature of the explosion flow field, the wake flows develop 'transient' turbulent fields. Multiple-camera PIV provides data to define the spatial and temporal variation of both the velocity field ahead of the propagating flame and the flame front to aid the understanding of flame-vortex interaction. Experimentally obtained values for flame displacement speed and flame stretch are presented for increasing vortex complexity.

  10. Servo-controlled intravital microscope system

    NASA Technical Reports Server (NTRS)

    Mansour, M. N.; Wayland, H. J.; Chapman, C. P. (Inventor)

    1975-01-01

    A microscope system is described for viewing an area of living body tissue that is moving rapidly, by keeping the same area in the field of view and in focus. A focus-sensing portion of the system includes two video cameras onto which the viewed image is projected, one camera slightly in front of the image plane and the other slightly behind it. A focus-sensing circuit for each camera differentiates certain high-frequency components of the video signal, then detects them and passes them through a low-pass filter to provide a dc focus signal whose magnitude represents the degree of focus. An error signal, equal to the difference between the two focus signals, drives a servo that moves the microscope objective so that an in-focus view is delivered to an image viewing/recording camera.
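    A minimal numerical sketch of the focus-sensing chain described above, assuming digitized video lines from the two cameras and a Python environment with NumPy/SciPy. The sample rate, filter order, cutoff, and gain below are illustrative assumptions, not values from the patent.

    ```python
    # Sketch: build a DC focus level per camera (differentiate, rectify, low-pass),
    # then form the servo error as the difference between the two levels.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def focus_signal(video_line, fs=10e6):
        d = np.diff(np.asarray(video_line, dtype=float))   # differentiate: emphasizes sharp edges
        rectified = np.abs(d)                              # "detect" (rectify)
        b, a = butter(2, 1e3 / (fs / 2))                   # low-pass to a near-DC focus level
        return float(np.mean(filtfilt(b, a, rectified)))

    def servo_error(line_front_cam, line_rear_cam, gain=1.0):
        # Positive error drives the objective one way, negative the other.
        return gain * (focus_signal(line_front_cam) - focus_signal(line_rear_cam))
    ```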

  11. Infrared remote sensing of hazardous vapours: surveillance of public areas during the FIFA Football World Cup 2006

    NASA Astrophysics Data System (ADS)

    Harig, Roland; Matz, Gerhard; Rusch, Peter; Gerhard, Hans-Hennig; Gerhard, Jörn-Hinnrich; Schlabs, Volker

    2007-04-01

    The German ministry of the interior, represented by the civil defence agency BBK, established analytical task forces for the analysis of released chemicals in the case of fires, chemical accidents, terrorist attacks, or war. One of the first assignments of the task forces was the provision of analytical services during the football world cup 2006. One part of the equipment of these emergency response forces is a remote sensing system that allows identification and visualisation of hazardous clouds from long distances, the scanning infrared gas imaging system SIGIS 2. The system is based on an interferometer with a single detector element in combination with a telescope and a synchronised scanning mirror. The system allows 360° surveillance. The system is equipped with a video camera and the results of the analyses of the spectra are displayed by an overlay of a false colour image on the video image. This allows a simple evaluation of the position and the size of a cloud. The system was deployed for surveillance of stadiums and public viewing areas, where large crowds watched the games. Although no intentional or accidental releases of hazardous gases occurred in the stadiums and in the public viewing areas, the systems identified and located various foreign gases in the air.
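    The false-colour overlay described above can be illustrated with a short sketch. This is not the SIGIS 2 software; score_map (a per-pixel identification confidence image derived from the spectral analysis) and the threshold are hypothetical inputs.

    ```python
    # Sketch: overlay a false-colour map of identification scores onto the
    # co-registered video frame so cloud position and extent are visible at a glance.
    import cv2
    import numpy as np

    def overlay_detection(video_frame_bgr, score_map, alpha=0.5, threshold=0.2):
        scores = np.clip(score_map, 0.0, 1.0)
        heat = cv2.applyColorMap((scores * 255).astype(np.uint8), cv2.COLORMAP_JET)
        h, w = video_frame_bgr.shape[:2]
        heat = cv2.resize(heat, (w, h))
        mask = cv2.resize(scores, (w, h)) > threshold
        out = video_frame_bgr.copy()
        out[mask] = cv2.addWeighted(video_frame_bgr, 1 - alpha, heat, alpha, 0)[mask]
        return out
    ```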

  12. Optimizing read-out of the NECTAr front-end electronics

    NASA Astrophysics Data System (ADS)

    Vorobiov, S.; Feinstein, F.; Bolmont, J.; Corona, P.; Delagnes, E.; Falvard, A.; Gascón, D.; Glicenstein, J.-F.; Naumann, C. L.; Nayman, P.; Ribo, M.; Sanuy, A.; Tavernet, J.-P.; Toussenel, F.; Vincent, P.

    2012-12-01

    We describe the optimization of the read-out specifications of the NECTAr front-end electronics for the Cherenkov Telescope Array (CTA). The NECTAr project aims at building and testing a demonstrator module of a new front-end electronics design, which takes advantage of the know-how acquired while building the cameras of the CAT, H.E.S.S.-I and H.E.S.S.-II experiments. The goal of the optimization work is to define the specifications of the digitizing electronics of a CTA camera, in particular the integration time window, sampling rate and analog bandwidth, using physics simulations. For this work we employed real photomultiplier pulses, sampled at 100 ps with a 600 MHz bandwidth oscilloscope. The individual pulses are drawn randomly at the times at which the photo-electrons, originating from atmospheric showers, arrive at the focal planes of imaging atmospheric Cherenkov telescopes. The timing information is extracted from the existing CTA simulations on the GRID and organized in a local database, together with all the relevant physical parameters (energy, primary particle type, zenith angle, distance from the shower axis, pixel offset from the optical axis, night-sky background level, etc.) and detector configurations (telescope types, camera/mirror configurations, etc.). While investigating the parameter space, an optimal pixel charge integration time window, which minimizes the relative error in the measured charge, has been determined. This will make it possible to gain in sensitivity and to lower the energy threshold of CTA telescopes. We present results of our optimizations and first measurements obtained using the NECTAr demonstrator module.
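    A toy sketch of the window-optimization idea, assuming simulated sampled pulse waveforms and known true charges are available; it is not the NECTAr simulation chain, only the scan-and-minimize logic.

    ```python
    # Sketch: scan candidate integration window widths and pick the one that
    # minimizes the relative RMS error of the reconstructed pixel charge.
    import numpy as np

    def best_window(pulses, true_charges, widths, sample_ns=1.0):
        """pulses: (n_events, n_samples) array; widths: candidate window lengths in samples."""
        tc = np.asarray(true_charges, dtype=float)
        errors = []
        for w in widths:
            charges = []
            for p in pulses:
                peak = int(np.argmax(p))                    # centre the window on the pulse peak
                lo = max(0, peak - w // 2)
                charges.append(p[lo:lo + w].sum() * sample_ns)
            rel_err = np.sqrt(np.mean(((np.array(charges) - tc) / tc) ** 2))
            errors.append(rel_err)
        return widths[int(np.argmin(errors))], errors
    ```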

  13. 48 CFR 252.223-7001 - Hazard warning labels.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Hazardous Material Identification and Material Safety Data clause of this contract. (b) The Contractor shall label the item package (unit container) of any hazardous material to be delivered under this contract in... which hazardous material listed in the Hazardous Material Identification and Material Safety Data clause...

  14. 48 CFR 252.223-7001 - Hazard warning labels.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Hazardous Material Identification and Material Safety Data clause of this contract. (b) The Contractor shall label the item package (unit container) of any hazardous material to be delivered under this contract in... which hazardous material listed in the Hazardous Material Identification and Material Safety Data clause...

  15. 48 CFR 252.223-7001 - Hazard warning labels.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Hazardous Material Identification and Material Safety Data clause of this contract. (b) The Contractor shall label the item package (unit container) of any hazardous material to be delivered under this contract in... which hazardous material listed in the Hazardous Material Identification and Material Safety Data clause...

  16. 48 CFR 252.223-7001 - Hazard warning labels.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Hazardous Material Identification and Material Safety Data clause of this contract. (b) The Contractor shall label the item package (unit container) of any hazardous material to be delivered under this contract in... which hazardous material listed in the Hazardous Material Identification and Material Safety Data clause...

  17. 48 CFR 252.223-7001 - Hazard warning labels.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Hazardous Material Identification and Material Safety Data clause of this contract. (b) The Contractor shall label the item package (unit container) of any hazardous material to be delivered under this contract in... which hazardous material listed in the Hazardous Material Identification and Material Safety Data clause...

  18. Standard design for National Ignition Facility x-ray streak and framing cameras.

    PubMed

    Kimbrough, J R; Bell, P M; Bradley, D K; Holder, J P; Kalantar, D K; MacPhee, A G; Telford, S

    2010-10-01

    The x-ray streak camera and x-ray framing camera for the National Ignition Facility were redesigned to improve electromagnetic pulse hardening, protect high voltage circuits from pressure transients, and maximize the use of common parts and operational software. Both instruments use the same PC104 based controller, interface, power supply, charge coupled device camera, protective hermetically sealed housing, and mechanical interfaces. Communication is over fiber optics with identical facility hardware for both instruments. Each has three triggers that can be either fiber optic or coax. High voltage protection consists of a vacuum sensor to enable the high voltage and pulsed microchannel plate phosphor voltage. In the streak camera, the high voltage is removed after the sweep. Both rely on the hardened aluminum box and a custom power supply to reduce electromagnetic pulse/electromagnetic interference (EMP/EMI) getting into the electronics. In addition, the streak camera has an EMP/EMI shield enclosing the front of the streak tube.

  19. 16 CFR Figure 1 to Part 1512 - Bicycle Front Fork Cantilever Bending Test Rig

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Bicycle Front Fork Cantilever Bending Test Rig 1 Figure 1 to Part 1512 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS... Fork Cantilever Bending Test Rig EC03OC91.070 ...

  20. 16 CFR Figure 1 to Part 1512 - Bicycle Front Fork Cantilever Bending Test Rig

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Bicycle Front Fork Cantilever Bending Test Rig 1 Figure 1 to Part 1512 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS... Fork Cantilever Bending Test Rig EC03OC91.070 ...

  1. Video occupant detection and classification

    DOEpatents

    Krumm, John C.

    1999-01-01

    A system determines when it is not safe to arm a vehicle airbag by storing representations of known situations as observed by a camera aimed at a passenger seat, and comparing a representation of the camera output for the current situation against the stored representations to determine which known situation the current one most closely matches. In the preferred embodiment, the stored representations include the presence or absence of a person or infant seat in the front passenger seat of an automobile.
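    A minimal sketch of the comparison step in the abstract, assuming feature vectors have already been extracted from the camera images; the Euclidean nearest-match rule is an illustrative assumption, not the patented representation.

    ```python
    # Sketch: match the current scene representation against stored representations
    # of known seat situations and return the closest label.
    import numpy as np

    def classify_situation(current_features, stored):
        """stored: dict mapping labels such as 'empty', 'adult', 'infant_seat'
        (hypothetical names) to feature vectors of the same length as current_features."""
        best_label, best_dist = None, np.inf
        for label, ref in stored.items():
            d = np.linalg.norm(np.asarray(current_features) - np.asarray(ref))
            if d < best_dist:
                best_label, best_dist = label, d
        return best_label        # the airbag arming decision can be keyed off this label
    ```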

  2. 21 CFR 892.1620 - Cine or spot fluorographic x-ray camera.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Cine or spot fluorographic x-ray camera. 892.1620... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1620 Cine or spot fluorographic x-ray camera. (a) Identification. A cine or spot fluorographic x-ray camera is a device intended to photograph...

  3. 21 CFR 892.1620 - Cine or spot fluorographic x-ray camera.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Cine or spot fluorographic x-ray camera. 892.1620... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1620 Cine or spot fluorographic x-ray camera. (a) Identification. A cine or spot fluorographic x-ray camera is a device intended to photograph...

  4. 21 CFR 892.1620 - Cine or spot fluorographic x-ray camera.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Cine or spot fluorographic x-ray camera. 892.1620... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1620 Cine or spot fluorographic x-ray camera. (a) Identification. A cine or spot fluorographic x-ray camera is a device intended to photograph...

  5. 21 CFR 892.1620 - Cine or spot fluorographic x-ray camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Cine or spot fluorographic x-ray camera. 892.1620... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1620 Cine or spot fluorographic x-ray camera. (a) Identification. A cine or spot fluorographic x-ray camera is a device intended to photograph...

  6. 21 CFR 892.1620 - Cine or spot fluorographic x-ray camera.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Cine or spot fluorographic x-ray camera. 892.1620... (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1620 Cine or spot fluorographic x-ray camera. (a) Identification. A cine or spot fluorographic x-ray camera is a device intended to photograph...

  7. Rock Stripes Pattern in Mars' 'Perseverance Valley'

    NASA Image and Video Library

    2018-02-15

    Textured rows on the ground in this portion of "Perseverance Valley" are under investigation by NASA's Mars Exploration Rover Opportunity, which used its Navigation Camera (Navcam) to take the component images of this downhill-looking scene. The rover took this image on Jan. 4, 2018, during the 4,958th Martian day, or sol, of its work on Mars, looking downhill from a position about one-third of the way down the valley. Perseverance Valley descends the inboard slope of the western rim of Endeavour Crater. A view on the same sol with the rover's front Hazard Avoidance Camera includes ground even closer to the rover at this site. Opportunity was still working close by as it reached the mission's Sol 5,000 (Feb. 16, 2018). In the portion of the valley seen here, soil and gravel have been shaped into a striped pattern in the foreground and partially bury outcrops visible in the midfield. The long dimensions of the stripes are approximately aligned with the downhill direction. The striped pattern resembles a type of feature on Earth (such as on Hawaii's Mauna Kea) that is caused by repeated cycles of freezing and thawing, though other possible origins are also under consideration for the pattern in Perseverance Valley. The view spans from north on the left to east-southeast on the right. For scale, the foreground rock clump in the lower right is about 11 inches (28 centimeters) in width. https://photojournal.jpl.nasa.gov/catalog/PIA22217

  8. Wavefront Sensing With Switched Lenses for Defocus Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    In an alternative hardware design for an apparatus used in image-based wavefront sensing, defocus diversity is introduced by means of fixed lenses that are mounted in a filter wheel (see figure) so that they can be alternately switched into a position in front of the focal plane of an electronic camera recording the image formed by the optical system under test. [The terms image-based, wavefront sensing, and defocus diversity are defined in the first of the three immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] Each lens in the filter wheel is designed so that the optical effect of placing it at the assigned position is equivalent to the optical effect of translating the camera a specified defocus distance along the optical axis. Heretofore, defocus diversity has been obtained by translating the imaging camera along the optical axis to various defocus positions. Because data must be taken at multiple, accurately measured defocus positions, it is necessary to mount the camera on a precise translation stage that must be calibrated for each defocus position and/or to use an optical encoder for measurement and feedback control of the defocus positions. Additional latency is introduced into the wavefront sensing process as the camera is translated to the various defocus positions. Moreover, if the optical system under test has a large focal length, the required defocus values are large, making it necessary to use a correspondingly bulky translation stage. By eliminating the need for translation of the camera, the alternative design simplifies and accelerates the wavefront-sensing process. This design is cost-effective in that the filterwheel/lens mechanism can be built from commercial catalog components. After initial calibration of the defocus value of each lens, a selected defocus value is introduced by simply rotating the filter wheel to place the corresponding lens in front of the camera. The rotation of the wheel can be automated by use of a motor drive, and further calibration is not necessary. Because a camera-translation stage is no longer needed, the size of the overall apparatus can be correspondingly reduced.

  9. Application of infrared uncooled cameras in surveillance systems

    NASA Astrophysics Data System (ADS)

    Dulski, R.; Bareła, J.; Trzaskawka, P.; Piątkowski, T.

    2013-10-01

    The recent necessity to protect military bases, convoys and patrols gave serious impetus to the development of multisensor security systems for perimeter protection. One of the most important devices used in such systems is the IR camera. The paper discusses technical possibilities and limitations of using an uncooled IR camera in a multi-sensor surveillance system for perimeter protection. Effective detection ranges depend on the class of the sensor used and on the observed scene itself. Application of an IR camera increases the probability of intruder detection regardless of the time of day or weather conditions, and simultaneously decreases the false alarm rate produced by the surveillance system. The role of IR cameras in the system is discussed, as well as the technical possibilities of detecting a human being. A comparison of commercially available IR cameras capable of achieving the desired ranges was carried out. The spatial resolution required for detection, recognition and identification was calculated. The simulation of detection ranges was done using a new model for predicting target acquisition performance which uses the Targeting Task Performance (TTP) metric. Like its predecessor, the Johnson criteria, the new model bounds range performance with image quality. The scope of the presented analysis is limited to the estimation of detection, recognition and identification ranges for typical thermal cameras with uncooled microbolometer focal plane arrays. This type of camera is most widely used in security systems because of its competitive price-to-performance ratio. Detection, recognition and identification range calculations were made, and the results for devices with selected technical specifications were compared and discussed.
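    For intuition, a back-of-the-envelope range estimate in the spirit of the Johnson criteria (the paper itself uses the newer TTP metric). The target size, lens focal length, pixel pitch and required cycle counts below are assumed example values, and atmospheric and contrast effects are ignored.

    ```python
    # Sketch: geometric-resolution-limited range at which a target subtends enough
    # resolvable cycles for detection / recognition / identification.
    def acquisition_range_m(target_size_m, focal_length_m, pixel_pitch_m, cycles_required):
        # One cycle needs two pixels; IFOV per pixel = pixel pitch / focal length.
        return target_size_m * focal_length_m / (2.0 * cycles_required * pixel_pitch_m)

    # Example with assumed values: 1.8 m person, 50 mm lens, 17 um microbolometer pixels.
    for task, n50 in [("detection", 0.75), ("recognition", 3.0), ("identification", 6.0)]:
        print(task, round(acquisition_range_m(1.8, 0.050, 17e-6, n50)), "m")
    ```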

  10. Minimum Requirements for Taxicab Security Cameras*

    PubMed Central

    Zeng, Shengke; Amandus, Harlan E.; Amendola, Alfred A.; Newbraugh, Bradley H.; Cantis, Douglas M.; Weaver, Darlene

    2015-01-01

    Problem: The homicide rate of the taxicab industry is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab security cameras experienced a significant reduction in taxicab driver homicides. Methods: Minimum technical requirements and a standard test protocol for taxicab security cameras for effective facial identification in taxicabs were determined. The study took more than 10,000 photographs of human-face charts in a simulated taxicab with various photographic resolutions, dynamic ranges, lens distortions, and motion blurs under various light and cab-seat conditions. Thirteen volunteer photograph evaluators evaluated these face photographs and voted on the minimum technical requirements for taxicab security cameras. Results: Five worst-case-scenario photographic image quality thresholds were suggested: a resolution of XGA format, a highlight dynamic range of 1 EV, a twilight dynamic range of 3.3 EV, a lens distortion of 30%, and a shutter speed of 1/30 second. Practical Applications: These minimum requirements will help taxicab regulators and fleets identify effective taxicab security cameras, and help taxicab security camera manufacturers improve camera facial identification capability. PMID:26823992

  11. Spirit's Course

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [Figures 1 and 2 removed for brevity; see original site.]

    This digital elevation map shows the topography of the 'Columbia Hills,' just in front of the Mars Exploration Rover Spirit's current position. Rover planners have plotted the safest route for Spirit to climb to the front hill, called 'West Spur.' The black line in the middle of the image represents the rover's traverse path, which starts at 'Hank's Hollow' and ends at the top of 'West Spur.' Scientists are sending Spirit up the hill to investigate the interesting rock outcrops visible in images taken by the rover. Data from the Mars Orbital Camera on the orbiting Mars Global Surveyor were used to create this 3-D map.

    In figure 1, the digital map shows the slopes of the 'Columbia Hills,' just in front of the Mars Exploration Rover Spirit's current position. Colors indicate the slopes of the hills, with red areas being the gentlest and blue the steepest. Rover planners have plotted the safest route for Spirit to climb the front hill, called 'West Spur.' The path is indicated here with a curved black line. Stereo images from the Mars Orbital Camera on the orbiting Mars Global Surveyor were used to create this 3-D map.

    In figure 2, the map shows the north-facing slopes of the 'Columbia Hills,' just in front of the Mars Exploration Rover Spirit's current position. Bright areas indicate surfaces sloping more toward the north than dark areas. To reach the rock outcrop at the top of the hill, engineers will aim to drive the rover around the dark areas, which would yield less solar power. The curved black line in the middle represents the rover's planned traverse path.

  12. Mission Specialist (MS) Lenoir cuts Pilot Overmyer's hair on middeck

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Mission Specialist (MS) Lenoir, using hairbrush and scissors, cuts Pilot Overmyer's hair and trims his sideburns in front of forward middeck lockers. Personal hygiene kit (open), towels, and field sequential (FS) crew cabin camera are attached to lockers.

  13. Polarimetric Thermal Imaging

    DTIC Science & Technology

    2007-03-01

    front of a large-area blackbody as background. The viewing angle, defined as the angle between the surface normal and the camera line of sight, was varied by... and polarization angle were derived from the Stokes parameters. The dependence of these polarization characteristics on viewing angle was investigated

  14. A Piece of New Mexico on Mars

    NASA Image and Video Library

    2012-09-10

    This image taken by the MAHLI camera shows a sample of basaltic rock from a lava flow in New Mexico that serves as a calibration target carried on the front of NASA's Mars rover Curiosity for the rover's Canadian-made APXS instrument.

  15. Mini gamma camera, camera system and method of use

    DOEpatents

    Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.

    2001-01-01

    A gamma camera comprising, essentially and in order from the front outer or gamma-ray-impinging surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position-sensitive, high-resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. Also described is a system wherein the output supplied by the high-resolution, position-sensitive photomultiplier tubes is communicated to: a) a digitizer and b) a computer, where it is processed using advanced image processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.
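    The centre-of-gravity calculation mentioned above can be sketched as a simple intensity-weighted centroid over a flagged region; the thresholding used to flag the region is an illustrative assumption, not the patented algorithm.

    ```python
    # Sketch: intensity-weighted centre of gravity of an "abnormal" region
    # in the reconstructed counts image.
    import numpy as np

    def abnormality_centroid(counts, threshold):
        mask = counts > threshold
        if not mask.any():
            return None                          # nothing exceeds the threshold
        ys, xs = np.nonzero(mask)
        weights = counts[mask].astype(float)
        cy = np.average(ys, weights=weights)     # row of the centre of gravity
        cx = np.average(xs, weights=weights)     # column of the centre of gravity
        return cy, cx
    ```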

  16. Dragging Its Foot

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This animation shows the Mars Exploration Rover Opportunity digging a hole in the ground at Meridiani Planum, Mars. The rover scraped its front right wheel back and forth across the surface six times by rotating its whole body in place. At the end of each sweep, the wheel changed the direction it was spinning to shove excess dirt out of the way. The resulting trench is about 50 centimeters (19.7 inches) long by 20 centimeters (7.9 inches) wide by 9 centimeters (3.5 inches) deep.

    The rover's instrument deployment device, or arm, will begin studying the fresh soil at the bottom of the hole later today for clues to its mineral composition and history. Scientists chose this particular site for trenching because previous data taken by the rover's miniature thermal emission spectrometer indicated that it contains crystalline hematite, a mineral that sometimes forms in the presence of water. The brightness of the newly-exposed soil is thought to be either intrinsic to the soil itself, or a reflection of the Sun.

    This movie is composed of images taken by the rover's hazard-avoidance camera.

  17. Airborne multispectral identification of individual cotton plants using consumer-grade cameras

    USDA-ARS?s Scientific Manuscript database

    Although multispectral remote sensing using consumer-grade cameras has successfully identified fields of small cotton plants, improvements to detection sensitivity are needed to identify individual or small clusters of plants. The imaging sensors of consumer-grade cameras are based on a Bayer patter...

  18. Mechanism controller system for the optical spectroscopic and infrared remote imaging system instrument on board the Rosetta space mission

    NASA Astrophysics Data System (ADS)

    Castro Marín, J. M.; Brown, V. J. G.; López Jiménez, A. C.; Rodríguez Gómez, J.; Rodrigo, R.

    2001-05-01

    The optical, spectroscopic, and infrared remote imaging system (OSIRIS) is an instrument carried on board the European Space Agency spacecraft Rosetta that will be launched in January 2003 to study in situ the comet Wirtanen. The electronic design of the mechanism controller board (MCB) system of the two OSIRIS optical cameras, the narrow angle camera and the wide angle camera, is described here. The system comprises two boards mounted on an aluminum frame as part of an electronics box that contains the power supply and the digital processor unit of the instrument. The mechanisms controlled by the MCB for each camera are the front door assembly and a filter wheel assembly. The front door assembly for each camera is driven by a four-phase, permanent magnet stepper motor. Each filter wheel assembly consists of two eight-filter wheels. Each wheel is driven by a four-phase, variable reluctance stepper motor. Each motor, for all the assemblies, also contains a redundant set of four stator phase windings that can be energized separately or in parallel with the main windings. All stepper motors are driven in both directions using the full-step unipolar mode of operation. The MCB also performs general housekeeping data acquisition for the OSIRIS instrument, i.e., mechanism position encoders and temperature measurements. The electronic design is notable for its use of field programmable gate array (FPGA) devices, which avoid the now-traditional approach of a system controlled by microcontrollers and software. Electrical tests of the engineering model have been performed successfully and the system is ready for space qualification after environmental testing. This system may be of interest to institutions involved in future space experiments with similar needs for mechanism control.

  19. Operation and Performance of the Mars Exploration Rover Imaging System on the Martian Surface

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Litwin, Todd; Herkenhoff, Ken

    2005-01-01

    This slide presentation details the Mars Exploration Rover (MER) imaging system. Over 144,000 images have been gathered from all Mars missions, 83.5% of them by MER. Each rover has 9 cameras (Navcam, front and rear Hazcam, Pancam, Microscopic Imager, Descent Camera, Engineering Camera, Science Camera) and produces 1024 x 1024 (1 megapixel) images in the same format. All onboard image processing code is implemented in flight software and includes extensive processing capabilities such as autoexposure, flat-field correction, image orientation, thumbnail generation, subframing, and image compression. Ground image processing is done at the Jet Propulsion Laboratory's Multimission Image Processing Laboratory using Video Image Communication and Retrieval (VICAR), while stereo processing (left/right pairs) provides raw images, radiometric correction, solar energy maps, triangulation (Cartesian 3-space), and slope maps.

  20. Using a smartphone as a tool to measure compensatory and anomalous head positions.

    PubMed

    Farah, Michelle de Lima; Santinello, Murillo; Carvalho, Luis Eduardo Morato Rebouças de; Uesugui, Carlos Fumiaki; Barcellos, Ronaldo Boaventura

    2018-01-01

    To describe a new method for measuring anomalous head positions by using a cell phone. The photo rotation feature of the iPhone® PHOTOS application was used. With the patient seated on a chair, a horizontal stripe was fixed on the wall in the background and a sagittal stripe was fixed on the seat. Photographs were obtained in the following views: front view (photographs A and B; with the head tilted over one shoulder) and upper axial view (photographs C and D; viewing the forehead and nose) (A and C are without camera rotation, and B and D are with camera rotation). A blank sheet of paper with two straight lines making a 32-degree angle was also photographed. Thirty examiners were instructed to measure the rotation required to align the reference points with the orthogonal axes. In order to set benchmarks to be compared with the measurements obtained by the examiners, blue lines were digitally added to the front and upper view photographs. In the photograph of the sheet of paper (p=0.380, α=5%), the observed values did not differ statistically from the known value of 32 degrees. Mean measurements were as follows: front view photograph A, 22.8 ± 2.77; front view B, 21.4 ± 1.61; upper view C, 19.6 ± 2.36; and upper view D, 20.1 ± 2.33 degrees. The mean difference in measurements for front view photograph A was -1.88 (95% CI -2.88 to -0.88), front view B was -0.37 (95% CI -0.97 to 0.17), upper view C was 1.43 (95% CI 0.55 to 2.24), and upper view D was 1.87 (95% CI 1.02 to 2.77). The method used in this study for measuring anomalous head positions is reproducible, with maximum variations for AHPs of 2.88 degrees around the X-axis and 2.77 degrees around the Y-axis.

  1. Methodology for creating dedicated machine and algorithm on sunflower counting

    NASA Astrophysics Data System (ADS)

    Muracciole, Vincent; Plainchault, Patrick; Mannino, Maria-Rosaria; Bertrand, Dominique; Vigouroux, Bertrand

    2007-09-01

    In order to sell grain lots in European countries, seed industries need a government certification. This certification requires purity testing, seed counting in order to quantify specified seed species and other impurities in lots, and germination testing. These analyses are carried out within the framework of international trade according to the methods of the International Seed Testing Association. At present these different analyses are still performed manually by skilled operators. Previous works have already shown that seeds can be characterized by around 110 visual features (morphology, colour, texture), and have presented several identification algorithms. Until now, most of the work in this domain has been computer based. The approach presented in this article is based on the design of a dedicated electronic vision machine aimed at identifying and sorting seeds. This machine is composed of an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor) and a PC bearing the GUI (human-machine interface) of the system. Its operation relies on stroboscopic image acquisition of a seed falling in front of a camera. A first machine was designed according to this approach in order to simulate the whole vision chain (image acquisition, feature extraction, identification) under the Matlab environment. So that this processing could later be ported to dedicated hardware, all the algorithms were developed without the use of the Matlab toolboxes. The objective of this article is to present a design methodology for a special-purpose identification algorithm, based on distances between groups, implemented in a dedicated hardware machine for seed counting.
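    A hedged sketch of a between-group-distance classifier of the kind described, assuming the ~110 visual features have already been extracted per seed; it illustrates the idea only and is not the authors' hardware implementation.

    ```python
    # Sketch: represent each species by the mean of its training feature vectors,
    # then assign a falling seed to the nearest group mean.
    import numpy as np

    def train_group_means(features_by_species):
        """features_by_species: dict species -> (n_samples, n_features) array."""
        return {sp: np.mean(f, axis=0) for sp, f in features_by_species.items()}

    def identify_seed(feature_vector, group_means):
        distances = {sp: np.linalg.norm(feature_vector - mu) for sp, mu in group_means.items()}
        return min(distances, key=distances.get)     # species with the smallest distance
    ```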

  2. 75 FR 58346 - Hazardous Waste Management System; Identification and Listing of Hazardous Waste

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-24

    ... Chemical Company-Texas Operations (Eastman) to exclude (or delist) certain solid wastes generated by its Longview, Texas, facility from the lists of hazardous wastes. EPA used the Delisting Risk Assessment... Waste Management System; Identification and Listing of Hazardous Waste AGENCY: Environmental Protection...

  3. Observations of seating position of front seat occupants relative to the side of the vehicle.

    PubMed

    Dinas, Arthur; Fildes, Brian N

    2002-01-01

    This study was an on-road observational study of the seating position and limb position of front seat occupants relative to the side of the car, for a representative sample of occupants during straight-road driving and turning manoeuvres. A video camera captured over 650 front-on images of passenger car occupants in Metropolitan Melbourne. Results showed that a significant number of occupants were seated out of position while travelling on the road and that a number of these were seated in a manner that could result in injury from the deployment of a side airbag. This was particularly so while turning, a situation common in many side impacts. A substantial number of front seat occupants' arms were exposed to severe injury in the event of a side impact crash. These findings highlight a number of aspects of the seating behaviour of drivers and front passengers that need to be taken into account when designing side impact airbags.

  4. Late afternoon view of the interior of the eastcentral wall ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Late afternoon view of the interior of the east-central wall section to be removed; camera facing north. Stubby crape myrtle in front of wall. Metal Quonset hut in background. - Beaufort National Cemetery, Wall, 1601 Boundary Street, Beaufort, Beaufort County, SC

  5. 40 CFR 241.3 - Standards and procedures for identification of non-hazardous secondary materials that are solid...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... identification of non-hazardous secondary materials that are solid wastes when used as fuels or ingredients in...) SOLID WASTES SOLID WASTES USED AS FUELS OR INGREDIENTS IN COMBUSTION UNITS Identification of Non-Hazardous Secondary Materials That Are Solid Wastes When Used as Fuels or Ingredients in Combustion Units...

  6. Multi-Task Learning with Low Rank Attribute Embedding for Multi-Camera Person Re-Identification.

    PubMed

    Su, Chi; Yang, Fan; Zhang, Shiliang; Tian, Qi; Davis, Larry Steven; Gao, Wen

    2018-05-01

    We propose Multi-Task Learning with Low Rank Attribute Embedding (MTL-LORAE) to address the problem of person re-identification across multiple cameras. Re-identification on different cameras is treated as a set of related tasks, which allows the shared information among the tasks to be exploited to improve re-identification accuracy. The MTL-LORAE framework integrates low-level features with mid-level attributes as the descriptions for persons. To improve the accuracy of such descriptions, we introduce a low-rank attribute embedding, which maps the original binary attributes into a continuous space by utilizing the correlative relationship between each pair of attributes. In this way, inaccurate attributes are rectified and missing attributes are recovered. The resulting objective function is constructed from an attribute embedding error and a quadratic loss on class labels, and is solved by an alternating optimization strategy. The proposed MTL-LORAE is tested on four datasets and is shown to outperform existing methods by significant margins.

  7. Animal Effects from Soviet Atmospheric Nuclear Tests

    DTIC Science & Technology

    2008-03-01

    given to (1) radiological skin injuries, (2) the hazard levels of inhalation, and (3) peroral penetration of radioactive materials. Classification of... wave front became the basic hazard factor. Thus, during the test of November 22, 1955, in the temporizing region located at a distance of 36 km from... disk were particularly hazardous. The analysis of experimental results testifies to the fact that eyeground burns (chorioretinal injuries) are most

  8. Heterogeneous Vision Data Fusion for Independently Moving Cameras

    DTIC Science & Technology

    2010-03-01

    target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image... fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The... moving target detection and classification. Subject terms: Image Fusion, Target Detection, Moving Cameras, IR Camera, EO Camera

  9. Non-front-fanged colubroid snakes: a current evidence-based analysis of medical significance.

    PubMed

    Weinstein, Scott A; White, Julian; Keyler, Daniel E; Warrell, David A

    2013-07-01

    Non-front-fanged colubroid snakes (NFFC; formerly and artificially taxonomically assembled as "colubrids") comprise about 70% of extant snake species and include several taxa now known to cause lethal or life-threatening envenoming in humans. Although the medical risks of bites by only a handful of species have been documented, a growing number of NFFC are implicated in medically significant bites. The majority of these snakes have oral products (Duvernoy's secretions, or venoms) with unknown biomedical properties, and their potential for causing harm in humans is unknown. Increasingly, multiple NFFC species are entering the commercial snake trade, posing an uncertain risk. Published case reports describing NFFC bites were assessed for evidence-based value, clinical detail and verified species identification. These data were subjected to meta-analysis and a hazard index was generated for select taxa. Cases on which we consulted or that we personally treated were included and subjected to the same assessment criteria. Cases involving approximately 120 species met the selection criteria, and a small subset designated Hazard Level 1 (most hazardous) contained 5 species with lethal potential. Recommended management of these cases included antivenom for 3 species, Dispholidus typus, Rhabdophis tigrinus and Rhabdophis subminiatus, whereas others in this subset without commercially available antivenoms (Thelotornis spp.) were treated with plasma/erythrocyte replacement therapy and supportive care. Heparin, antifibrinolytics and/or plasmapheresis/exchange transfusion have been used in the management of some Hazard Level 1 envenomings, but evidence-based analysis positively contraindicates the use of any of these interventions. Hazard Level 2/3 species were involved in cases containing mixed-quality data that implicated these taxa (e.g., Boiga irregularis, Philodryas olfersii, Malpolon monspessulanus) in bites that caused rare systemic effects. Recommended management may include use of acetylcholinesterase inhibitors (e.g., neostigmine) and wound care on a case-by-case basis. Hazard Level 3 species comprised a larger group capable of producing significant local effects only, often associated with a protracted bite (e.g., Heterodon nasicus, Borikenophis (Alsophis) portoricensis, Platyceps (Coluber) rhodorachis); management is restricted to wound care. Bites by Hazard Level 4 species comprised the majority of surveyed taxa and showed only minor effects of no clinical importance. This study has produced a comprehensive evidence-based listing of NFFC snakes tabulated against the medical significance of bites, together with best-practice management recommendations. This analysis assumes increasing importance, as there is growing exposure to lesser-known NFFC snakes, particularly in captive collections, which may uncover further species of significance in the future. Careful and accurate documentation of bites by verified species of NFFC snakes is required to increase the evidence base and establish the best medical management approach for each species. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  10. STS-26 long duration simulation in JSC Mission Control Center (MCC) Bldg 30

    NASA Technical Reports Server (NTRS)

    1988-01-01

    STS-26 long duration simulation is conducted in JSC Mission Control Center (MCC) Bldg 30 Flight Control Room (FCR). Front row of consoles with Propulsion Engineer (PROP) and Guidance, Navigation, and Control Systems Engineer (GNC) are visible in the foreground. CBS television camera personnel record front visual displays (orbital chart and data) for '48 Hours' program to be broadcast at a later date. The integrated simulation involved communicating with crewmembers stationed in the fixed based (FB) shuttle mission simulator (SMS) located in JSC Mission Simulation and Training Facility Bldg 5.

  11. Microwave interferometry technique for obtaining gas interface velocity measurements in an expansion tube facility

    NASA Technical Reports Server (NTRS)

    Laney, C. C., Jr.

    1974-01-01

    A microwave interferometer technique for determining the front interface velocity of a high-enthalpy gas flow is described. The system is designed to excite a standing wave in an expansion tube and to measure the shift in this standing wave as it is moved by the test gas front. Data, in the form of a varying sinusoidal signal, are recorded on a high-speed drum camera-oscilloscope combination. Measurements of average and incremental velocities in excess of 6,000 meters per second were made.
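    The velocity follows from simple fringe counting: each full fringe shift of the standing wave corresponds to the gas front advancing half a wavelength. A minimal sketch with assumed example numbers:

    ```python
    # Sketch: average interface velocity from the number of fringe shifts
    # observed over a time interval.
    def interface_velocity(n_fringes, wavelength_m, delta_t_s):
        """Average velocity over delta_t_s, given n_fringes observed fringe shifts."""
        return n_fringes * (wavelength_m / 2.0) / delta_t_s

    # Assumed example: 40 fringes of a 3 cm microwave signal in 100 microseconds.
    print(interface_velocity(40, 0.03, 100e-6), "m/s")   # -> 6000.0 m/s
    ```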

  12. Food-intake patterns assessed by using front-of-pack labeling program criteria associated with better diet quality and lower cardiometabolic risk

    USDA-ARS?s Scientific Manuscript database

    Front-of-pack labeling systems may provide additional guidance to that already available to facilitate the identification of foods that improve diet quality. We examined the association between choosing foods that meet criteria of an established front-of-pack labeling system with food-group and nutr...

  13. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays and watermarking in images used by the camera manufacturer.
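    A hedged sketch of the pixel-array-noise idea mentioned above, close in spirit to later sensor-pattern-noise (PRNU) methods rather than necessarily the authors' exact procedure; the Gaussian denoiser and correlation measure are illustrative choices.

    ```python
    # Sketch: build a reference noise pattern for a camera from several of its
    # images, then attribute a questioned image by correlating its noise residual
    # with that reference.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(gray_image):
        img = np.asarray(gray_image, dtype=float)
        return img - gaussian_filter(img, sigma=2)      # image minus its denoised version

    def reference_pattern(gray_images):
        return np.mean([noise_residual(g) for g in gray_images], axis=0)

    def correlation(residual, reference):
        a, b = residual.ravel(), reference.ravel()
        return float(np.corrcoef(a, b)[0, 1])           # a high value suggests the same sensor
    ```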

  14. Applications of UAV Photogrammetric Surveys to Natural Hazard Detection and Cultural Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Trizzino, Rosamaria; Caprioli, Mauro; Mazzone, Francesco; Scarano, Mario

    2017-04-01

    Unmanned Aerial Vehicle (UAV) systems are increasingly seen as an attractive low-cost alternative or supplement to aerial and terrestrial photogrammetry due to their low cost, flexibility, availability and readiness for duty. In addition, UAVs can be operated in hazardous or temporarily inaccessible locations. The combination of photogrammetric aerial and terrestrial recording methods using a mini UAV (also known as a "drone") opens a broad range of applications, such as surveillance and monitoring of the environment and infrastructural assets. In particular, these methods and techniques are of paramount interest for the documentation of cultural heritage sites and areas of natural importance facing threats from natural deterioration and hazards. In order to verify the reliability of these technologies, a UAV survey and a LIDAR survey were carried out along about 1 km of coast in the Salento peninsula, near the towns of San Foca, Torre dell'Orso and Sant'Andrea (Lecce, Southern Italy). This area is affected by serious environmental hazards due to the presence of dangerous rocky cliffs named "falesie". The UAV platform was equipped with a photogrammetric measurement system that allowed us to obtain a mobile mapping of the fractured fronts of the dangerous rocky cliffs. The UAV image data were processed using dedicated software (Agisoft PhotoScan). The point clouds obtained from both the UAV and LIDAR surveys were processed using CloudCompare software, with the aim of testing the UAV results against the LIDAR ones. The analyses were done using the C2C algorithm, which provides good results in terms of Euclidean distances, highlighting differences between the 3D models obtained from the two survey techniques. The total error obtained was of centimetre order, which is a very satisfactory result. In the second study area, the opportunities for obtaining more detailed documentation of cultural goods through UAV survey were investigated. The study area is an ancient Aragonese watchtower of the seventeenth century, located near the Abbey of San Vito in the countryside of Polignano a Mare (in the province of Bari, Southern Italy). The survey was carried out with a hexacopter equipped with a Canon EOS 550D. The image processing was carried out with photogrammetric and Structure from Motion software (Agisoft PhotoScan) and, as a result, a cloud of 524,607 points with a 0.010096 m/pix resolution was obtained starting from 330 nadiral and inclined images. In order to verify the suitability of this technique, we also carried out a terrestrial photogrammetric survey using three different photographic media: a reflex camera with integrated GPS, a compact digital camera and a smartphone camera. Three data sets of images were obtained and then compared. In conclusion, it is possible to say that the peculiarity of the RPAS photogrammetric survey allowed highlighting some peculiar features of the tower, such as the presence of a trapdoor and of a chimney at roof level, not detectable with a terrestrial survey, which could provide essential elements for restoration works aimed at the recovery of this cultural heritage.
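    The C2C comparison performed here in CloudCompare can be sketched in a few lines, assuming both clouds are available as arrays of XYZ points; this illustrates the nearest-neighbour distance idea only, not the CloudCompare implementation.

    ```python
    # Sketch: for each UAV point, the Euclidean distance to its nearest neighbour
    # in the LIDAR cloud; summary statistics describe the agreement of the surveys.
    import numpy as np
    from scipy.spatial import cKDTree

    def cloud_to_cloud_distances(uav_points, lidar_points):
        """Both arguments are (N, 3) arrays of XYZ coordinates."""
        tree = cKDTree(lidar_points)
        distances, _ = tree.query(uav_points, k=1)
        return distances

    # e.g. np.mean(d), np.std(d), np.percentile(d, 95) summarize the total error.
    ```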

  15. STS-105, Expeditions Two and Three crew portrait in the ISS U.S. Laboratory/Destiny

    NASA Image and Video Library

    2001-08-17

    STS105-E-5326 (17 August 2001) --- The Expedition Three (white shirts), STS-105 (striped shirts), and Expedition Two (red shirts) crews assemble for a press conference in the U.S. Laboratory. The Expedition Three crew members are, from front to back, Frank L. Culbertson, mission commander; and cosmonauts Vladimir N. Dezhurov and Mikhail Tyurin, flight engineers; STS-105 crewmembers are, front row, Patrick G. Forrester and Daniel T. Barry, mission specialists, and back row, Scott J. Horowitz, commander, and Frederick W. (Rick) Sturckow, pilot; Expedition Two crewmembers are, from front to back, cosmonaut Yury V. Usachev, mission commander, and James S. Voss and Susan J. Helms, flight engineers. This image was taken with a digital still camera.

  16. PBF Reactor Building (PER620) under construction. Aerial view with camera ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    PBF Reactor Building (PER-620) under construction. Aerial view with camera facing northeast. Steel framework is exposed for west wing and high bay. Concrete block siding on east wing. Railroad crane set up on west side. Note trenches proceeding from front of building. Left trench is for secondary coolant and will lead to Cooling Tower. Shorter trench will contain cables leading to control area. Photographer: Larry Page. Date: March 22, 1967. INEEL negative no. 67-5025 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID

  17. ePix100 camera: Use and applications at LCLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carini, G. A., E-mail: carini@slac.stanford.edu; Alonso-Mori, R.; Blaj, G.

    2016-07-27

    The ePix100 x-ray camera is a new system designed and built at SLAC for experiments at the Linac Coherent Light Source (LCLS). The camera is the first member of a family of detectors built around a single hardware and software platform, supporting a variety of front-end chips. With a readout speed of 120 Hz, matching the LCLS repetition rate, a noise lower than 80 e- rms and pixels of 50 µm × 50 µm, this camera offers a viable alternative to fast readout, direct conversion, scientific CCDs in imaging mode. The detector, designed for applications such as X-ray Photon Correlation Spectroscopy (XPCS) and wavelength dispersive X-ray Emission Spectroscopy (XES) in the energy range from 2 to 10 keV and above, comprises up to 0.5 Mpixels in a very compact form factor. In this paper, we report the performance of the camera during its first use at LCLS.

  18. A four-lens based plenoptic camera for depth measurements

    NASA Astrophysics Data System (ADS)

    Riou, Cécile; Deng, Zhiyuan; Colicchio, Bruno; Lauffenburger, Jean-Philippe; Kohler, Sophie; Haeberlé, Olivier; Cudel, Christophe

    2015-04-01

    In previous works, we extended the principles of "variable homography", defined by Zhang and Greenspan, to measuring the height of emergent fibers on glass and non-woven fabrics. That method was defined for fabric samples progressing on a conveyor belt, and triggered acquisition of two successive images was needed to perform the 3D measurement. In this work, we have retained the advantages of variable homography for measurements along the Z axis, but we have reduced the number of acquisitions to a single one by developing an acquisition device characterized by four lenses placed in front of a single image sensor. The idea is to obtain four projected sub-images on a single CCD sensor. The device then becomes a plenoptic or light-field camera, capturing multiple views on the same image sensor. We have adapted the variable homography formulation for this device and we propose a new formulation to calculate depth with plenoptic cameras. With these results, we have transformed our plenoptic camera into a depth camera, and the first results obtained are very promising.
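    For intuition only, the standard two-view depth-from-disparity relation that any pair of the four sub-images obeys; this is not the authors' variable-homography formulation, and the baseline and focal length are assumed known from calibration.

    ```python
    # Sketch: depth of a matched point from the disparity between two sub-images
    # separated by baseline b, sharing focal length f (pinhole-stereo relation).
    def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
        """Returns depth along the optical axis in metres; disparity is in pixels."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a point in front of the camera")
        return focal_length_px * baseline_m / disparity_px
    ```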

  19. Watching elderly and disabled person's physical condition by remotely controlled monorail robot

    NASA Astrophysics Data System (ADS)

    Nagasaka, Yasunori; Matsumoto, Yoshinori; Fukaya, Yasutoshi; Takahashi, Tomoichi; Takeshita, Toru

    2001-10-01

    We are developing a nursing-care support system using robots and cameras. The cameras are mounted on a remote-controlled monorail robot which moves inside a room and watches the elderly. Elderly people at home or in nursing homes require attention at all times, which places a constant demand on care staff; the purpose of our system is to help those staff members. A host computer controls the monorail robot so that it moves in front of the elderly person, using the images taken by cameras on the ceiling. A CCD camera mounted on the monorail robot takes pictures of the person's facial expression and movements. The robot sends the images to a host computer that checks whether anything unusual has happened. We propose a simple calibration method for positioning the monorail robot to track the movements of the elderly and keep their faces at the center of the camera view. We built a small experimental system and evaluated our camera calibration method and image processing algorithm.
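    A minimal sketch of one way to keep a detected face centred in the onboard camera view, assuming a face detector already supplies the face position; the proportional gain and deadband are illustrative assumptions, not values from the paper.

    ```python
    # Sketch: proportional command along the monorail to re-centre the face.
    def monorail_command(face_center_x, image_width, gain=0.002, deadband_px=20):
        """Returns a signed speed command; positive moves the robot one way along the rail."""
        error_px = face_center_x - image_width / 2.0
        if abs(error_px) < deadband_px:
            return 0.0                       # close enough to centred; hold position
        return gain * error_px
    ```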

  20. Bifocal Stereo for Multipath Person Re-Identification

    NASA Astrophysics Data System (ADS)

    Blott, G.; Heipke, C.

    2017-11-01

    This work presents an approach to the task of person re-identification that exploits bifocal stereo cameras. Present monocular person re-identification approaches show a decreasing working distance when the image resolution is increased to obtain higher re-identification performance. We propose a novel 3D multipath bifocal approach, containing a rectilinear lens with a larger focal length for long-range distances and a fish-eye lens with a smaller focal length for the near range. The person re-identification performance is at least on par with 2D re-identification approaches, but the working distance of the approach is increased, and on average 10% higher re-identification performance can be achieved in the overlapping field of view compared with a single camera. In addition, the 3D information from the overlapping field of view is exploited to resolve potential 2D ambiguities.

  1. Dawn First Glimpse of Vesta -- Processed

    NASA Image and Video Library

    2011-05-11

    This image, processed to show the true size of the giant asteroid Vesta, shows Vesta in front of a spectacular background of stars. It was obtained by the framing camera aboard NASA Dawn spacecraft on May 3, 2011, from a distance of about 750,000 miles.

  2. The hazards of hazard identification in environmental epidemiology.

    PubMed

    Saracci, Rodolfo

    2017-08-09

    Hazard identification is a major scientific challenge, notably for environmental epidemiology, and is often surrounded, as the recent case of glyphosate shows, by debate arising in the first place from the inherently problematic nature of many components of the identification process. Particularly relevant in this respect are components less amenable to logical or mathematical formalization and essentially dependent on scientists' judgment. Four such potentially hazardous components that are capable of distorting the correct process of hazard identification are reviewed and discussed from an epidemiologist's perspective: (1) lexical mix-up of hazard and risk; (2) scientific questions as distinct from testable hypotheses, and the implications for the hierarchy of strength of evidence obtainable from different types of study designs; (3) assumptions in prior beliefs and model choices; and (4) conflicts of interest. Four suggestions are put forward to strengthen a process that remains in several respects judgmental, but not arbitrary, in nature.

  3. Markerless client-server augmented reality system with natural features

    NASA Astrophysics Data System (ADS)

    Ning, Shuangning; Sang, Xinzhu; Chen, Duo

    2017-10-01

    A markerless client-server augmented reality system is presented. In this research, the more widespread and mature virtual reality head-mounted display is adopted to assist the implementation of augmented reality: the head-mounted display places an image directly in front of the viewer's eyes. The front-facing camera captures video and feeds it to the workstation, where the generated virtual scene is merged with the outside-world information received from the camera, and the integrated video is sent back to the helmet display system. The distinguishing feature and novelty is that the augmented reality is realized with natural features instead of a marker, which addresses the limitations of markers: they are restricted to black and white, are unsuitable for some environmental conditions, and in particular stop working when the marker is partially occluded. Further, 3D stereoscopic perception of the virtual animation model is achieved. A high-speed and stable native socket communication method is adopted for transmission of the key video stream data, which reduces the computational burden of the system.
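    A hedged sketch of the kind of natural-feature matching such a system can rely on, using ORB features and brute-force Hamming matching from OpenCV; the full client-server pipeline, pose estimation and rendering are not reproduced here.

    ```python
    # Sketch: match natural features between a reference view and the current
    # camera frame; the best matches can then feed homography / pose estimation.
    import cv2

    def match_natural_features(reference_gray, frame_gray, max_matches=50):
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(reference_gray, None)
        kp2, des2 = orb.detectAndCompute(frame_gray, None)
        if des1 is None or des2 is None:
            return []                                   # not enough texture to match
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        return matches[:max_matches]
    ```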

  4. STS-31 MS Sullivan and Pilot Bolden monitor SE 82-16 Ion Arc on OV-103 middeck

    NASA Technical Reports Server (NTRS)

    1990-01-01

    STS-31 Mission Specialist (MS) Kathryn D. Sullivan monitors and advises ground controllers of the activity inside the Student Experiment (SE) 82-16, Ion arc - studies of the effects of microgravity and a magnetic field on an electric arc, mounted in front of the middeck lockers aboard Discovery, Orbiter Vehicle (OV) 103. Pilot Charles F. Bolden uses a video camera and an ARRIFLEX motion picture camera to record the activity inside the special chamber. A sign in front of the experiment reads 'SSIP 82-16 Greg's Experiment Happy Graduation from STS-31.' SSIP stands for Shuttle Student Involvement Program. Gregory S. Peterson who developed the experiment (Greg's Experiment) is a student at Utah State University and monitored the experiment's operation from JSC's Mission Control Center (MCC) during the flight. Decals displayed in the background on the orbiter galley represent the Hubble Space Telescope (HST), the United States (U.S.) Naval Reserve, Navy Oceanographers, U.S. Navy, and Univer

  5. Image acquisition device of inspection robot based on adaptive rotation regulation of polarizer

    NASA Astrophysics Data System (ADS)

    Dong, Maoqi; Wang, Xingguang; Liang, Tao; Yang, Guoqing; Zhang, Chuangyou; Gao, Faqin

    2017-12-01

    An image acquisition device for an inspection robot with adaptive polarization adjustment is proposed. The device comprises the inspection robot body, the image collecting mechanism, the polarizer, and an automatic polarizer actuating device. The image acquisition mechanism is mounted at the front of the inspection robot body and collects image data of equipment in the substation. The polarizer is fixed on the automatic actuating device and installed in front of the image acquisition mechanism, so that the optical axis of the camera passes perpendicularly through the polarizer and the polarizer rotates about the optical axis of the visible camera as its central axis. Simulation results show that the system resolves the image blurring caused by glare, reflections, and shadow, allowing the robot to observe details of the running status of electrical equipment. Full coverage of the inspection robot's observation targets in the substation is achieved, which helps ensure the safe operation of the substation equipment.
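
    The abstract does not spell out the adaptive control law; one simple realization, sketched below, is to sweep the polarizer, score each captured frame for glare, and keep the angle with the lowest score. The saturation threshold, angle step, and the capture_at_angle interface are hypothetical.

      import numpy as np

      def glare_score(image_gray: np.ndarray, saturation_level: int = 250) -> float:
          """Fraction of near-saturated pixels; a crude proxy for glare."""
          return float(np.mean(image_gray >= saturation_level))

      def choose_polarizer_angle(capture_at_angle, angles_deg=range(0, 180, 10)):
          """Sweep the polarizer, score each frame, and return the rotation
          angle with the least glare. `capture_at_angle(angle)` stands in for
          the robot's actuator-plus-camera interface."""
          scored = [(glare_score(capture_at_angle(a)), a) for a in angles_deg]
          return min(scored)[1]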

  6. Opportunity Rolls Free Again (Left Front Wheel)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This animated piece illustrates the recent escape of NASA's Mars Exploration Rover Opportunity from dangerous, loose material on the vast plains leading to the rover's next long-term target, 'Victoria Crater.'

    A series of images of the rover's left front wheel, taken by the front hazard-avoidance camera, make up this brief movie. It chronicles the challenge Opportunity faced to free itself from a ripple dubbed 'Jammerbugt.' The rover's wheels became partially embedded in the ripple at the end of a drive on Opportunity's 833rd Martian day, or sol (May 28, 2006). The images in this clip were taken on sols 836 through 841 (May 31 through June 5, 2006).

    Scientists and engineers who had been elated at the meters of progress the rover had been making in earlier drives were happy for even centimeters of advance per sol as they maneuvered their explorer through the slippery material of Jammerbugt. The wheels reached solid footing on a rock outcrop on the final sol of this sequence.

    The science and engineering teams appropriately chose the ripple's informal name from the name of a bay on the north coast of Denmark. Jammerbugt, or Jammerbugten, loosely translated, means Bay of Lamentation or Bay of Wailing. The shipping route from the North Sea to the Baltic passes Jammerbugt on its way around the northern tip of Jutland. This has always been an important trade route and many ships still pass by the bay. The prevailing wind directions are typically northwest to southwest with the strongest winds and storms tending to blow from the northwest. A northwesterly wind will blow straight into the Jammerbugt, towards shore. Therefore, in the age of sail, many ships sank there during storms. The shore is sandy, but can have strong waves, so running aground was very dangerous even though there are no rocks.

    Fortunately, Opportunity weathered its 'Jammerbugt' and is again on its way toward Victoria Crater.

  7. Opportunity Rolls Free Again (Right Front Wheel)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This animated piece illustrates the recent escape of NASA's Mars Exploration Rover Opportunity from dangerous, loose material on the vast plains leading to the rover's next long-term target, 'Victoria Crater.'

    A series of images of the rover's right front wheel, taken by the front hazard-avoidance camera, make up this brief movie. It chronicles the challenge Opportunity faced to free itself from a ripple dubbed 'Jammerbugt.' The rover's wheels became partially embedded in the ripple at the end of a drive on Opportunity's 833rd Martian day, or sol (May 28, 2006). The images in this clip were taken on sols 836 through 841 (May 31 through June 5, 2006).

    Scientists and engineers who had been elated at the meters of progress the rover had been making in earlier drives were happy for even centimeters of advance per sol as they maneuvered their explorer through the slippery material of Jammerbugt. The wheels reached solid footing on a rock outcrop on the final sol of this sequence.

    The science and engineering teams appropriately chose the ripple's informal name from the name of a bay on the north coast of Denmark. Jammerbugt, or Jammerbugten, loosely translated, means Bay of Lamentation or Bay of Wailing. The shipping route from the North Sea to the Baltic passes Jammerbugt on its way around the northern tip of Jutland. This has always been an important trade route and many ships still pass by the bay. The prevailing wind directions are typically northwest to southwest with the strongest winds and storms tending to blow from the northwest. A northwesterly wind will blow straight into the Jammerbugt, towards shore. Therefore, in the age of sail, many ships sank there during storms. The shore is sandy, but can have strong waves, so running aground was very dangerous even though there are no rocks.

    Fortunately, Opportunity weathered its 'Jammerbugt' and is again on its way toward Victoria Crater.

  8. Inundation Mapping and Hazard Assessment of Tectonic and Landslide Tsunamis in Southeast Alaska

    NASA Astrophysics Data System (ADS)

    Suleimani, E.; Nicolsky, D.; Koehler, R. D., III

    2014-12-01

    The Alaska Earthquake Center conducts tsunami inundation mapping for coastal communities in Alaska, and is currently focused on the southeastern region and communities of Yakutat, Elfin Cove, Gustavus and Hoonah. This activity provides local emergency officials with tsunami hazard assessment, planning, and mitigation tools. At-risk communities are distributed along several segments of the Alaska coastline, each having a unique seismic history and potential tsunami hazard. Thus, a critical component of our project is accurate identification and characterization of potential tectonic and landslide tsunami sources. The primary tectonic element of Southeast Alaska is the Fairweather - Queen Charlotte fault system, which has ruptured in 5 large strike-slip earthquakes in the past 100 years. The 1958 "Lituya Bay" earthquake triggered a large landslide into Lituya Bay that generated a 540-m-high wave. The M7.7 Haida Gwaii earthquake of October 28, 2012 occurred along the same fault, but was associated with dominantly vertical motion, generating a local tsunami. Communities in Southeast Alaska are also vulnerable to hazards related to locally generated waves, due to proximity of communities to landslide-prone fjords and frequent earthquakes. The primary mechanisms for local tsunami generation are failure of steep rock slopes due to relaxation of internal stresses after deglaciation, and failure of thick unconsolidated sediments accumulated on underwater delta fronts at river mouths. We numerically model potential tsunami waves and inundation extent that may result from future hypothetical far- and near-field earthquakes and landslides. We perform simulations for each source scenario using the Alaska Tsunami Model, which is validated through a set of analytical benchmarks and tested against laboratory and field data. Results of numerical modeling combined with historical observations are compiled on inundation maps and used for site-specific tsunami hazard assessment by emergency planners.

  9. 75 FR 60689 - Hazardous Waste Management System; Identification and Listing of Hazardous Waste; Proposed Rule

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-01

    ... exclude (or delist) a certain solid waste generated by its Beaumont, Texas, facility from the lists of hazardous wastes. EPA used the Delisting Risk Assessment Software (DRAS) Version 3.0 in the evaluation of... Waste Management System; Identification and Listing of Hazardous Waste; Proposed Rule AGENCY...

  10. CCD imaging system for the EUV solar telescope

    NASA Astrophysics Data System (ADS)

    Gong, Yan; Song, Qian; Ye, Bing-Xun

    2006-01-01

    In order to develop a detector adapted to the space solar telescope, we have built a CCD camera system capable of working in the extreme ultraviolet (EUV) band. It is composed of a phosphor screen, an intensifier system using a photocathode/micro-channel plate (MCP)/phosphor assembly, an optical taper, and a front-illuminated (FI) CCD chip without a screen window, all bonded together with optical glue. The working principle of the camera system is presented; moreover, we have employed a mesh experiment to calibrate and test the CCD camera system in the 15-24 nm range, and a position resolution of about 19 μm is obtained at wavelengths of 17.1 nm and 19.5 nm.

  11. Dual-camera design for coded aperture snapshot spectral imaging.

    PubMed

    Wang, Lizhi; Xiong, Zhiwei; Gao, Dahua; Shi, Guangming; Wu, Feng

    2015-02-01

    Coded aperture snapshot spectral imaging (CASSI) provides an efficient mechanism for recovering 3D spectral data from a single 2D measurement. However, since the reconstruction problem is severely underdetermined, the quality of recovered spectral data is usually limited. In this paper we propose a novel dual-camera design to improve the performance of CASSI while maintaining its snapshot advantage. Specifically, a beam splitter is placed in front of the objective lens of CASSI, which allows the same scene to be simultaneously captured by a grayscale camera. This uncoded grayscale measurement, in conjunction with the coded CASSI measurement, greatly eases the reconstruction problem and yields high-quality 3D spectral data. Both simulation and experimental results demonstrate the effectiveness of the proposed method.
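
    To make the role of the second measurement concrete, the toy sketch below recovers a flattened spectral vector from a coded measurement y1 = Hx together with an uncoded grayscale measurement y2 = Sx by joint regularized least squares. The operators, sizes, and regularization weight are illustrative stand-ins, not the reconstruction algorithm used in the paper.

      import numpy as np

      # Toy-scale stand-in: x is a flattened spectral cube, H a coded-aperture
      # plus dispersion operator, S a per-pixel sum over bands (grayscale camera).
      rng = np.random.default_rng(0)
      n_vox = 60
      x_true = rng.random(n_vox)
      H = rng.integers(0, 2, size=(25, n_vox)).astype(float)
      S = np.repeat(np.eye(20), 3, axis=1)        # each pixel sums 3 bands
      y1, y2 = H @ x_true, S @ x_true

      # Joint least squares over both measurements with a small Tikhonov term.
      A = np.vstack([H, S, 0.1 * np.eye(n_vox)])
      b = np.concatenate([y1, y2, np.zeros(n_vox)])
      x_hat = np.linalg.lstsq(A, b, rcond=None)[0]
      print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))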

  12. Evaluation of an airborne remote sensing platform consisting of two consumer-grade cameras for crop identification

    USDA-ARS?s Scientific Manuscript database

    Remote sensing systems based on consumer-grade cameras have been increasingly used in scientific research and remote sensing applications because of their low cost and ease of use. However, the performance of consumer-grade cameras for practical applications has not been well documented in related ...

  13. Researches on hazard avoidance cameras calibration of Lunar Rover

    NASA Astrophysics Data System (ADS)

    Li, Chunyan; Wang, Li; Lu, Xin; Chen, Jihua; Fan, Shenghong

    2017-11-01

    China's Lunar Lander and Rover will be launched in 2013 to carry out the mission objectives of lunar soft landing and patrol exploration. The Lunar Rover has a forward-facing stereo camera pair (Hazcams) for hazard avoidance, and Hazcam calibration is essential for stereo vision. The Hazcam optics are f-theta fish-eye lenses with a 120°×120° horizontal/vertical field of view (FOV) and a 170° diagonal FOV. They introduce significant distortion and the acquired images are quite warped, so conventional camera calibration algorithms no longer work well. A photogrammetric calibration method for the geometric model of this type of fish-eye optical construction is investigated in this paper. In the method, the Hazcam model is represented by collinearity equations with interior orientation and exterior orientation parameters [1] [2]. For high-precision applications, the calibration model is formulated with the radial symmetric distortion and the decentering distortion, as well as parameters to model affinity and shear, based on the fisheye deformation model [3] [4]. The proposed method has been applied to the stereo camera calibration system for the Lunar Rover.
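
    For reference, an equidistant (f-theta) projection with a simple radial-distortion polynomial, the kind of forward model such a calibration estimates, can be written in a few lines. The focal length, principal point, and distortion coefficients below are illustrative, not the Hazcam values, and the decentering and affinity terms mentioned in the abstract are omitted.

      import numpy as np

      def ftheta_project(point_cam, f_px, cx, cy, k1=0.0, k2=0.0):
          """Project a 3D point given in the camera frame with an equidistant
          (f-theta) fisheye model, r = f * theta, perturbed by a radial
          polynomial in theta."""
          x, y, z = point_cam
          theta = np.arctan2(np.hypot(x, y), z)      # angle from the optical axis
          theta_d = theta * (1.0 + k1 * theta**2 + k2 * theta**4)
          phi = np.arctan2(y, x)
          r = f_px * theta_d
          return cx + r * np.cos(phi), cy + r * np.sin(phi)

      # A point 60 degrees off-axis still lands on the sensor of a 120-degree FOV lens.
      print(ftheta_project((1.0, 0.0, 1.0 / np.tan(np.radians(60))),
                           f_px=300.0, cx=512.0, cy=512.0))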

  14. Cranz-Schardin camera with a large working distance for the observation of small scale high-speed flows.

    PubMed

    Skupsch, C; Chaves, H; Brücker, C

    2011-08-01

    The Cranz-Schardin camera utilizes a Q-switched Nd:YAG laser and four single CCD cameras. The laser provides light pulses with energies in the range of 25 mJ and durations of about 5 ns. The laser light is converted to incoherent light by Rhodamine-B fluorescence dye in a cuvette; the beam coherence is intentionally broken in order to avoid speckle. Four light fibers collect the fluorescence light and are used for illumination, and different fiber lengths provide a delay of illumination between consecutive images. The chosen interframe time is 25 ns, corresponding to 40 × 10⁶ frames per second. As an example, the camera is applied to observe the bow shock in front of a water jet propagating in air at supersonic speed, and the initial phase of the formation of the jet structure is recorded.

  15. The multifocus plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Lumsdaine, Andrew

    2012-01-01

    The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For digital refocusing (one of the important applications) the depth of field needs to be large, but there are fundamental optical limitations to this. The solution to this problem is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a much wider range of digital refocusing can be achieved. This paper presents our theory and the results of implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.

  16. High Resolution Photogrammetric Digital Elevation Models Across Calving Fronts and Meltwater Channels in Greenland

    NASA Astrophysics Data System (ADS)

    Le Bel, D. A.; Brown, S.; Zappa, C. J.; Bell, R. E.; Frearson, N.; Tinto, K. J.

    2014-12-01

    Photogrammetric digital elevation models (DEMs) are a powerful approach for understanding elevation change and dynamics along the margins of the large ice sheets. The IcePod system, mounted on a New York Air National Guard LC-130, can measure high-resolution surface elevations with a Riegl VQ580 scanning laser altimeter and Imperx Bobcat IGV-B6620 color visible-wavelength camera (6600x4400 resolution); the surface temperature with a Sofradir IRE-640L infrared camera (spectral response 7.7-9.5 μm, 640x512 resolution); and the structure of snow and ice with two radar systems. We show the use of IcePod imagery to develop DEMs across calving fronts and meltwater channels in Greenland. Multiple over-flights of the Kangerlussaq Airport ramp have provided a test of the technique at a location with accurate, independently-determined elevation. Here the photogrammetric DEM of the airport, constrained by ground control measurements, is compared with the Lidar results. In July 2014 the IcePod ice-ocean imaging system surveyed the calving fronts of five outlet glaciers north of Jakobshavn Isbrae. We used Agisoft PhotoScan to develop a DEM of each calving front using imagery captured by the IcePod systems. Adjacent to the ice sheet, meltwater plumes foster mixing in the fjord, moving warm ocean water into contact with the front of the ice sheet where it can undercut the ice front and trigger calving. The five glaciers provide an opportunity to examine the calving front structure in relation to ocean temperature, fjord circulation, and spatial scale of the meltwater plumes. The combination of the accurate DEM of the calving front and the thermal imagery used to constrain the temperature and dynamics of the adjacent plume provides new insights into the ice-ocean interactions. Ice sheet margins provide insights into the connections between the surface meltwater and the fate of the water at the ice sheet base. Surface meltwater channels are visualized here for the first time using the combination of Lidar, photogrammetry DEMs and infrared imagery. These techniques leverage electromagnetic surface properties that allow us to identify the presence of water, measure the slope and elevation of the channel, as well as the two-dimensional temperature variability of the water/ice/snow in multiple melt channels within a drainage system.

  17. Human-Robot Interaction

    NASA Technical Reports Server (NTRS)

    Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee

    2015-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affects the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems.

    For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera, causing a keyhole effect. The keyhole effect reduces situation awareness, which may manifest as navigation issues such as a higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot.

    Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects of including or excluding the robot chassis, along with superimposing a simple arrow overlay onto the video feed, on operator task performance during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only and a combined (egocentric plus exocentric camera) view. Camera view parameters that are found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.

  18. The effects of microstructure on propagation of laser-driven radiative heat waves in under-dense high-Z plasma

    NASA Astrophysics Data System (ADS)

    Colvin, J. D.; Matsukuma, H.; Brown, K. C.; Davis, J. F.; Kemp, G. E.; Koga, K.; Tanaka, N.; Yogo, A.; Zhang, Z.; Nishimura, H.; Fournier, K. B.

    2018-03-01

    This work was motivated by previous findings that the measured laser-driven heat front propagation velocity in under-dense TiO2/SiO2 foams is slower than the simulated one [Pérez et al., Phys. Plasmas 21, 023102 (2014)]. In attempting to test the hypothesis that these differences result from effects of the foam microstructure, we designed and conducted an experiment on the GEKKO laser using an x-ray streak camera to compare the heat front propagation velocity in "equivalent" gas and foam targets, that is, targets that have the same initial density, atomic weight, and average ionization state. We first discuss the design and the results of this comparison experiment. To supplement the x-ray streak camera data, we designed and conducted an experiment on the Trident laser using a new high-resolution, time-integrated, spatially resolved crystal spectrometer to image the Ti K-shell spectrum along the laser-propagation axis in an under-dense TiO2/SiO2 foam cylinder. We discuss the details of the design of this experiment, and present the measured Ti K-shell spectra compared to the spectra simulated with a detailed superconfiguration non-LTE atomic model for Ti incorporated into a 2D radiation hydrodynamic code. We show that there is indeed a microstructure effect on heat front propagation in under-dense foams, and that the measured heat front velocities in the TiO2/SiO2 foams are consistent with the analytical model of Gus'kov et al. [Phys. Plasmas 18, 103114 (2011)].

  19. Source Camera Identification and Blind Tamper Detections for Images

    DTIC Science & Technology

    2007-04-24

    measures and image quality measures in the camera identification problem were studied in conjunction with a KNN classifier to identify the feature sets...shots varying from nature scenes to close-ups of people. We experimented with the KNN classifier (K=5) as well as the SVM algorithm of...on Acoustics, Speech and Signal Processing (ICASSP), France, May 2006, vol. 5, pp. 401-404. [9] H. Farid and S. Lyu, "Higher-order wavelet statistics

  20. 76 FR 63252 - Hazardous and Solid Waste Management System: Identification and Listing of Special Wastes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-12

    ...-2011-0392; FRL-9476-6] RIN 2050-AE81 Hazardous and Solid Waste Management System: Identification and... Protection Agency (Agency or EPA) in conjunction with the proposed rule: Hazardous and Solid Waste Management...-0392. (4) Mail: Send two copies of your comments to Hazardous and Solid Waste Management System...

  1. Opto-mechanical design and development status of an all spherical five lenses focal reducer for the 2.3 m Thai National Telescope

    NASA Astrophysics Data System (ADS)

    Buisset, Christophe; Prasit, Apirat; Lépine, Thierry; Poshyachinda, Saran; Soonthornthum, Boonrucksar; Deboos, Alexis

    2016-07-01

    The National Astronomical Research Institute of Thailand (NARIT) is currently developing an all-spherical five-lens focal reducer to image a circular FOV of diameter Δθ = 14.6' on the 4K camera with a pixel scale of 0.42''/pixel. The spatial resolution will be better than 1.2'' over the full visible spectral domain [400 nm, 800 nm]. The relative irradiance between the ghost and science images will be lower than 10⁻⁴, the maximum distortion will be lower than 1%, and the maximum angle of incidence on the filters will be equal to 8°. The focal reducer comprises one doublet (L1) located at the fork entrance and one triplet (L2) located in front of the camera. The doublet L1 will be mounted on a tip-tilt mount placed on a robotic sliding rail, so that it can be placed in the optical path during observations with the 4K camera and removed during observations with the other instruments. The triplet L2 will be installed on the instrument cube in front of the camera equipped with the filter wheel. The glass will be manufactured by a specialized company, the mechanical parts will be manufactured on the NARIT Computer Numerical Control machine, and the lenses will be integrated at NARIT. In this paper, we describe the optical and mechanical designs and present the geometrical performance, the transmission budget, and the results of the stray-light analyses.

  2. Phase and amplitude wave front sensing and reconstruction with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Nelson, William; Davis, Christopher C.

    2014-10-01

    A plenoptic camera is a camera that can retrieve the direction and intensity distribution of the light rays it collects, enabling multiple reconstruction functions such as refocusing at a different depth and 3D microscopy. Its principle is to add a micro-lens array to a traditional high-resolution camera to form a semi-camera array that preserves redundant intensity distributions of the light field and facilitates back-tracing of rays through geometric knowledge of its optical components. Though it was designed to process incoherent images, we found that the plenoptic camera shows high potential in coherent-illumination cases, such as sensing both the amplitude and phase information of a distorted laser beam. Based on our earlier introduction of a prototype modified plenoptic camera, we have developed the complete algorithm to reconstruct the wavefront of the incident light field. In this paper the algorithm and experimental results are demonstrated, and an improved version of this modified plenoptic camera is discussed. As a result, our modified plenoptic camera can serve as an advanced wavefront sensor compared with traditional Shack-Hartmann sensors in handling complicated cases, such as coherent illumination in strong turbulence where interference and discontinuity of wavefronts are common. Especially for wave propagation through atmospheric turbulence, this camera should provide a much more precise description of the light field, which would guide adaptive optics systems in making intelligent analysis and corrections.
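
    The full reconstruction algorithm is not reproduced here, but the basic quantity both this camera and a Shack-Hartmann sensor recover, the local wavefront slope from the displacement of a sub-aperture spot, can be sketched as below. The lenslet focal length, pixel pitch, and function name are assumptions for illustration.

      import numpy as np

      def subaperture_slopes(subimage, ref_centroid, lenslet_focal_m, pixel_pitch_m):
          """Estimate local wavefront slopes (radians) from the displacement of
          a sub-aperture spot centroid relative to its reference position."""
          total = subimage.sum()
          ys, xs = np.indices(subimage.shape)
          cy = (ys * subimage).sum() / total
          cx = (xs * subimage).sum() / total
          dy = (cy - ref_centroid[0]) * pixel_pitch_m
          dx = (cx - ref_centroid[1]) * pixel_pitch_m
          return dx / lenslet_focal_m, dy / lenslet_focal_m   # (slope_x, slope_y)

      # A spot displaced by one pixel on a 10 um pitch behind a 5 mm lenslet.
      spot = np.zeros((9, 9)); spot[4, 5] = 1.0
      print(subaperture_slopes(spot, ref_centroid=(4.0, 4.0),
                               lenslet_focal_m=5e-3, pixel_pitch_m=10e-6))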

  3. Infrared Camera System for Visualization of IR-Absorbing Gas Leaks

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert; Immer, Christopher; Cox, Robert

    2010-01-01

    Leak detection and location remain a common problem in NASA and industry, where gas leaks can create hazardous conditions if not quickly detected and corrected. In order to help rectify this problem, this design equips an infrared (IR) camera with the means to make gas leaks of IR-absorbing gases more visible for leak detection and location.

    By comparing the output of two IR cameras (or two pictures from the same camera under essentially identical conditions and very closely spaced in time) on a pixel-by-pixel basis, one can cancel out all but the desired variations that correspond to the IR absorption of the gas of interest. This can be simply done by absorbing the IR lines that correspond to the gas of interest from the radiation received by one of the cameras by the intervention of a filter that removes the particular wavelength of interest from the "reference" picture. This can be done most sensitively with a gas filter (filled with the gas of interest) placed in front of the IR detector array, or (less sensitively) by use of a suitable line filter in the same location. This arrangement would then be balanced against the unfiltered "measurement" picture, which will have variations from IR absorption from the gas of interest. By suitable processing of the signals from each pixel in the two IR pictures, the user can display only the differences in the signals. Either a difference or a ratio output of the two signals is feasible.

    From a gas concentration viewpoint, the ratio could be processed to show the column depth of the gas leak. If a variation in the background IR light intensity is present in the field of view, then large changes in the difference signal will occur for the same gas column concentration between the background and the camera. By ratioing the outputs, the same signal ratio is obtained for both high- and low-background signals, even though the low-signal areas may have greater noise content due to their smaller signal strength. Thus, one embodiment would use a ratioed output signal to better represent the gas column concentration. An alternative approach uses a simpler multiplication of the filtered signal to make the filtered signal equal to the unfiltered signal at most locations, followed by a subtraction to remove all but the wavelength-specific absorption in the unfiltered sample. This signal processing can also reveal the net difference signal representing the leaking gas absorption, and allow rapid leak location, but signal intensity would not relate solely to gas absorption, as raw signal intensity would also affect the displayed signal.

    A second design choice is whether to use one camera with two images closely spaced in time, or two cameras with essentially the same view and time. The figure shows the two-camera version. This choice involves many tradeoffs that are not apparent until some detailed testing is done. In short, the tradeoffs involve the temporal changes in the field picture versus the pixel sensitivity curves and frame alignment differences with two cameras, and which system would lead to the smaller variations from the uncontrolled variables.
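
    A minimal pixel-wise version of the ratio processing described above might look like the following; the gain equalization, masking, and threshold value are illustrative assumptions rather than the flight design.

      import numpy as np

      def gas_absorption_map(unfiltered, gas_filtered, min_signal=50.0):
          """Ratio the unfiltered 'measurement' frame against the gas-filtered
          'reference' frame. After a global gain equalizes the two frames over
          the scene, pixel ratios noticeably below 1 flag likely gas absorption;
          weak-signal pixels are masked out."""
          unfiltered = unfiltered.astype(float)
          gas_filtered = gas_filtered.astype(float)
          ok = gas_filtered > min_signal
          gain = np.median(unfiltered[ok] / gas_filtered[ok])
          ratio = np.full(unfiltered.shape, np.nan)
          ratio[ok] = unfiltered[ok] / (gain * gas_filtered[ok])
          return ratio        # ~1 where no gas, < 1 along the leak plume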

  4. Innovative Training for Occupational Health and Infection Control Workplace Assessment in Health Care

    ERIC Educational Resources Information Center

    O'Hara, Lyndsay; Bryce, Elizabeth Ann; Scharf, Sydney; Yassi, Annalee

    2012-01-01

    A user-friendly, high quality workplace assessment field guide and an accompanying worksheet are invaluable tools for recognizing hazards in the hospital environment. These tools ensure that both front line workers as well as health and safety and infection control professionals can systematically evaluate hazards and formulate recommendations.…

  5. Adapting the eButton to the abilities of children for diet assessment

    USDA-ARS?s Scientific Manuscript database

    Dietary assessment is fraught with error among adults and especially among children. Innovative technology may provide more accurate assessments of dietary intake. One recently available innovative method is a camera worn on the chest (called an eButton) that takes images of whatever is in front of ...

  6. Hand-Held Self-Maneuvering Unit to be used during EVA on Gemini 4

    NASA Image and Video Library

    1965-06-02

    Hand-Held Self-Maneuvering Unit to be used during extravehicular activity (EVA) on Gemini 4 flight. It is an integral unit that contains its own high pressure metering valves and nozzles required to produce controlled thrust. A camera is mounted on the front of the unit.

  7. ARC-2007-ACD07-0145-023

    NASA Image and Video Library

    2007-08-01

    NASA Officials gather at Ames Research Center to discuss Spaceship development progress. Constellation is developing the Orion spacecraft and Ares rockets to support an American return to the moon by 2020. (with front right, Eric James, NASA-EX on camera, Ed Schilling, NASA video producer in distance with Astrid Olson, NASA Ames PAO)

  8. Onufrienko with fresh fruit in the Zvezda SM, Expedition Four

    NASA Image and Video Library

    2002-01-16

    ISS004-E-6334 (January 2002) --- Cosmonaut Yury I. Onufrienko, Expedition Four mission commander representing Rosaviakosmos, is photographed in the Zvezda Service Module on the International Space Station (ISS). Apples and oranges are visible floating freely in front of Onufrienko. The image was taken with a digital still camera.

  9. Connecting Kids and Computers

    ERIC Educational Resources Information Center

    Giles, Rebecca McMahon

    2006-01-01

    Exposure to cell phones, DVD players, video games, computers, digital cameras, and iPods has made today's young people more technologically advanced than those of any previous generation. As a result, parents are now concerned that their children are spending too much time in front of the computer. In this article, the author focuses her…

  10. Commander Truly on aft flight deck holding communication kit assembly (ASSY)

    NASA Image and Video Library

    1983-09-05

    STS008-04-106 (30 Aug-5 Sept 1983) --- On aft flight deck, Richard M. Truly, STS-8 commander, holds communication kit assembly (ASSY) headset (HDST) interface unit (HIU) and mini-HDST in front of the on orbit station. Hasselblad camera is positioned on overhead window W8.

  11. DEVELOPMENT OF AN IDENTIFICATION KIT FOR SPILLED HAZARDOUS MATERIALS

    EPA Science Inventory

    The Chemical Systems Laboratory (CSL) has developed a field kit to identify spilled hazardous materials in inland waters and on the ground. The Hazardous Materials Spills Identification Kit is a two-component kit consisting of an inverter/shortwave UV lamp unit for photochemical ...

  12. Re-identification of persons in multi-camera surveillance under varying viewpoints and illumination

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Borsboom, Sander; den Hollander, Richard J. M.; Landsmeer, Sander H.; Worring, Marcel

    2012-06-01

    The capability to track individuals in CCTV cameras is important for surveillance and forensics alike. However, it is laborious to do over multiple cameras. Therefore, an automated system is desirable. In the literature, several methods have been proposed, but their robustness against varying viewpoints and illumination is limited. Hence, performance in realistic settings is also limited. In this paper, we present a novel method for the automatic re-identification of persons in video from surveillance cameras in a realistic setting. The method is computationally efficient, robust to a wide variety of viewpoints and illumination, simple to implement, and requires no training. We compare the performance of our method to several state-of-the-art methods on a publicly available dataset that contains the variety of viewpoints and illumination needed to allow benchmarking. The results indicate that our method shows good performance and enables a human operator to track persons five times faster.
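
    The paper's actual descriptor is not reproduced here; as a stand-in, the sketch below builds a simple normalized RGB histogram per detection and compares two detections with histogram intersection, which conveys the general signature-and-match structure such systems use.

      import numpy as np

      def color_signature(pixels_rgb, bins=8):
          """Normalized joint RGB histogram of a person's pixels; a simple
          stand-in for the appearance descriptor, not the paper's feature."""
          hist, _ = np.histogramdd(pixels_rgb.reshape(-1, 3),
                                   bins=(bins, bins, bins),
                                   range=[(0, 256)] * 3)
          return hist.ravel() / max(hist.sum(), 1.0)

      def match_score(sig_a, sig_b):
          """Histogram intersection: 1.0 for identical signatures, 0 for disjoint."""
          return float(np.minimum(sig_a, sig_b).sum())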

  13. Aquatic Toxicity Screening of an ACWA Secondary Waste, GB-Hydrolysate

    DTIC Science & Technology

    2009-01-01

    Toxicity Comparison for GB-hydrolysates, Acetone, and Malathion Using O’Bryan and Ross Chemical Scoring System for Hazard and Exposure Identification ...hydrolysates, Acetone, and Malathion Using O’Bryan and Ross Chemical Scoring System for Hazard and Exposure Identification (5) and the U.S. Fish and...WWTF) or a TSDF. The toxicity results were ranked using the Chemical Scoring System for Hazard and Exposure Identification (5). This system is

  14. People counting and re-identification using fusion of video camera and laser scanner

    NASA Astrophysics Data System (ADS)

    Ling, Bo; Olivera, Santiago; Wagley, Raj

    2016-05-01

    We present a system for people counting and re-identification. It can be used by transit and homeland security agencies. Under the FTA SBIR program, we have developed a preliminary system for transit passenger counting and re-identification using a laser scanner and video camera. The laser scanner is used to identify the locations of a passenger's head and shoulders in an image, a challenging task in crowded environments. It can also estimate passenger height without prior calibration. Various color models have been applied to form color signatures. Finally, using a statistical fusion and classification scheme, passengers are counted and re-identified.

  15. Reflective correctors for the Hubble Space Telescope axial instruments

    NASA Technical Reports Server (NTRS)

    Bottema, Murk

    1993-01-01

    Reflective correctors to compensate the spherical aberration in the Hubble Space Telescope are placed in front of three of the axial scientific instruments (a camera and two spectrographs) during the first scheduled refurbishment mission. The five correctors required are deployed from a new module that replaces the fourth axial instrument. Each corrector consists of a field mirror and an aspherical, aberration-correcting reimaging mirror. In the camera the angular resolution capability is restored, be it in reduced fields, and in the spectrographs the potential for observations in crowded areas is regained along with effective light collection at the slits.

  16. SFR test fixture for hemispherical and hyperhemispherical camera systems

    NASA Astrophysics Data System (ADS)

    Tamkin, John M.

    2017-08-01

    Optical testing of camera systems in volume production environments can often require expensive tooling and test fixturing. Wide field (fish-eye, hemispheric and hyperhemispheric) optical systems create unique challenges because of the inherent distortion, and difficulty in controlling reflections from front-lit high resolution test targets over the hemisphere. We present a unique design for a test fixture that uses low-cost manufacturing methods and equipment such as 3D printing and an Arduino processor to control back-lit multi-color (VIS/NIR) targets and sources. Special care with LED drive electronics is required to accommodate both global and rolling shutter sensors.

  17. The application of holography as a real-time three-dimensional motion picture camera

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L.

    1973-01-01

    A historical introduction to holography is presented, as well as a basic description of sideband holography for stationary objects. A brief theoretical development of both time-dependent and time-independent holography is also provided, along with an analytical and intuitive discussion of a unique holographic arrangement which allows the resolution of front surface detail from an object moving at high speeds. As an application of such a system, a real-time three-dimensional motion picture camera system is discussed and the results of a recent demonstration of the world's first true three-dimensional motion picture are given.

  18. Electronic recording of holograms with applications to holographic displays

    NASA Technical Reports Server (NTRS)

    Claspy, P. C.; Merat, F. L.

    1979-01-01

    The paper describes an electronic heterodyne recording which uses electrooptic modulation to introduce a sinusoidal phase shift between the object and reference wave. The resulting temporally modulated holographic interference pattern is scanned by a commercial image dissector camera, and the rejection of the self-interference terms is accomplished by heterodyne detection at the camera output. The electrical signal representing this processed hologram can then be used to modify the properties of a liquid crystal light valve or a similar device. Such display devices transform the displayed interference pattern into a phase modulated wave front rendering a three-dimensional image.

  19. 40 CFR 263.11 - EPA identification number.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 27 2013-07-01 2013-07-01 false EPA identification number. 263.11... (CONTINUED) STANDARDS APPLICABLE TO TRANSPORTERS OF HAZARDOUS WASTE General § 263.11 EPA identification number. (a) A transporter must not transport hazardous wastes without having received an EPA...

  20. 40 CFR 263.11 - EPA identification number.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 27 2012-07-01 2012-07-01 false EPA identification number. 263.11... (CONTINUED) STANDARDS APPLICABLE TO TRANSPORTERS OF HAZARDOUS WASTE General § 263.11 EPA identification number. (a) A transporter must not transport hazardous wastes without having received an EPA...

  1. 40 CFR 263.11 - EPA identification number.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 25 2010-07-01 2010-07-01 false EPA identification number. 263.11... (CONTINUED) STANDARDS APPLICABLE TO TRANSPORTERS OF HAZARDOUS WASTE General § 263.11 EPA identification number. (a) A transporter must not transport hazardous wastes without having received an EPA...

  2. 40 CFR 263.11 - EPA identification number.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 26 2014-07-01 2014-07-01 false EPA identification number. 263.11... (CONTINUED) STANDARDS APPLICABLE TO TRANSPORTERS OF HAZARDOUS WASTE General § 263.11 EPA identification number. (a) A transporter must not transport hazardous wastes without having received an EPA...

  3. 40 CFR 263.11 - EPA identification number.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 26 2011-07-01 2011-07-01 false EPA identification number. 263.11... (CONTINUED) STANDARDS APPLICABLE TO TRANSPORTERS OF HAZARDOUS WASTE General § 263.11 EPA identification number. (a) A transporter must not transport hazardous wastes without having received an EPA...

  4. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    PubMed

    Qian, Shuo; Sheng, Yang

    2011-11-01

    Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study presents a novel photogrammetry system that can realize simultaneous acquisition of multi-angle head images from a single camera position. By aligning two planar mirrors at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. The elapsed time of the whole localization procedure is about 3 min, and the camera calibration computation takes about 1 min after measurement of the calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.

  5. [Evaluation of the efficacy of sentinel node detection in breast cancer: chronological course and influence of the incorporation of an intra-operative portable gamma camera].

    PubMed

    Goñi Gironés, E; Vicente García, F; Serra Arbeloa, P; Estébanez Estébanez, C; Calvo Benito, A; Rodrigo Rincón, I; Camarero Salazar, A; Martínez Lozano, M E

    2013-01-01

    The aim was to define the sentinel node (SN) identification rate in breast cancer, the chronological evolution of this parameter, and the influence of the introduction of a portable gamma camera. A retrospective study was conducted using a prospective database of 754 patients who had undergone a sentinel lymph node biopsy between January 2003 and December 2011. The technique was mixed in the starting period and subsequently was performed with radiotracer administered intra-peritumorally the day before surgery. Until October 2009, excision of the sentinel node was guided by a probe; after that date, a portable gamma camera was introduced for intrasurgical detection. The SN was biopsied in 725 of the 754 patients studied, giving a global effectiveness of 96.2%. By year of surgical intervention, the identification percentage was 93.5% in 2003, 88.7% in 2004, 94.3% in 2005, 95.7% in 2006, 93.3% in 2007, 98.8% in 2008, 97.1% in 2009, and 99.1% in 2010 and 2011. There was a significant difference of 4.6% in the identification proportion before and after the incorporation of the portable gamma camera (95% CI of the difference 2-7.2%, P = 0.0037). The global identification percentage exceeds the level recommended by current guidelines, and improvement of this parameter over the study period was observed. These data suggest that the incorporation of a portable gamma camera played an important role. Copyright © 2013 Elsevier España, S.L. and SEMNIM. All rights reserved.

  6. Wind Tunnel Tests of the Space Shuttle Foam Insulation with Simulated Debonded Regions

    DTIC Science & Technology

    1981-04-01

    set identification number; gage sensitivity; calculated gage sensitivity S2 = S1 * f(TGE); material specimen identification designation; free-stream...Color motion pictures (2 cameras) and pre- and posttest color stills recorded any changes in the samples. The movie cameras were operated at...The oblique shock wave generated by the wedge reduces the free-stream Mach number to the desired local Mach number. Since the free-stream

  7. A view of the ET camera on STS-112

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. - A view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  8. A view of the ET camera on STS-112

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. - A closeup view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  9. Remotely sensed geology from lander-based to orbital perspectives: Results of FIDO rover May 2000 field tests

    USGS Publications Warehouse

    Jolliff, B.; Knoll, A.; Morris, R.V.; Moersch, J.; McSween, H.; Gilmore, M.; Arvidson, R.; Greeley, R.; Herkenhoff, K.; Squyres, S.

    2002-01-01

    Blind field tests of the Field Integration Design and Operations (FIDO) prototype Mars rover were carried out 7-16 May 2000. A Core Operations Team (COT), sequestered at the Jet Propulsion Laboratory without knowledge of test site location, prepared command sequences and interpreted data acquired by the rover. Instrument sensors included a stereo panoramic camera, navigational and hazard-avoidance cameras, a color microscopic imager, an infrared point spectrometer, and a rock coring drill. The COT designed command sequences, which were relayed by satellite uplink to the rover, and evaluated instrument data. Using aerial photos and Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) data, and information from the rover sensors, the COT inferred the geology of the landing site during the 18 sol mission, including lithologic diversity, stratigraphic relationships, environments of deposition, and weathering characteristics. Prominent lithologic units were interpreted to be dolomite-bearing rocks, kaolinite-bearing altered felsic volcanic materials, and basalt. The color panoramic camera revealed sedimentary layering and rock textures, and geologic relationships seen in rock exposures. The infrared point spectrometer permitted identification of prominent carbonate and kaolinite spectral features and permitted correlations to outcrops that could not be reached by the rover. The color microscopic imager revealed fine-scale rock textures, soil components, and results of coring experiments. Test results show that close-up interrogation of rocks is essential to investigations of geologic environments and that observations must include scales ranging from individual boulders and outcrops (microscopic, macroscopic) to orbital remote sensing, with sufficient intermediate steps (descent images) to connect in situ and remote observations.

  10. Vehicle Re-Identification by Deep Hidden Multi-View Inference.

    PubMed

    Zhou, Yi; Liu, Li; Shao, Ling

    2018-07-01

    Vehicle re-identification (re-ID) is an area that has received far less attention in the computer vision community than the prevalent person re-ID. Possible reasons for this slow progress are the lack of appropriate research data and the special 3D structure of a vehicle. Previous works have generally focused on some specific views (e.g., front), but these methods are less effective in realistic scenarios, where vehicles usually appear in arbitrary views to cameras. In this paper, we focus on the uncertainty of vehicle viewpoint in re-ID, proposing two end-to-end deep architectures: the Spatially Concatenated ConvNet and the convolutional neural network (CNN)-LSTM bi-directional loop. Our models exploit the great advantages of the CNN and long short-term memory (LSTM) to learn transformations across different viewpoints of vehicles. Thus, a multi-view vehicle representation containing all viewpoints' information can be inferred from only one input view, and then used for learning to measure distance. To verify our models, we also introduce a Toy Car RE-ID data set with images from multiple viewpoints of 200 vehicles. We evaluate our proposed methods on the Toy Car RE-ID data set and the public Multi-View Car, VehicleID, and VeRi data sets. Experimental results illustrate that our models achieve consistent improvements over the state-of-the-art vehicle re-ID approaches.
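
    A much-reduced sketch of the CNN-plus-bidirectional-LSTM idea is given below: a small convolutional encoder embeds each view and a bidirectional LSTM aggregates the view sequence into one vehicle embedding. Layer sizes, pooling, and the averaging step are assumptions for illustration and do not reproduce the authors' architecture.

      import torch
      import torch.nn as nn

      class MultiViewVehicleEmbedder(nn.Module):
          """Toy multi-view embedder: per-view CNN features fed to a
          bidirectional LSTM over the viewpoint sequence."""
          def __init__(self, feat_dim=256, hidden=128):
              super().__init__()
              self.cnn = nn.Sequential(
                  nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                  nn.Linear(64, feat_dim),
              )
              self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                                  bidirectional=True)

          def forward(self, views):                 # views: (B, V, 3, H, W)
              b, v = views.shape[:2]
              feats = self.cnn(views.flatten(0, 1)).view(b, v, -1)
              out, _ = self.lstm(feats)             # (B, V, 2 * hidden)
              return out.mean(dim=1)                # one embedding per vehicle

      model = MultiViewVehicleEmbedder()
      x = torch.randn(2, 4, 3, 64, 64)              # 2 vehicles, 4 views each
      print(model(x).shape)                         # torch.Size([2, 256])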

  11. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  12. Flash Flood Type Identification within Catchments in Beijing Mountainous Area

    NASA Astrophysics Data System (ADS)

    Nan, W.

    2017-12-01

    Flash floods are a common type of disaster in mountainous areas. With their large flow rates, strong flushing force, and destructive power, flash floods have periodically caused loss of life and destruction of infrastructure in mountainous areas. As Beijing is China's political, economic, and cultural center, disaster prevention and control work in the Beijing mountainous area has always received wide attention. Identifying the flash flood type within each catchment, according to the transport mechanism, sediment concentration, and density, can provide a basis for hazard prevention and mitigation policy. Taking Beijing as the study area, this paper extracted parameters related to catchment morphological and topographic features. Using Bayes discriminant analysis, logistic regression, and random forest, the catchments in the Beijing mountainous area were divided into water flood processes, fluvial sediment transport processes, and debris flow processes. Logistic regression showed the highest accuracy, with an overall accuracy of 88.2%, while Bayes discriminant analysis and random forest had poorer predictive performance. This study confirmed the ability of morphological and topographic features to identify the flash flood process. The circularity ratio, elongation ratio, and roughness index can be used to explain the flash flood types effectively, and the Melton ratio and elevation relief ratio also performed well in the identification, whereas drainage density seemed not to be informative at this level of detail. Analysis of the spatial patterns of flash flood types showed that fluvial sediment transport and debris flow processes were the dominant hazards, while pure water flood processes were much less common. Catchments dominated by fluvial sediment transport were mainly distributed in the Yan Mountain region, where the fault belts are relatively dense. Debris flow processes are prone to occur in the Taihang Mountain region owing to the abundant coal gangue. Catchments with pure water flood processes were mainly distributed in the transitional mountain-front zone.
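
    As a schematic of the classification step, the sketch below fits a multinomial logistic regression to catchment morphometric features and reports cross-validated accuracy; the random stand-in data and feature ordering are placeholders for the real Beijing catchment parameters.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      # Random stand-in for the real morphometric table (circularity ratio,
      # elongation ratio, roughness index, Melton ratio, relief ratio, drainage density).
      rng = np.random.default_rng(1)
      X = rng.random((120, 6))
      y = rng.integers(0, 3, size=120)   # 0 water flood, 1 sediment transport, 2 debris flow

      clf = LogisticRegression(max_iter=1000)
      print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())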

  13. Extratropical Cyclone in the Southern Ocean

    NASA Technical Reports Server (NTRS)

    2001-01-01

    These images from the Multi-angle Imaging SpectroRadiometer portray an occluded extratropical cyclone situated in the Southern Ocean, about 650 kilometers south of the Eyre Peninsula, South Australia.

    Parts of the Yorke Peninsula and a portion of the Murray-Darling River basin are visible between the clouds near the top of the left-hand image, a true-color view from MISR's nadir (vertical-viewing) camera. Retrieved cloud-tracked wind velocities are indicated by the superimposed arrows. The image on the right displays cloud-top heights. Areas where cloud heights could not be retrieved are shown in black. Both the wind vectors and the cloud heights were derived using data from multiple MISR cameras within automated computer processing algorithms. The stereoscopic algorithms used to generate these results are still being refined, and future versions of these products may show modest changes.

    Extratropical cyclones are the dominant weather system at midlatitudes, and the term is used generically for regional low-pressure systems in the mid- to high-latitudes. In the southern hemisphere, cyclonic rotation is clockwise. These storms obtain their energy from temperature differences between air masses on either side of warm and cold fronts, and their characteristic pattern is of warm and cold fronts radiating out from a migrating low pressure center which forms, deepens, and dissipates as the fronts fold and collapse on each other. The center of this cyclone has started to decay, with the band of cloud to the south most likely representing the main front that was originally connected with the cyclonic circulation.

    These views were acquired on October 11, 2001 during Terra orbit 9650, and represent an area of about 380 kilometers x 1900 kilometers.

  14. Extratropical Cyclone in the Southern Ocean

    NASA Technical Reports Server (NTRS)

    2002-01-01

    These images from the Multi-angle Imaging SpectroRadiometer (MISR) portray an occluded extratropical cyclone situated in the Southern Ocean, about 650 kilometers south of the Eyre Peninsula, South Australia. The left-hand image, a true-color view from MISR's nadir (vertical-viewing) camera, shows clouds just south of the Yorke Peninsula and the Murray-Darling river basin in Australia. Retrieved cloud-tracked wind velocities are indicated by the superimposed arrows. The image on the right displays cloud-top heights. Areas where cloud heights could not be retrieved are shown in black. Both the wind vectors and the cloud heights were derived using data from multiple MISR cameras within automated computer processing algorithms. The stereoscopic algorithms used to generate these results are still being refined, and future versions of these products may show modest changes. Extratropical cyclones are the dominant weather system at midlatitudes, and the term is used generically for regional low-pressure systems in the mid- to high-latitudes. In the southern hemisphere, cyclonic rotation is clockwise. These storms obtain their energy from temperature differences between air masses on either side of warm and cold fronts, and their characteristic pattern is of warm and cold fronts radiating out from a migrating low pressure center which forms, deepens, and dissipates as the fronts fold and collapse on each other. The center of this cyclone has started to decay, with the band of cloud to the south most likely representing the main front that was originally connected with the cyclonic circulation. These views were acquired on October 11, 2001, and the large view represents an area of about 380 kilometers x 1900 kilometers. Image courtesy NASA/GSFC/LaRC/JPL, MISR Team.

  15. A TV Camera System Which Extracts Feature Points For Non-Contact Eye Movement Detection

    NASA Astrophysics Data System (ADS)

    Tomono, Akira; Iida, Muneo; Kobayashi, Yukio

    1990-04-01

    This paper proposes a highly efficient camera system which extracts, irrespective of background, feature points such as the pupil, the corneal reflection image and dot-marks pasted on a human face in order to detect human eye movement by image processing. Two eye movement detection methods are suggested: one utilizing face orientation as well as pupil position, the other utilizing the pupil and corneal reflection images. A method of extracting these feature points using LEDs as illumination devices and a new TV camera system designed to record eye movement are proposed. Two kinds of infra-red LEDs are used. These LEDs are set up a short distance apart and emit polarized light of different wavelengths. One light source beams from near the optical axis of the lens and the other is some distance from the optical axis. The LEDs are operated in synchronization with the camera. The camera includes 3 CCD image pick-up sensors and a prism system with 2 boundary layers. Incident rays are separated into 2 wavelengths by the first boundary layer of the prism. One set of rays forms an image on CCD-3. The other set is split by the half-mirror layer of the prism and forms one image that includes the regularly reflected component (a polarizing filter is placed in front of CCD-1) and another image that excludes this component (no polarizing filter in front of CCD-2). Thus, three images with different reflection characteristics are obtained by the three CCDs. Experiments show that two kinds of subtraction operations between the three images output from the CCDs accentuate three kinds of feature points: the pupil, the corneal reflection images and the dot-marks. Since the S/N ratio of the subtracted image is extremely high, the thresholding process is simple and allows reducing the intensity of the infra-red illumination. A high-speed image processing apparatus using this camera system is described. Real-time processing of the subtraction, thresholding and gravity-position calculation of the feature points is possible.
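
    A minimal sketch of the subtraction-and-threshold idea, assuming two co-registered frames (co-axial and off-axis illumination) as NumPy arrays; the threshold and array names are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def pupil_centroid(img_coaxial, img_offaxis, threshold=40):
    """Subtract the off-axis (dark-pupil) image from the co-axial (bright-pupil)
    image, threshold the difference and return the centroid (gravity position)
    of the largest connected blob -- a rough stand-in for the described pipeline."""
    diff = img_coaxial.astype(np.int32) - img_offaxis.astype(np.int32)
    mask = diff > threshold                      # high S/N makes a fixed threshold workable
    labels, nlab = ndimage.label(mask)
    if nlab == 0:
        return None
    sizes = ndimage.sum(mask, labels, range(1, nlab + 1))
    largest = int(np.argmax(sizes)) + 1
    return ndimage.center_of_mass(mask, labels, largest)   # (row, col) of the pupil
```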

  16. SLR digital camera for forensic photography

    NASA Astrophysics Data System (ADS)

    Har, Donghwan; Son, Youngho; Lee, Sungwon

    2004-06-01

    Forensic photography, systematically established in the late 19th century by Alphonse Bertillon of France, has advanced considerably over roughly 100 years, and its development will accelerate further with high technologies, in particular digital technology. This paper reviews three studies to answer the question: can the SLR digital camera replace traditional silver halide ultraviolet and infrared photography? 1. Comparison of the relative ultraviolet and infrared sensitivity of the SLR digital camera to silver halide photography. 2. How much ultraviolet or infrared sensitivity is improved when the UV/IR cutoff filter built into the SLR digital camera is removed? 3. Comparison of the relative sensitivity of CCD and CMOS sensors for ultraviolet and infrared. The test results showed that the SLR digital camera has very low sensitivity to ultraviolet and infrared. The cause was found to be the UV/IR cutoff filter mounted in front of the image sensor. Removing this filter significantly improved the sensitivity to ultraviolet and infrared. For infrared in particular, the sensitivity of the SLR digital camera was better than that of silver halide film. This shows the possibility of replacing silver halide ultraviolet and infrared photography with the SLR digital camera, which therefore appears useful for forensic photography, where many ultraviolet and infrared photographs are taken.

  17. Watching the TV Audience.

    ERIC Educational Resources Information Center

    Collett, Peter

    Data were collected for this study of the relationship between television watching and family life via a recording device (C-Box) consisting of a television set and a video camera. Designed for the study, this device was installed in 20 homes for one week to record the viewing area in front of the television set together with information on…

  18. Body Weight Estimation for Dose-Finding and Health Monitoring of Lying, Standing and Walking Patients Based on RGB-D Data

    PubMed Central

    May, Stefan

    2018-01-01

    This paper describes the estimation of the body weight of a person in front of an RGB-D camera. A survey of different methods for body weight estimation based on depth sensors is given. First, an estimation of people standing in front of a camera is presented. Second, an approach based on a stream of depth images is used to obtain the body weight of a person walking towards a sensor. The algorithm first extracts features from a point cloud and forwards them to an artificial neural network (ANN) to obtain an estimation of body weight. Besides the algorithm for the estimation, this paper further presents an open-access dataset based on measurements from a trauma room in a hospital as well as data from visitors of a public event. In total, the dataset contains 439 measurements. The article illustrates the efficiency of the approach with experiments with persons lying down in a hospital, standing persons, and walking persons. Applicable scenarios for the presented algorithm are body weight-related dosing of emergency patients. PMID:29695098
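
    The feature-to-ANN step can be sketched as follows; the point-cloud features, labels and network size here are synthetic stand-ins, not the published model or dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-ins for point-cloud features (e.g. volume proxy, surface area,
# body height, frontal area); real features would come from the segmented cloud.
rng = np.random.default_rng(1)
X = rng.normal(size=(439, 4))                            # 439 measurements, as in the dataset
y = 75 + 10 * X[:, 0] + rng.normal(scale=3, size=439)    # body weight in kg (synthetic)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0))
ann.fit(X, y)
print("predicted weight [kg]:", ann.predict(X[:1]))
```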

  19. Body Weight Estimation for Dose-Finding and Health Monitoring of Lying, Standing and Walking Patients Based on RGB-D Data.

    PubMed

    Pfitzner, Christian; May, Stefan; Nüchter, Andreas

    2018-04-24

    This paper describes the estimation of the body weight of a person in front of an RGB-D camera. A survey of different methods for body weight estimation based on depth sensors is given. First, an estimation of people standing in front of a camera is presented. Second, an approach based on a stream of depth images is used to obtain the body weight of a person walking towards a sensor. The algorithm first extracts features from a point cloud and forwards them to an artificial neural network (ANN) to obtain an estimation of body weight. Besides the algorithm for the estimation, this paper further presents an open-access dataset based on measurements from a trauma room in a hospital as well as data from visitors of a public event. In total, the dataset contains 439 measurements. The article illustrates the efficiency of the approach with experiments with persons lying down in a hospital, standing persons, and walking persons. Applicable scenarios for the presented algorithm are body weight-related dosing of emergency patients.

  20. Why are faces denser in the visual experiences of younger than older infants?

    PubMed Central

    Jayaraman, Swapnaa; Fausey, Caitlin M.; Smith, Linda B.

    2017-01-01

    Recent evidence from studies using head cameras suggests that the frequency of faces directly in front of infants declines over the first year and a half of life, a result that has implications for the development of and evolutionary constraints on face processing. Two experiments tested two opposing hypotheses about this observed age-related decline in the frequency of faces in infant views. By the People-input hypothesis, there are more faces in view for younger infants because people are more often physically in front of younger than older infants. This hypothesis predicts that not just faces but views of other body parts will decline with age. By the Face-input hypothesis, the decline is strictly about faces, not people or other body parts in general. Two experiments, one using a time-sampling method (84 infants 3 to 24 months in age) and the other analyses of head camera images (36 infants 1 to 24 months) provide strong support for the Face-input hypothesis. The results suggest developmental constraints on the environment that ensure faces are prevalent early in development. PMID:28026190

  1. Expedition Three, Expedition Two and STS-105 crews pose in the U.S. Laboratory

    NASA Image and Video Library

    2001-08-17

    ISS003-E-5169 (17 August 2001) --- The Expedition Three (white shirts), STS-105 (striped shirts), and Expedition Two (red shirts) crews assemble for a group photo in the Destiny laboratory on the International Space Station (ISS). The Expedition Three crew members are, from front to back, Frank L. Culbertson, Jr., mission commander; and cosmonauts Vladimir N. Dezhurov and Mikhail Tyurin, flight engineers; STS-105 crew members are, front row, Patrick G. Forrester and Daniel T. Barry, mission specialists, and back row, Scott J. Horowitz, commander, and Frederick W. (Rick) Sturckow, pilot; Expedition Two crew members are, from front to back, cosmonaut Yury V. Usachev, mission commander, James S. Voss and Susan J. Helms, flight engineers. Dezhurov, Tyurin and Usachev represent Rosaviakosmos. This image was taken with a digital still camera.

  2. Expedition Three, Expedition Two and STS-105 crews pose in the U.S. Laboratory

    NASA Image and Video Library

    2001-08-17

    ISS003-E-5168 (17 August 2001) --- The Expedition Three (white shirts), STS-105 (striped shirts), and Expedition Two (red shirts) crews assemble for a group photo in the Destiny laboratory on the International Space Station (ISS). The Expedition Three crew members are, from front to back, Frank L. Culbertson, Jr., mission commander; and cosmonauts Vladimir N. Dezhurov and Mikhail Tyurin, flight engineers; STS-105 crew members are, front row, Patrick G. Forrester and Daniel T. Barry, mission specialists, and back row, Scott J. Horowitz, commander, and Frederick W. (Rick) Sturckow, pilot; Expedition Two crew members are, from front to back, cosmonaut Yury V. Usachev, mission commander, James S. Voss and Susan J. Helms, flight engineers. Dezhurov, Tyurin and Usachev represent Rosaviakosmos. This image was taken with a digital still camera.

  3. Deep neural network features for horses identity recognition using multiview horses' face pattern

    NASA Astrophysics Data System (ADS)

    Jarraya, Islem; Ouarda, Wael; Alimi, Adel M.

    2017-03-01

    To monitor the state of horses in the barn, breeders need a monitoring system with a surveillance camera that can identify and distinguish between horses. We proposed in [5] a method of horse identification at a distance using the frontal facial biometric modality. Because of changes in viewpoint, face recognition becomes more difficult. In this paper, the number of images in our THoDBRL'2015 database (Tunisian Horses DataBase of Regim Lab) is augmented by adding images of other views, so that front, right-profile and left-profile face views are used. Moreover, we suggest an approach for multiview face recognition. First, we propose to use the Gabor filter for face characterization. Next, given the augmented number of images and the large number of Gabor features, we propose to use a Deep Neural Network with an auto-encoder to obtain more pertinent features and to reduce the size of the feature vector. Finally, we evaluate the proposed approach on our THoDBRL'2015 database, using a linear SVM for classification.
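
    A rough sketch of the described pipeline, with PCA standing in for the paper's auto-encoder as the feature-compression stage; the face crops and horse labels are placeholders.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def gabor_features(face, frequencies=(0.1, 0.2, 0.3), thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Mean/variance of Gabor filter responses over a small filter bank."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(face, frequency=f, theta=t)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.var()]
    return np.array(feats)

# faces: (n, H, W) grayscale crops of front/left/right views; ids: horse labels (placeholders)
rng = np.random.default_rng(2)
faces = rng.random((30, 64, 64))
ids = rng.integers(0, 5, size=30)

X = np.stack([gabor_features(f) for f in faces])
clf = make_pipeline(PCA(n_components=10), LinearSVC())   # PCA stands in for the auto-encoder
clf.fit(X, ids)
print("predicted horse id:", clf.predict(X[:1]))
```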

  4. Automatic forest-fire measuring using ground stations and Unmanned Aerial Systems.

    PubMed

    Martínez-de Dios, José Ramiro; Merino, Luis; Caballero, Fernando; Ollero, Anibal

    2011-01-01

    This paper presents a novel system for automatic forest-fire measurement using cameras distributed at ground stations and mounted on Unmanned Aerial Systems (UAS). It can obtain geometrical measurements of forest fires in real-time such as the location and shape of the fire front, flame height and rate of spread, among others. Measurement of forest fires is a challenging problem that is affected by numerous potential sources of error. The proposed system addresses them by exploiting the complementarities between infrared and visual cameras located at different ground locations together with others onboard Unmanned Aerial Systems (UAS). The system applies image processing and geo-location techniques to obtain forest-fire measurements individually from each camera and then integrates the results from all the cameras using statistical data fusion techniques. The proposed system has been extensively tested and validated in close-to-operational conditions in field fire experiments with controlled safety conditions carried out in Portugal and Spain from 2001 to 2006.

  5. Automatic Forest-Fire Measuring Using Ground Stations and Unmanned Aerial Systems

    PubMed Central

    Martínez-de Dios, José Ramiro; Merino, Luis; Caballero, Fernando; Ollero, Anibal

    2011-01-01

    This paper presents a novel system for automatic forest-fire measurement using cameras distributed at ground stations and mounted on Unmanned Aerial Systems (UAS). It can obtain geometrical measurements of forest fires in real-time such as the location and shape of the fire front, flame height and rate of spread, among others. Measurement of forest fires is a challenging problem that is affected by numerous potential sources of error. The proposed system addresses them by exploiting the complementarities between infrared and visual cameras located at different ground locations together with others onboard Unmanned Aerial Systems (UAS). The system applies image processing and geo-location techniques to obtain forest-fire measurements individually from each camera and then integrates the results from all the cameras using statistical data fusion techniques. The proposed system has been extensively tested and validated in close-to-operational conditions in field fire experiments with controlled safety conditions carried out in Portugal and Spain from 2001 to 2006. PMID:22163958
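
    One simple form of the statistical fusion mentioned above is inverse-variance weighting of the per-camera estimates; the positions and variances below are invented for illustration.

```python
import numpy as np

def fuse_estimates(positions, variances):
    """Inverse-variance (minimum-variance) fusion of independent per-camera
    estimates of the same fire-front quantity (e.g. easting of the front)."""
    positions = np.asarray(positions, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * positions) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

# Example: three cameras (two ground IR, one UAS visual) geo-located the front at
# slightly different eastings [m], with different measurement variances [m^2].
print(fuse_estimates([512.0, 508.5, 510.2], [4.0, 9.0, 2.25]))
```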

  6. Optical performance analysis of plenoptic camera systems

    NASA Astrophysics Data System (ADS)

    Langguth, Christin; Oberdörster, Alexander; Brückner, Andreas; Wippermann, Frank; Bräuer, Andreas

    2014-09-01

    Adding an array of microlenses in front of the sensor transforms the capabilities of a conventional camera to capture both spatial and angular information within a single shot. This plenoptic camera is capable of obtaining depth information and providing it for a multitude of applications, e.g. artificial re-focusing of photographs. Without the need of active illumination it represents a compact and fast optical 3D acquisition technique with reduced effort in system alignment. Since the extent of the aperture limits the range of detected angles, the observed parallax is reduced compared to common stereo imaging systems, which results in a decreased depth resolution. Besides, the gain of angular information implies a degraded spatial resolution. This trade-off requires a careful choice of the optical system parameters. We present a comprehensive assessment of possible degrees of freedom in the design of plenoptic systems. Utilizing a custom-built simulation tool, the optical performance is quantified with respect to particular starting conditions. Furthermore, a plenoptic camera prototype is demonstrated in order to verify the predicted optical characteristics.
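
    For a standard (unfocused) plenoptic layout, the spatial/angular trade-off reduces to counting pixels per microlens; the sensor and lenslet numbers below are assumptions, not the prototype's specification.

```python
# Back-of-the-envelope trade-off for a standard (unfocused) plenoptic camera.
# All numbers are illustrative assumptions.
sensor_px = (4000, 3000)        # sensor resolution (H, V) in pixels
pixels_per_lenslet = 10         # pixels covered by each microlens in each direction

spatial_res = (sensor_px[0] // pixels_per_lenslet, sensor_px[1] // pixels_per_lenslet)
angular_samples = pixels_per_lenslet ** 2

print("effective spatial resolution:", spatial_res)     # one sample per microlens
print("angular samples per scene point:", angular_samples)
```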

  7. Improved camera for better X-ray powder photographs

    NASA Technical Reports Server (NTRS)

    Parrish, W.; Vajda, I. E.

    1969-01-01

    Camera obtains powder-type photographs of single crystals or polycrystalline powder specimens. X-ray diffraction photographs of a powder specimen are characterized by improved resolution and greater intensity. A reasonably good powder pattern of small samples can be produced for identification purposes.

  8. Lock-In Imaging System for Detecting Disturbances in Fluid

    NASA Technical Reports Server (NTRS)

    Park, Yeonjoon (Inventor); Choi, Sang Hyouk (Inventor); King, Glen C. (Inventor); Elliott, James R. (Inventor); Dimarcantonio, Albert L. (Inventor)

    2014-01-01

    A lock-in imaging system is configured for detecting a disturbance in air. The system includes an airplane, an interferometer, and a telescopic imaging camera. The airplane includes a fuselage and a pair of wings. The airplane is configured for flight in air. The interferometer is operatively disposed on the airplane and configured for producing an interference pattern by splitting a beam of light into two beams along two paths and recombining the two beams at a junction point in a front flight path of the airplane during flight. The telescopic imaging camera is configured for capturing an image of the beams at the junction point. The telescopic imaging camera is configured for detecting the disturbance in air in an optical path, based on an index of refraction of the image, as detected at the junction point.

  9. Multi-channel automotive night vision system

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

    A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated. The source contains a thermoelectric cooler (TEC), can be synchronized with the camera focusing, and has automatic light intensity adjustment, thereby ensuring image quality. The composition of the system is described in detail; on this basis, beam collimation, LD driving, LD temperature control of the near-infrared laser source, and four-channel image processing and display are discussed. The system can be used for driver assistance, blind spot information (BLIS), parking assistance and alarm systems, day and night.

  10. Astronaut Charles Duke works at front of Lunar Roving Vehicle

    NASA Image and Video Library

    1972-04-23

    AS16-116-18607 (23 April 1972) --- Astronaut Charles M. Duke Jr. works at the front of the Lunar Roving Vehicle (LRV) parked in this rock field at a North Ray Crater geological site during the mission's third extravehicular activity (EVA) on April 23, 1972. Astronaut John W. Young took this picture with a 70mm Hasselblad camera. While astronauts Young, commander; and Duke, lunar module pilot; descended in the Apollo 16 Lunar Module (LM) "Orion" to explore the Descartes highlands landing site on the moon, astronaut Thomas K. Mattingly II, command module pilot, remained with the Command and Service Modules (CSM) "Casper" in lunar orbit.

  11. Disturbing Pop-Tart

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The Sojourner rover's front right camera imaged Pop-tart, a small rock or indurated soil material which was pushed out of the surrounding drift material by Sojourner's front left wheel during a soil mechanics experiment.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  12. Design and Construction of an X-ray Lightning Camera

    NASA Astrophysics Data System (ADS)

    Schaal, M.; Dwyer, J. R.; Rassoul, H. K.; Uman, M. A.; Jordan, D. M.; Hill, J. D.

    2010-12-01

    A pinhole-type camera was designed and built for the purpose of producing high-speed images of the x-ray emissions from rocket-and-wire-triggered lightning. The camera consists of 30 7.62-cm diameter NaI(Tl) scintillation detectors, each sampling at 10 million frames per second. The steel structure of the camera is encased in 1.27-cm thick lead, which blocks x-rays that are less than 400 keV, except through a 7.62-cm diameter “pinhole” aperture located at the front of the camera. The lead and steel structure is covered in 0.16-cm thick aluminum to block RF noise, water and light. All together, the camera weighs about 550-kg and is approximately 1.2-m x 0.6-m x 0.6-m. The image plane, which is adjustable, was placed 32-cm behind the pinhole aperture, giving a field of view of about ±38° in both the vertical and horizontal directions. The elevation of the camera is adjustable between 0 and 50° from horizontal and the camera may be pointed in any azimuthal direction. In its current configuration, the camera’s angular resolution is about 14°. During the summer of 2010, the x-ray camera was located 44-m from the rocket-launch tower at the UF/Florida Tech International Center for Lightning Research and Testing (ICLRT) at Camp Blanding, FL and several rocket-triggered lightning flashes were observed. In this presentation, I will discuss the design, construction and operation of this x-ray camera.
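
    The quoted geometry can be checked with simple trigonometry; the 25 cm half-extent of the detector plane is an assumption consistent with the ±38° figure.

```python
import math

L = 32.0            # pinhole-to-image-plane distance [cm]
d_det = 7.62        # scintillation detector diameter [cm]
half_width = 25.0   # assumed half-extent of the detector plane [cm]

angular_resolution = math.degrees(math.atan(d_det / L))      # ~13-14 degrees
half_fov = math.degrees(math.atan(half_width / L))           # ~38 degrees

print(f"angular resolution ≈ {angular_resolution:.1f}°, field of view ≈ ±{half_fov:.1f}°")
```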

  13. Monitoring the morphological evolution of complex glaciers: the Planpincieux case-study (Mont Blanc - Aosta Valley)

    NASA Astrophysics Data System (ADS)

    Giordan, Daniele; Manconi, Andrea; Allasia, Paolo; Curtaz, Michèle; Vagliasindi, Marco; Bertolo, Davide

    2014-05-01

    The Planpincieux Glacier (PG) is located on the Italian side of the Grandes Jorasses massif, Mont Blanc, Italy. This area is historically known for the occasional occurrence of icefall events from the frontal part of the glacier. The PG is a so-called "polythermal" glacier, meaning that liquid water present at the contact between the ice and the bedrock in the lower part of the glacier plays an important role in the glacier dynamics, and icefalls might occur in a sudden and unpredictable fashion. In this scenario, accurate analysis of the glacier's morphological evolution assumes a crucial role. Starting from 2012, within the framework of the regional plan for glacier risk detection, a research project was set up to study the Planpincieux Glacier and evaluate the potential hazard concerning the possible release of large ice or ice-snow avalanches triggered by icefall events in the area. The dynamics of such avalanches, as well as the potentially endangered areas, have been evaluated in an expert report by the SLF Institute. Therefore, the availability of both qualitative information and quantitative measurements of the glacier movements was a primary goal. After a careful evaluation of several possible technical solutions for displacement monitoring, also based on the results of a preliminary study managed by ETH Zurich (Prof. M. Funk), we installed an experimental monitoring station on the opposite side of the valley, at the top of the Mt. de la Saxe, ca. 3.5 km away from the main target. The monitoring station is composed of two modules: (i) a surveillance module, based on a medium-resolution digital camera, observing a large part of the slope; and (ii) a photogrammetric module, based on a high-resolution digital camera equipped with a 300 mm optical zoom, pointed at the Planpincieux glacier front. At this stage, our analyses focused mainly on the qualitative assessment and recognition of impulsive phenomena affecting the glacier morphology, such as icefalls and changes in water circulation and/or snow precipitation. Moreover, we also applied pixel-offset techniques to measure the surface displacements occurring on the glacier front. Here we present the preliminary results obtained by processing the data acquired by the photogrammetric module since September 2013. The results obtained so far are encouraging.
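
    Pixel-offset tracking of the kind mentioned above can be sketched with a phase-correlation call; the images, window and upsampling factor are placeholders, not the station's actual processing chain.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def front_offset(img_t0, img_t1, window=(slice(800, 1200), slice(600, 1000))):
    """Estimate the (row, col) pixel offset of a patch on the glacier front
    between two co-registered images taken at different epochs."""
    ref = img_t0[window]
    mov = img_t1[window]
    shift, error, _ = phase_cross_correlation(ref, mov, upsample_factor=10)
    return shift  # multiply by the ground sampling distance to obtain metres

# usage (hypothetical arrays loaded from the high-resolution camera):
# shift = front_offset(frame_september, frame_october)
```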

  14. ETR COMPLEX. CAMERA FACING EAST. FROM LEFT TO RIGHT: ETRCRITICAL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    ETR COMPLEX. CAMERA FACING EAST. FROM LEFT TO RIGHT: ETR-CRITICAL FACILITY BUILDING, ETR CONTROL BUILDING (ATTACHED TO HIGH-BAY ETR), ETR, ONE-STORY SECTION OF ETR BUILDING, ELECTRICAL BUILDING, COOLING TOWER PUMP HOUSE, COOLING TOWER. COMPRESSOR AND HEAT EXCHANGER BUILDING ARE PARTLY IN VIEW ABOVE ETR. DARK-COLORED DUCTS PROCEED FROM GROUND CONNECTION TO ETR WASTE GAS STACK. OTHER STACK IS MTR STACK WITH FAN HOUSE IN FRONT OF IT. RECTANGULAR STRUCTURE NEAR TOP OF VIEW IS SETTLING BASIN. INL NEGATIVE NO. 56-4102. Unknown Photographer, ca. 1956 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  15. Spirit Feels Dust Gust

    NASA Technical Reports Server (NTRS)

    2007-01-01

    On sol 1149 (March 28, 2007) of its mission, NASA's Mars Exploration Rover Spirit caught a wind gust with its navigation camera. A series of navigation camera images were strung together to create this movie. The front of the gust is observable because it was strong enough to lift up dust. From assessing the trajectory of this gust, the atmospheric science team concludes that it is possible that it passed over the rover. There was, however, no noticeable increase in power associated with this gust. In the past, dust devils and gusts have wiped the solar panels of dust, making it easier for the solar panels to absorb sunlight.

  16. Geomatic methods applied to the study of the front position changes of Johnsons and Hurd Glaciers, Livingston Island, Antarctica, between 1957 and 2013

    NASA Astrophysics Data System (ADS)

    Rodríguez Cielos, Ricardo; Aguirre de Mata, Julián; Díez Galilea, Andrés; Álvarez Alonso, Marina; Rodríguez Cielos, Pedro; Navarro Valero, Francisco

    2016-08-01

    Various geomatic measurement techniques can be efficiently combined for surveying glacier fronts. Aerial photographs and satellite images can be used to determine the position of the glacier terminus. If the glacier front is easily accessible, classic surveys using a theodolite or total station, GNSS (Global Navigation Satellite System) techniques, laser scanning or close-range photogrammetry are possible. When access to the glacier front is difficult or impossible, close-range photogrammetry proves to be useful, inexpensive and fast. In this paper, a methodology combining photogrammetric methods and other techniques is applied to determine the calving front position of Johnsons Glacier. Images taken in 2013 with an inexpensive nonmetric digital camera are georeferenced to a global coordinate system by measuring, using GNSS techniques, support points in accessible areas close to the glacier front, from which control points at inaccessible locations on the glacier surface near its calving front are determined with a theodolite using the direct intersection method. The front position changes of Johnsons Glacier during the period 1957-2013, as well as those of the land-terminating fronts of the Argentina, Las Palmas and Sally Rocks lobes of Hurd Glacier, are determined from different geomatic techniques such as surface-based GNSS measurements, aerial photogrammetry and satellite optical imagery. This provides a set of frontal positions useful, e.g., for glacier dynamics modeling and mass balance studies. Link to the data repository: https://doi.pangaea.de/10.1594/PANGAEA.845379.
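
    The direct intersection step can be illustrated in plan view: two stations with known coordinates each measure a horizontal azimuth to an inaccessible point, which lies at the intersection of the two rays. Coordinates and azimuths below are invented.

```python
import numpy as np

def direct_intersection(p1, az1_deg, p2, az2_deg):
    """2-D intersection of two rays defined by station coordinates (easting, northing)
    and horizontal azimuths measured clockwise from north."""
    a1, a2 = np.radians(az1_deg), np.radians(az2_deg)
    d1 = np.array([np.sin(a1), np.cos(a1)])     # unit direction of ray 1
    d2 = np.array([np.sin(a2), np.cos(a2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1, t2
    A = np.column_stack((d1, -d2))
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# Hypothetical stations near the glacier front and measured azimuths [deg]:
print(direct_intersection((1000.0, 2000.0), 45.0, (1200.0, 2000.0), 315.0))  # -> [1100. 2100.]
```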

  17. Mars Exploration Rover Athena Panoramic Camera (Pancam) investigation

    USGS Publications Warehouse

    Bell, J.F.; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.N.; Arneson, H.M.; Brown, D.; Collins, S.A.; Dingizian, A.; Elliot, S.T.; Hagerott, E.C.; Hayes, A.G.; Johnson, M.J.; Johnson, J. R.; Joseph, J.; Kinch, K.; Lemmon, M.T.; Morris, R.V.; Scherr, L.; Schwochert, M.; Shepard, M.K.; Smith, G.H.; Sohl-Dickstein, J. N.; Sullivan, R.J.; Sullivan, W.T.; Wadsworth, M.

    2003-01-01

    The Panoramic Camera (Pancam) investigation is part of the Athena science payload launched to Mars in 2003 on NASA's twin Mars Exploration Rover (MER) missions. The scientific goals of the Pancam investigation are to assess the high-resolution morphology, topography, and geologic context of each MER landing site, to obtain color images to constrain the mineralogic, photometric, and physical properties of surface materials, and to determine dust and aerosol opacity and physical properties from direct imaging of the Sun and sky. Pancam also provides mission support measurements for the rovers, including Sun-finding for rover navigation, hazard identification and digital terrain modeling to help guide long-term rover traverse decisions, high-resolution imaging to help guide the selection of in situ sampling targets, and acquisition of education and public outreach products. The Pancam optical, mechanical, and electronics design were optimized to achieve these science and mission support goals. Pancam is a multispectral, stereoscopic, panoramic imaging system consisting of two digital cameras mounted on a mast 1.5 m above the Martian surface. The mast allows Pancam to image the full 360° in azimuth and ±90° in elevation. Each Pancam camera utilizes a 1024 × 1024 active imaging area frame transfer CCD detector array. The Pancam optics have an effective focal length of 43 mm and a focal ratio f/20, yielding an instantaneous field of view of 0.27 mrad/pixel and a field of view of 16° × 16°. Each rover's two Pancam "eyes" are separated by 30 cm and have a 1° toe-in to provide adequate stereo parallax. Each eye also includes a small eight position filter wheel to allow surface mineralogic studies, multispectral sky imaging, and direct Sun imaging in the 400-1100 nm wavelength region. Pancam was designed and calibrated to operate within specifications on Mars at temperatures from -55° to +5°C. An onboard calibration target and fiducial marks provide the capability to validate the radiometric and geometric calibration on Mars. Copyright 2003 by the American Geophysical Union.
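
    The quoted optical parameters are internally consistent and easy to verify; the pixel pitch computed below is derived, not stated in the abstract.

```python
import math

focal_length_mm = 43.0
ifov_mrad = 0.27                   # per pixel
pixels = 1024

pixel_pitch_um = ifov_mrad * 1e-3 * focal_length_mm * 1e3   # ~11.6 micrometres (derived)
fov_deg = math.degrees(pixels * ifov_mrad * 1e-3)           # ~15.8 degrees, i.e. the quoted 16 x 16 deg

print(f"implied pixel pitch ≈ {pixel_pitch_um:.1f} µm, field of view ≈ {fov_deg:.1f}°")
```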

  18. Atmospheric fronts in current and future climates

    NASA Astrophysics Data System (ADS)

    Catto, J. L.; Nicholls, N.; Jakob, C.; Shelton, K. L.

    2014-11-01

    Atmospheric fronts are important for the day-to-day variability of weather in the midlatitudes. It is therefore vital to know how their distribution and frequency will change in a projected warmer climate. Here we apply an objective front identification method, based on a thermal front parameter, to 6-hourly data from models participating in Coupled Model Intercomparison Project phase 5. The historical simulations are evaluated against ERA-Interim and found to produce a similar frequency of fronts and with similar front strength. The models show some biases in the location of the front frequency maxima. Future changes are estimated using the high emissions scenario simulations (Representative Concentration Pathway 8.5). Projections show an overall decrease in front frequency in the Northern Hemisphere, with a poleward shift of the maxima of front frequency and a strong decrease at high latitudes where the temperature gradient is decreased. The Southern Hemisphere shows a poleward shift of the frequency maximum, consistent with previous storm track studies.
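
    A common form of the thermal front parameter is TFP = −∇|∇T| · ∇T/|∇T|; the gridded sketch below (synthetic temperature field, uniform grid spacing assumed) shows one way to evaluate it, though the published method may differ in detail.

```python
import numpy as np

def thermal_front_parameter(T, dx=1.0, dy=1.0):
    """TFP = -grad(|grad T|) . (grad T / |grad T|) on a regular grid.
    Large positive values flag the warm-air side of strong baroclinic zones."""
    dTdy, dTdx = np.gradient(T, dy, dx)
    mag = np.hypot(dTdx, dTdy)
    dmagdy, dmagdx = np.gradient(mag, dy, dx)
    eps = 1e-12                                   # avoid division by zero
    return -(dmagdx * dTdx + dmagdy * dTdy) / (mag + eps)

# Synthetic example: a temperature field with a sharp meridional front.
y = np.linspace(-10, 10, 200)
T = 280 + 10 * np.tanh(y)[:, None] * np.ones((1, 100))
print(thermal_front_parameter(T).shape)
```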

  19. Functional Role of the Front and Back Legs During a Track Start with Special Reference to an Inverted Pendulum Model in College Swimmers.

    PubMed

    Ikeda, Yusuke; Ichikawa, Hiroshi; Nara, Rio; Baba, Yasuhiro; Shimoyama, Yoshimitsu; Kubo, Yasuyuki

    2016-10-01

    This study investigated factors that determine the velocity of the center of mass (CM) and flight distance from a track start to devise effective technical and physical training methods. Nine male and 5 female competitive swimmers participated in this study. Kinematics and ground reaction forces of the front and back legs were recorded using a video camera and force plates. The track start was modeled as an inverted pendulum system including a compliant leg, connecting the CM and front edge of the starting block. The increase in the horizontal velocity of the CM immediately after the start signal was closely correlated with the rotational component of the inverted pendulum. This rotational component at hands-off was significantly correlated with the average vertical force of the back plate from the start signal to hands-off (r = .967, P < .001). The flight distance / height was significantly correlated with the average vertical force of the front plate from the back foot-off to front foot-off (r = .783, P < .01). The results indicate that the legs on the starting block in the track start play a different role in the behavior of the inverted pendulum.

  20. Experimental and computational investigation of microwave interferometry (MI) for detonation front characterization

    NASA Astrophysics Data System (ADS)

    Mays, Owen; Tringe, Joe; Souers, Clark; Lauderbach, Lisa; Baluyot, Emer; Converse, Mark; Kane, Ron

    2017-06-01

    Microwave interferometry (MI) presents several advantages over more traditional existing shock and deflagration front diagnostics. Most importantly, it directly interrogates these fronts, instead of measuring the evolution of containment surfaces or explosive edges. Here we present the results of MI measurements on detonator-initiated cylinder tests, as well as on deflagration-to-detonation transition experiments, with emphasis on optimization of signal strength through coupling devices and through microwave-transparent windows. Full-wave electromagnetic field finite element simulations were employed to better understand microwave coupling into porous and near full theoretical maximum density (TMD) explosives. HMX and TATB-based explosives were investigated. Data was collected simultaneously at 26.5 GHz and 39 GHz, allowing for direct comparison of the front characteristics and providing insight into the dielectric properties of explosives at these high frequencies. MI measurements are compared against detonation velocity results from photonic Doppler velocimetry probes and high speed cameras, demonstrating the accuracy of the MI technique. Our results illustrate features of front propagation behavior that are difficult to observe with other techniques. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  1. Front blind spot crashes in Hong Kong.

    PubMed

    Cheng, Yuk Ki; Wong, Koon Hung; Tao, Chi Hang; Tam, Cheok Ning; Tam, Yiu Yan; Tsang, Cheuk Nam

    2016-09-01

    In 2012-2014, our laboratory had investigated a total of 9 suspected front blind spot crashes, in which the medium and heavy goods vehicles pulled away from rest and rolled over the pedestrians, who were crossing immediately in front of the vehicles. The drivers alleged that they did not see any pedestrians through the windscreens or the front blind spot mirrors. Forensic assessment of the goods vehicles revealed the existence of front blind spot zones in 3 out of these 9 accident vehicles, which were attributed to the poor mirror adjustments or even the absence of a front blind spot mirror altogether. In view of this, a small survey was devised involving 20 randomly selected volunteers and their goods vehicles and 5 out of these vehicles had blind spots at the front. Additionally, a short questionnaire was conducted on these 20 professional lorry drivers and it was shown that most of them were not aware of the hazards of blind spots immediately in front of their vehicles, and many did not use the front blind spot mirrors properly. A simple procedure for quick measurements of the coverage of front blind spot mirrors using a coloured plastic mat with dimensional grids was also introduced and described in this paper. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Understanding the Function of Circular Polarisation Vision in Mantis Shrimps: Building a C-Pol Camera

    DTIC Science & Technology

    2008-10-24

    instrumentation from hand-held to remote sensing (RS) and used to address problems such as coral bleaching or algal blooms. Very recent work has...
    Fig. 1: A mantis shrimp looking out from the front entrance of its burrow. This and other species live on coral reefs and in other shallow...

  3. Guidoni in front of Node 1/Unity hatch

    NASA Image and Video Library

    2001-04-27

    ISS002-E-6128 (27 April 2001) --- Umberto Guidoni of the European Space Agency (ESA), STS-100 mission specialist, poses for a photograph in Unity Node 1 as the hatch to the Multipurpose Logistics Module (MPLM) Raphaello is being closed near the end of the STS-100 mission. The image was taken with a digital still camera.

  4. Why Are Faces Denser in the Visual Experiences of Younger than Older Infants?

    ERIC Educational Resources Information Center

    Jayaraman, Swapnaa; Fausey, Caitlin M.; Smith, Linda B.

    2017-01-01

    Recent evidence from studies using head cameras suggests that the frequency of faces directly in front of infants "declines" over the first year and a half of life, a result that has implications for the development of and evolutionary constraints on face processing. Two experiments tested 2 opposing hypotheses about this observed…

  5. Measuring fire behavior with photography

    Treesearch

    Hubert B. Clements; Darold E. Ward; Carl W. Adkins

    1983-01-01

    Photography is practical for recording and measuring some aspects of forest fire behavior if the scale and perspective can be determined. This paper describes a photogrammetric method for measuring flame height and rate of spread for fires on flat terrain. The flames are photographed at known times with a camera in front of the advancing fire. Scale and perspective of...

  6. Center of parcel with mosaics. Mosaics consist of everyday throwaway ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Center of parcel with mosaics. Mosaics consist of everyday throwaway objects of all kinds set in concrete mortar on ground. Leaning Tower of Bottle Village in front of Rumpus Room primary façade with 12' scale (in tenths). Camera facing north. - Grandma Prisbrey's Bottle Village, 4595 Cochran Street, Simi Valley, Ventura County, CA

  7. From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth

    NASA Image and Video Library

    2015-08-05

    This animation shows images of the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth - one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA's Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).

  8. From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth

    NASA Image and Video Library

    2017-12-08

    This animation still image shows the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth - one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA's Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).

  9. KSC-02pd1374

    NASA Image and Video Library

    2002-09-26

    KENNEDY SPACE CENTER, FLA. - A view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  10. KSC-02pd1376

    NASA Image and Video Library

    2002-09-26

    KENNEDY SPACE CENTER, FLA. - A closeup view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  11. KSC-02pd1375

    NASA Image and Video Library

    2002-09-26

    KENNEDY SPACE CENTER, FLA. - A closeup view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  12. A Preliminary Study of Flame Propagation in a Spark-ignition Engine

    NASA Technical Reports Server (NTRS)

    Rothrock, A M; Spencer, R C

    1937-01-01

    The N.A.C.A. combustion apparatus was altered to operate as a fuel-injection, spark-ignition engine, and a preliminary study was made of the combustion of gasoline-air mixtures at various air-fuel ratios. Air-fuel ratios ranging from 10 to 21.6 were investigated. Records from an optical indicator and films from a high-speed motion-picture camera were the chief sources of data. Schlieren photography was used for an additional study. The results show that the altered combustion apparatus has characteristics similar to those of a conventional spark-ignition engine and should be useful in studying phenomena in spark-ignition engines. The photographs show the flame front to be irregularly shaped rather than uniformly curved. With a theoretically correct mixture the reaction, as indicated by the photographs, is not completed in the flame front but continues for some time after the combustion front has traversed the mixture.

  13. Plasma ignition for laser propulsion

    NASA Technical Reports Server (NTRS)

    Askew, R. F.

    1982-01-01

    For a specific optical system a pulsed carbon dioxide laser having an energy output of up to 15 joules was used to initiate a plasma in air at one atmosphere pressure. The spatial and temporal development of the plasma were measured using a multiframe image converter camera. In addition, the time-dependent velocity of the laser-supported plasma front, which moves opposite to the direction of the laser pulse, was measured in order to characterize the type of wavefront developed. Reliable and reproducible spark initiation was achieved. The lifetime of the highly dense plasma at the initial focal spot was determined to be less than 100 nanoseconds. The plasma front propagates toward the laser at a variable speed ranging from zero to 1.6 × 10^6 m/s. The plasma front propagates for a total distance of approximately five centimeters for the energy and laser pulse shape employed.

  14. Robotic Vehicle Communications Interoperability

    DTIC Science & Technology

    1988-08-01

    Vehicle controls listed include: starter (cold start), fire suppression, fording control, fuel control, fuel tank selector, garage toggle, gear selector, hazard warning... Sensor items listed include: sensor switch, video, radar, IR thermal imaging system, image intensifier, laser ranger, video camera selector (forward, stereo, rear), sensor control...

  15. An Application for Driver Drowsiness Identification based on Pupil Detection using IR Camera

    NASA Astrophysics Data System (ADS)

    Kumar, K. S. Chidanand; Bhowmick, Brojeshwar

    A driver drowsiness identification system is proposed that generates alarms when the driver falls asleep while driving. A number of different physical phenomena can be monitored and measured in order to detect driver drowsiness in a vehicle. This paper presents a methodology for driver drowsiness identification using an IR camera by detecting and tracking the pupils. The face region is first determined using the Euler number and template matching. The pupils are then located in the face region. In subsequent video frames, the pupils are tracked in order to determine whether the eyes are open or closed. If the eyes remain closed for several consecutive frames, the driver is judged to be fatigued and an alarm is generated.
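
    The decision logic described (eyes closed for several consecutive frames triggers an alarm) reduces to a small counter; the per-frame eye state is assumed to come from the pupil tracker and is faked here.

```python
def drowsiness_alarm(eye_closed_stream, closed_frames_threshold=15):
    """Yield True (raise alarm) whenever the eyes have been closed for
    `closed_frames_threshold` consecutive frames."""
    consecutive = 0
    for closed in eye_closed_stream:
        consecutive = consecutive + 1 if closed else 0
        yield consecutive >= closed_frames_threshold

# Hypothetical per-frame output of the pupil tracker (True = no pupil found / eyes closed):
frames = [False] * 10 + [True] * 20 + [False] * 5
print([i for i, alarm in enumerate(drowsiness_alarm(frames)) if alarm][:3])   # -> [24, 25, 26]
```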

  16. FBI Fingerprint Image Capture System High-Speed-Front-End throughput modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rathke, P.M.

    1993-09-01

    The Federal Bureau of Investigation (FBI) has undertaken a major modernization effort called the Integrated Automated Fingerprint Identification System (IAFIS). This system will provide centralized identification services using automated fingerprint, subject descriptor, mugshot, and document processing. A high-speed Fingerprint Image Capture System (FICS) is under development as part of the IAFIS program. The FICS will capture digital and microfilm images of FBI fingerprint cards for input into a central database. One FICS design supports two front-end scanning subsystems, known as the High-Speed-Front-End (HSFE) and Low-Speed-Front-End, to supply image data to a common data processing subsystem. The production rate of the HSFE is critical to meeting the FBI's fingerprint card processing schedule. A model of the HSFE has been developed to help identify the issues driving the production rate, assist in the development of component specifications, and guide the evolution of an operations plan. A description of the model development is given, the assumptions are presented, and some HSFE throughput analysis is performed.

  17. QWIP technology for both military and civilian applications

    NASA Astrophysics Data System (ADS)

    Gunapala, Sarath D.; Kukkonen, Carl A.; Sirangelo, Mark N.; McQuiston, Barbara K.; Chehayeb, Riad; Kaufmann, M.

    2001-10-01

    Advanced thermal imaging infrared cameras have been a cost-effective and reliable method to obtain the temperature of objects. Quantum Well Infrared Photodetector (QWIP) based thermal imaging systems have advanced the state of the art and are the most sensitive commercially available thermal systems. QWIP Technologies LLC, under exclusive agreement with the California Institute of Technology (Caltech), is currently manufacturing the QWIP-Chip™, a 320 × 256 element, bound-to-quasibound QWIP FPA. The camera operates in the long-wave IR band, spectrally peaked at 8.5 μm. The camera is equipped with a 32-bit floating-point digital signal processor combined with multi-tasking software, delivering a digital acquisition resolution of 12 bits at a nominal power consumption of less than 50 W. With a variety of video interface options, remote control capability via an RS-232 connection, and an integrated control driver circuit to support motorized zoom- and focus-compatible lenses, this camera design has excellent application in both the military and commercial sectors. In the area of remote sensing, high-performance QWIP systems can be used for high-resolution target recognition as part of a new system of airborne platforms (including UAVs). Such systems also have direct application in law enforcement, surveillance, industrial monitoring and road hazard detection systems. This presentation will cover the current performance of the commercial QWIP cameras, conceptual platform systems and advanced image processing for use in both military remote sensing and civilian applications currently being developed in road hazard monitoring.

  18. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2012-10-01

    In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application combines MonoSLAM (Single Camera Simultaneous Localization and Mapping) with computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy are enhanced by the use of fiduciary markers fixed to the skull. MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter, while the fiduciary markers provide a reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the augmented reality marker is obtained by a registration procedure using the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid by solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.
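
    Marker-based pose estimation of the kind the AR overlay relies on can be sketched with OpenCV's solvePnP; the marker size, camera intrinsics and detected corners are placeholder assumptions, and the real system additionally fuses this with the MonoSLAM filter state.

```python
import numpy as np
import cv2

# 3-D corners of a square AR marker of assumed side 50 mm, in the marker frame (z = 0).
s = 0.05
object_pts = np.array([[-s/2,  s/2, 0], [ s/2,  s/2, 0],
                       [ s/2, -s/2, 0], [-s/2, -s/2, 0]], dtype=np.float32)

# Placeholder detected corners [px] in the 320x240 tablet image and assumed intrinsics.
image_pts = np.array([[150, 100], [210, 102], [208, 160], [148, 158]], dtype=np.float32)
K = np.array([[300.0, 0, 160.0], [0, 300.0, 120.0], [0, 0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
print("camera-to-marker translation [m]:", tvec.ravel())   # pose used to place the overlay
```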

  19. 48 CFR 23.302 - Policy.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... (Material Safety Data Sheet, Preparation and Submission of) includes criteria for identification of... WORKPLACE Hazardous Material Identification and Material Safety Data 23.302 Policy. (a) The Occupational.... Accordingly, offerors and contractors are required to submit hazardous materials data whenever the supplies...

  20. 48 CFR 23.302 - Policy.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (Material Safety Data Sheet, Preparation and Submission of) includes criteria for identification of... WORKPLACE Hazardous Material Identification and Material Safety Data 23.302 Policy. (a) The Occupational.... Accordingly, offerors and contractors are required to submit hazardous materials data whenever the supplies...

  1. The Effect of a Pre-Lens Aperture on the Temperature Range and Image Uniformity of Microbolometer Infrared Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dinwiddie, Ralph Barton; Parris, Larkin S.; Lindal, John M.

    This paper explores the temperature range extension of long-wavelength infrared (LWIR) cameras by placing an aperture in front of the lens. An aperture smaller than the lens will reduce the radiance reaching the sensor, allowing the camera to image targets much hotter than typically allowable. These higher temperatures were accurately determined after developing a correction factor which was applied to the built-in temperature calibration. The relationship between aperture diameter and temperature range is linear. The effect of pre-lens apertures on image uniformity is a form of anti-vignetting, meaning the corners appear brighter (hotter) than the rest of the image. An example of using this technique to measure temperatures of high-melting-point polymers during 3D printing provides valuable information on the time required for the weld-line temperature to fall below the glass transition temperature.

  2. NAOMI instrument: a product line of compact and versatile cameras designed for HR and VHR missions in Earth observation

    NASA Astrophysics Data System (ADS)

    Luquet, Ph.; Brouard, L.; Chinal, E.

    2017-11-01

    Astrium has developed a product line of compact and versatile instruments for HR and VHR missions in Earth observation. These cameras consist of a Silicon Carbide Korsch-type telescope, a focal plane with one or several retina modules - including five-line CCDs, optical filters and front-end electronics - and the instrument main electronics. Several versions have been developed, with telescope pupil diameters from 200 mm up to 650 mm, covering a large range of GSD (from 2.5 m down to sub-metric) and swath (from 10 km up to 30 km) and compatible with different types of platform. Nine cameras have already been manufactured for five different programs: ALSAT2 (Algeria), SSOT (Chile), SPOT6 & SPOT7 (France), KRS (Kazakhstan) and VNREDSat (Vietnam). Two of them have already been launched and are delivering high-quality images.

  3. Acute gastroenteritis and video camera surveillance: a cruise ship case report.

    PubMed

    Diskin, Arthur L; Caro, Gina M; Dahl, Eilif

    2014-01-01

    A 'faecal accident' was discovered in front of a passenger cabin of a cruise ship. After proper cleaning of the area the passenger was approached, but denied having any gastrointestinal symptoms. However, when confronted with surveillance camera evidence, she admitted having the accident and even bringing the towel stained with diarrhoea back to the pool towel bin. She was isolated until the next port, where she was disembarked. Acute gastroenteritis (AGE) caused by Norovirus is very contagious and easily transmitted from person to person on cruise ships. The main purpose of isolation is to avoid public vomiting and faecal accidents. Quickly identifying and isolating contagious passengers and crew and ensuring their compliance are key elements in outbreak prevention and control, but this is difficult if ill persons deny symptoms. All passenger ships visiting US ports now have surveillance video cameras, which under certain circumstances can assist in finding potential index cases for AGE outbreaks.

  4. Identification of Mobile Phones Using the Built-In Magnetometers Stimulated by Motion Patterns.

    PubMed

    Baldini, Gianmarco; Dimc, Franc; Kamnik, Roman; Steri, Gary; Giuliani, Raimondo; Gentile, Claudio

    2017-04-06

    We investigate the identification of mobile phones through their built-in magnetometers. These electronic components have started to be widely deployed in mass market phones in recent years, and they can be exploited to uniquely identify mobile phones due to their physical differences, which appear in the digital output they generate. This is similar to approaches reported in the literature for other components of the mobile phone, including the digital camera, the microphones or their RF transmission components. In this paper, the identification is performed through an inexpensive device made up of a platform that rotates the mobile phone under test and a fixed magnet positioned on the edge of the rotating platform. When the mobile phone passes in front of the fixed magnet, the built-in magnetometer is stimulated, and its digital output is recorded and analyzed. For each mobile phone, the experiment is repeated over six different days to ensure consistency in the results. A total of 10 phones of different brands and models, or of the same model, were used in our experiment. The digital output from the magnetometers is synchronized and correlated, and statistical features are extracted to generate a fingerprint of the built-in magnetometer and, consequently, of the mobile phone. An SVM machine learning algorithm is used to classify the mobile phones on the basis of the extracted statistical features. Our results show that inter-model classification (i.e., classification of different models and brands) is possible with great accuracy, but intra-model classification (i.e., phones of the same model with different serial numbers) is more challenging, the resulting accuracy being just slightly above random choice.
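
    The sketch below illustrates the general recipe described above, i.e. statistical features extracted from each synchronized magnetometer trace followed by an SVM classifier; the feature set, the synthetic traces and the SVM parameters are stand-ins, not those used in the study.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def features(trace):
    """Statistical features of one synchronized magnetometer trace (illustrative set)."""
    return [trace.mean(), trace.std(), skew(trace), kurtosis(trace),
            np.ptp(trace), np.percentile(trace, 75) - np.percentile(trace, 25)]

# Toy data standing in for recorded magnetometer outputs: 10 phones, 60 rotations each,
# where each phone has a slightly different bias and gain (its "fingerprint").
rng = np.random.default_rng(0)
X, y = [], []
for phone_id in range(10):
    bias, gain = rng.normal(0, 5), 1 + rng.normal(0, 0.05)
    for _ in range(60):
        t = np.linspace(0, 2 * np.pi, 500)
        trace = gain * 40 * np.sin(t) + bias + rng.normal(0, 1.0, t.size)
        X.append(features(trace))
        y.append(phone_id)

X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(y),
                                                    test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))
```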

  5. Identification of Mobile Phones Using the Built-In Magnetometers Stimulated by Motion Patterns

    PubMed Central

    Baldini, Gianmarco; Dimc, Franc; Kamnik, Roman; Steri, Gary; Giuliani, Raimondo; Gentile, Claudio

    2017-01-01

    We investigate the identification of mobile phones through their built-in magnetometers. These electronic components have started to be widely deployed in mass market phones in recent years, and they can be exploited to uniquely identify mobile phones due to their physical differences, which appear in the digital output they generate. This is similar to approaches reported in the literature for other components of the mobile phone, including the digital camera, the microphones or their RF transmission components. In this paper, the identification is performed through an inexpensive device made up of a platform that rotates the mobile phone under test and a fixed magnet positioned on the edge of the rotating platform. When the mobile phone passes in front of the fixed magnet, the built-in magnetometer is stimulated, and its digital output is recorded and analyzed. For each mobile phone, the experiment is repeated over six different days to ensure consistency in the results. A total of 10 phones of different brands and models, or of the same model, were used in our experiment. The digital output from the magnetometers is synchronized and correlated, and statistical features are extracted to generate a fingerprint of the built-in magnetometer and, consequently, of the mobile phone. An SVM machine learning algorithm is used to classify the mobile phones on the basis of the extracted statistical features. Our results show that inter-model classification (i.e., classification of different models and brands) is possible with great accuracy, but intra-model classification (i.e., phones of the same model with different serial numbers) is more challenging, the resulting accuracy being just slightly above random choice. PMID:28383482

  6. Hazard identification by methods of animal-based toxicology.

    PubMed

    Barlow, S M; Greig, J B; Bridges, J W; Carere, A; Carpy, A J M; Galli, C L; Kleiner, J; Knudsen, I; Koëter, H B W M; Levy, L S; Madsen, C; Mayer, S; Narbonne, J-F; Pfannkuch, F; Prodanchuk, M G; Smith, M R; Steinberg, P

    2002-01-01

    This paper is one of several prepared under the project "Food Safety In Europe: Risk Assessment of Chemicals in Food and Diet" (FOSIE), a European Commission Concerted Action Programme, organised by the International Life Sciences Institute, Europe (ILSI). The aim of the FOSIE project is to review the current state of the science of risk assessment of chemicals in food and diet, by consideration of the four stages of risk assessment, that is, hazard identification, hazard characterisation, exposure assessment and risk characterisation. The contribution of animal-based methods in toxicology to hazard identification of chemicals in food and diet is discussed. The importance of first applying existing technical and chemical knowledge to the design of safety testing programs for food chemicals is emphasised. There is consideration of the presently available and commonly used toxicity testing approaches and methodologies, including acute and repeated dose toxicity, reproductive and developmental toxicity, neurotoxicity, genotoxicity, carcinogenicity, immunotoxicity and food allergy. They are considered from the perspective of whether they are appropriate for assessing food chemicals and whether they are adequate to detect currently known or anticipated hazards from food. Gaps in knowledge and future research needs are identified; research on these could lead to improvements in the methods of hazard identification for food chemicals. The potential impact of some emerging techniques and toxicological issues on hazard identification for food chemicals, such as new measurement techniques, the use of transgenic animals, assessment of hormone balance and the possibilities for conducting studies in which common human diseases have been modelled, is also considered.

  7. Opportunity Rolls Free Again (Four Wheels)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This animated piece illustrates the recent escape of NASA's Mars Exploration Rover Opportunity from dangerous, loose material on the vast plains leading to the rover's next long-term target, 'Victoria Crater.'

    A series of images from the front and rear hazard-avoidance cameras make up this brief movie chronicling the challenge Opportunity faced to free itself from the ripple dubbed 'Jammerbugt.' Each quadrant shows one of the rover's four corner wheels: left front wheel in upper left, right front wheel in upper right, rear wheels in the lower quadrants. The wheels became partially embedded in the ripple at the end of a drive on Opportunity's 833rd Martian day, or sol (May 28, 2006). The images in this clip were taken on sols 836 through 841 (May 31 through June 5, 2006).

    Scientists and engineers who had been elated at the meters of progress the rover had been making in earlier drives were happy for even centimeters of advance per sol as they maneuvered their explorer through the slippery material of Jammerbugt. The wheels reached solid footing on a rock outcrop on the final sol of this sequence.

    The science and engineering teams appropriately chose the ripple's informal name from the name of a bay on the north coast of Denmark. Jammerbugt, or Jammerbugten, loosely translated, means Bay of Lamentation or Bay of Wailing. The shipping route from the North Sea to the Baltic passes Jammerbugt on its way around the northern tip of Jutland. This has always been an important trade route and many ships still pass by the bay. The prevailing wind directions are typically northwest to southwest, with the strongest winds and storms tending to blow from the northwest. A northwesterly wind will blow straight into the Jammerbugt, towards shore. Therefore, in the age of sail, many ships sank there during storms. The shore is sandy, but can have strong waves, so running aground was very dangerous even though there are no rocks.

    Fortunately, Opportunity weathered its 'Jammerbugt' and is again on its way toward Victoria Crater.

  8. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    USGS Publications Warehouse

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 
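
    A minimal time-lapse sketch in the spirit of the system described above is shown below, assuming the Raspberry Pi picamera package; the actual USGS scripts, GPS time discipline and data management are more involved, and the interval and output path here are placeholders.

```python
# Minimal time-lapse sketch, assuming the Raspberry Pi "picamera" package is installed.
import time
from datetime import datetime, timezone

import picamera

INTERVAL_S = 300               # capture every 5 minutes (placeholder)
OUT_DIR = "/home/pi/images"    # hypothetical storage path

with picamera.PiCamera(resolution=(2592, 1944)) as camera:
    camera.start_preview()
    time.sleep(2)              # let exposure and gain settle before the first frame
    while True:
        # Timestamp in UTC; on the deployed system the clock is disciplined by the
        # on-board GPS module so that filenames stay accurate without a network.
        stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
        camera.capture(f"{OUT_DIR}/kilauea_{stamp}.jpg")
        time.sleep(INTERVAL_S)
```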

  9. Measurements of the ablation-front trajectory and low-mode nonuniformity in direct-drive implosions using x-ray self-emission shadowgraphy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michel, D. T.; Davis, A. K.; Armstrong, W.

    Self-emission x-ray shadowgraphy provides a method to measure the ablation-front trajectory and low-mode nonuniformity of a target imploded by directly illuminating a fusion capsule with laser beams. The technique uses time-resolved images of soft x-rays (> 1 keV) emitted from the coronal plasma of the target imaged onto an x-ray framing camera to determine the position of the ablation front. Methods used to accurately measure the ablation-front radius (δR = ±1.15 μm), image-to-image timing (δ(Δt) = ±2.5 ps) and absolute timing (δt = ±10 ps) are presented. Angular averaging of the images provides an average radius measurement of δ(R_av) = ±0.15 μm and an error in velocity of δV/V = ±3%. This technique was applied on the Omega Laser Facility and the National Ignition Facility.

  10. Measurements of the ablation-front trajectory and low-mode nonuniformity in direct-drive implosions using x-ray self-emission shadowgraphy

    DOE PAGES

    Michel, D. T.; Davis, A. K.; Armstrong, W.; ...

    2015-07-08

    Self-emission x-ray shadowgraphy provides a method to measure the ablation-front trajectory and low-mode nonuniformity of a target imploded by directly illuminating a fusion capsule with laser beams. The technique uses time-resolved images of soft x-rays (> 1 keV) emitted from the coronal plasma of the target imaged onto an x-ray framing camera to determine the position of the ablation front. Methods used to accurately measure the ablation-front radius (δR = ±1.15 μm), image-to-image timing (δ(Δt) = ±2.5 ps) and absolute timing (δt = ±10 ps) are presented. Angular averaging of the images provides an average radius measurement of δ(R_av) = ±0.15 μm and an error in velocity of δV/V = ±3%. This technique was applied on the Omega Laser Facility and the National Ignition Facility.

  11. A UAV System for Observing Volcanoes and Natural Hazards

    NASA Astrophysics Data System (ADS)

    Saggiani, G.; Persiani, F.; Ceruti, A.; Tortora, P.; Troiani, E.; Giuletti, F.; Amici, S.; Buongiorno, M.; Distefano, G.; Bentini, G.; Bianconi, M.; Cerutti, A.; Nubile, A.; Sugliani, S.; Chiarini, M.; Pennestri, G.; Petrini, S.; Pieri, D.

    2007-12-01

    Fixed or rotary wing manned aircraft are currently the most commonly used platforms for airborne reconnaissance in response to natural hazards, such as volcanic eruptions, oil spills, wild fires and earthquakes. Such flights are very often undertaken in hazardous flying conditions (e.g., turbulence, downdrafts, reduced visibility, close proximity to dangerous terrain) and can be expensive. To mitigate these two fundamental issues, safety and cost, we are exploring the use of small (less than 100 kg), relatively inexpensive, but effective, unmanned aerial vehicles (UAVs) for this purpose. As an operational test, in 2004 we flew a small autonomous UAV in the airspace above and around Stromboli Volcano. Based in part on this experience, we are adapting the RAVEN UAV system for such natural hazard surveillance missions. RAVEN has a 50 km range, a 3.5 m wingspan, a main fuselage length of 4.60 m, and a maximum weight of 56 kg. It has autonomous flight capability and a ground control station for mission planning and control. It will carry a variety of imaging devices, including a visible camera and an IR camera. It will also carry an experimental Fourier micro-interferometer based on MOEMS technology (developed by the IMM Institute of CNR) to detect atmospheric trace gases. Such flexible, capable, and easy-to-deploy UAV systems may significantly shorten the time necessary to characterize the nature and scale of natural hazard threats if used from the outset of, and systematically during, natural hazard events. When appropriately utilized, such UAVs can provide a powerful new hazard mitigation and documentation tool for civil protection hazard responders. This research was carried out under the auspices of the Italian government and, in part, under contract to NASA at the Jet Propulsion Laboratory.

  12. USE OF EXISTING DATABASES FOR THE PURPOSE OF HAZARD IDENTIFICATION: AN EXAMPLE

    EPA Science Inventory

    Keywords: existing databases, hazard identification, cancer mortality, birth malformations

    Background: Associations between adverse health effects and environmental exposures are difficult to study, because exposures may be widespread, low-dose in nature, and common thro...

  13. RealityFlythrough: Enhancing Situational Awareness for Medical Response to Disasters Using Ubiquitous Video

    PubMed Central

    McCurdy, Neil J.; Griswold, William G; Lenert, Leslie A.

    2005-01-01

    The first moments at a disaster scene are chaotic. The command center initially operates with little knowledge of hazards, geography and casualties, building up knowledge of the event slowly as information trickles in over voice radio channels. RealityFlythrough is a tele-presence system that stitches together live video feeds in real time, using the principle of visual closure, to give command center personnel the illusion of being able to explore the scene interactively by moving smoothly between the video feeds. Using RealityFlythrough, medical, fire, law enforcement, hazardous materials, and engineering experts may be able to achieve situational awareness earlier and better manage scarce resources. The RealityFlythrough system is composed of camera units with off-the-shelf GPS and orientation systems and a server/viewing station that offers access to images collected by the camera units in real time by position/orientation. In initial field testing using an experimental mesh 802.11 wireless network, two camera unit operators were able to create an interactive image of a simulated disaster scene in about five minutes. PMID:16779092

  14. Assessing Subaqueous Mudflow Hazard on the Mississippi River Delta Front, Part 1: A Historical Perspective on Mississippi River Delta Front Sedimentation

    NASA Astrophysics Data System (ADS)

    Maloney, J. M.; Bentley, S. J.; Obelcz, J.; Xu, K.; Miner, M. D.; Georgiou, I. Y.; Hanegan, K.; Keller, G.

    2014-12-01

    Subaqueous mudflows are known to be ubiquitous across the Mississippi River delta front (MRDF) and have been identified as a hazard to offshore infrastructure. Among other factors, sediment accumulation rates and patterns play an important role in governing the stability of delta front sediment. High sedimentation rates result in underconsolidation, slope steepening, and increased biogenic gas production, which are all known to decrease stability. Sedimentation rates are highly variable across the MRDF, but are highest near the mouth of Southwest Pass, which carries the largest percentage of Mississippi River sediment into the Gulf of Mexico. Since the 1950s, the sediment load of the Mississippi River has decreased by ~50% due to dam construction upstream. The impact of this decreased sediment load on MRDF mudflow dynamics has yet to be examined. We compiled MRDF bathymetric datasets, including historical charts, industry and academic surveys, and NOAA data, collected between 1764 and 2009, in order to identify historic trends in sedimentation patterns. The progradation of Southwest Pass (measured at 10 m depth contour) has slowed from ~66 m/yr between 1764 and 1940 to ~25 m/yr between 1940 and 1979, with evidence of further deceleration from 1979-2009. Decreased rates of progradation are also observed at South Pass and Pass A Loutre. Advancement of the delta also decelerated in deeper water (15-90 m) offshore from Southwest Pass. In this area, from 1940-1979, depth contours advanced seaward ~25 m/yr, but did not advance from 1979-2005. Furthermore, over the same area and time ranges, the sediment accumulation rate decreased by ~82%. We expect these sedimentation trends are occurring across the delta front, with potential impacts on spatial and temporal patterns of subaqueous mudflows. The MRDF appears to be entering a phase of decline, which will likely be accelerated by future upstream sediment diversion projects. New geophysical data will be required to assess potential mudflow hazards associated with new MRDF sedimentation rates and patterns (See Part 2, Obelcz et al.).

  15. Gidzenko in front of flight deck windows

    NASA Image and Video Library

    2001-03-12

    STS102-E-5138 (12 March 2001) --- Cosmonaut Yuri P. Gidzenko, now a member of the STS-102 crew, on Discovery's flight deck. Lake Nasser, in Egypt, can be seen through the overhead flight deck window in the background. Gidzenko, representing Rosaviakosmos, had been onboard the International Space Station (ISS) since early November 2000. The photograph was taken with a digital still camera.

  16. Glenn in his sleep rack on the Discovery's middeck

    NASA Image and Video Library

    1998-10-29

    STS095-E-5032 (10-29-98) --- During Flight Day 1 activities, U.S. Sen. John H. Glenn Jr. takes part in his assigned medical studies as a payload specialist onboard the Space Shuttle Discovery. A flashlight floats in front of Sen. Glenn. The photo was made with an electronic still camera (ESC) at 11:48:35 GMT, Oct. 29.

  17. Sojourner Rover View of Pathfinder Lander

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Image of Pathfinder Lander on Mars taken from Sojourner Rover left front camera on sol 33. The IMP (on the lattice mast) is looking at the rover. Airbags are prominent, and the meteorology mast is shown to the right. Lowermost rock is Ender, with Hassock behind it and Yogi on the other side of the lander.

    NOTE: original caption as published in Science Magazine

  18. Nonpermissive HLA-DPB1 disparity is a significant independent risk factor for mortality after unrelated hematopoietic stem cell transplantation.

    PubMed

    Crocchiolo, Roberto; Zino, Elisabetta; Vago, Luca; Oneto, Rosi; Bruno, Barbara; Pollichieni, Simona; Sacchi, Nicoletta; Sormani, Maria Pia; Marcon, Jessica; Lamparelli, Teresa; Fanin, Renato; Garbarino, Lucia; Miotti, Valeria; Bandini, Giuseppe; Bosi, Alberto; Ciceri, Fabio; Bacigalupo, Andrea; Fleischhauer, Katharina

    2009-08-13

    The importance of donor-recipient human leukocyte antigen (HLA)-DPB1 matching for the clinical outcome of unrelated hematopoietic stem cell transplantation (HSCT) is controversial. We have previously described an algorithm for nonpermissive HLA-DPB1 disparities involving HLA-DPB1*0901,*1001,*1701,*0301,*1401,*4501, based on T-cell alloreactivity patterns. By revisiting the immunogenicity of HLA-DPB1*02, a modified algorithm was developed and retrospectively tested in 621 unrelated HSCTs facilitated through the Italian Registry for oncohematologic adult patients. The modified algorithm proved to be markedly more predictive of outcome than the original one, with significantly higher Kaplan-Meier probabilities of 2-year survival in permissive compared with nonpermissive transplantations (55% vs 39%, P = .005). This was the result of increased adjusted hazards of nonrelapse mortality (hazard ratio [HR] = 1.74; confidence interval [CI], 1.19-2.53; P = .004) but not of relapse (HR = 1.02; CI, 0.73-1.42; P = .92). The increase in the hazards of overall mortality by nonpermissive HLA-DPB1 disparity was similar in 10 of 10 (HR = 2.12; CI, 1.23-3.64; P = .006) and 9 of 10 allele-matched transplantations (HR = 2.21; CI, 1.28-3.80; P = .004), both in early-stage and in advanced-stage disease. These data call for revisiting current HLA matching strategies for unrelated HSCT, suggesting that searches should be directed up-front toward identification of HLA-DPB1 permissive, 10 of 10 or 9 of 10 matched donors.

  19. Identification and ecology of old ponderosa pine trees in the Colorado Front Range

    Treesearch

    Laurie Stroh Huckaby; Merrill R. Kaufmann; Paula J. Fornwalt; Jason M. Stoker; Chuck Dennis

    2003-01-01

    We describe the distinguishing physical characteristics of old ponderosa pine trees in the Front Range of Colorado, the processes that tend to preserve them, their past and present ecological significance, and their role in ecosystem restoration. Photographs illustrate identifying features of old ponderosa pines and show how to differentiate them from mature and young...

  20. Visual monitoring of the melting front propagation in a paraffin-based PCM

    NASA Astrophysics Data System (ADS)

    Charvát, Pavel; Štětina, Josef; Mauder, Tomáš; Klimeš, Lubomír

    Experiments were carried out in an environmental chamber with the aim of monitoring the melting front propagation in a rectangular cavity filled with a paraffin-based Phase Change Material (PCM). The PCM was contained in transparent containers, with the heat flux introduced by means of an electric heating element. A stabilized power source was used to maintain a constant heat output of the heating elements. The experiments were performed with the heat flux introduced at the side wall of the container and at the upper surface of the PCM. The paraffin-based PCM RT28HC, with a phase change temperature of 28 °C, was used in the experiments. The temperature in the environmental chamber was maintained at the melting temperature of the PCM. The propagation of the melting front was monitored with a digital camera, and temperatures at several locations were monitored with RTDs and thermocouples. Significant natural convection was observed for the heat flux introduced at the side wall of the container; as a result, the melting front propagated much faster at the top of the container than at its bottom. The heat flux introduced at the upper surface of the PCM resulted in almost one-dimensional propagation of the melting front. The acquired data are to be used for validation of an in-house developed numerical model based on the front-tracking method.

  1. Hazard Identification and Risk Assessment in Water Treatment Plant considering Environmental Health and Safety Practice

    NASA Astrophysics Data System (ADS)

    Falakh, Fajrul; Setiani, Onny

    2018-02-01

    A Water Treatment Plant (WTP) is an important piece of infrastructure for protecting human health and the environment, and aspects of environmental safety and health are a concern in its operation. This case study was conducted at a water treatment plant company in Semarang, Central Java, Indonesia. Hazard identification and risk assessment form one part of the occupational safety and health program at the risk management stage. The purpose of this study was to identify potential hazards using hazard identification methods and to evaluate them using risk assessment methods. The risk assessment is based on criteria for the severity and probability of an accident. The assessment identified 22 potential hazards in the water purification process. The hazards falling into the extreme category are chlorine leakage and industrial fire; these received the highest ratings because their impacts could endanger human life and the environment, up to the scale of an industrial disaster. The control measures proposed to avoid the potential hazards include the use of personal protective equipment, supported by management measures consistent with the hierarchy of hazard controls and with occupational safety and health programs such as issuing work permits and providing emergency response training, which help to address the potential hazards that have been identified.
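
    A minimal sketch of a severity-times-probability risk matrix of the kind used in such assessments is given below; the five-point scales and category boundaries are illustrative, not the thresholds used in the cited study.

```python
def risk_category(severity, probability):
    """severity and probability are scored 1 (lowest) to 5 (highest)."""
    score = severity * probability
    if score >= 20:
        return "extreme"
    if score >= 12:
        return "high"
    if score >= 6:
        return "moderate"
    return "low"

# Example: a chlorine leak scored as catastrophic (severity 5) and likely (probability 4).
print(risk_category(severity=5, probability=4))   # -> "extreme"
```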

  2. Human tracking over camera networks: a review

    NASA Astrophysics Data System (ADS)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. The task of tracking humans over camera networks is not only inherently challenging, due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for human tracking over camera networks are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed from two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed from the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on analyses of the current progress made toward human tracking techniques over camera networks.

  3. Remote gaze tracking system on a large display.

    PubMed

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-10-07

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.
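
    The abstract does not spell out the focus score, so the sketch below uses a common stand-in, the variance of the Laplacian of the eye image, computed with OpenCV; higher values indicate a sharper image and could drive the auto-focusing loop of the narrow view camera. The file name is hypothetical.

```python
import cv2

def focus_score(gray_image):
    """Variance of the Laplacian: higher values indicate a sharper (better focused) image."""
    return cv2.Laplacian(gray_image, cv2.CV_64F).var()

eye = cv2.imread("eye_crop.png", cv2.IMREAD_GRAYSCALE)  # hypothetical NVC eye crop
if eye is not None:
    print("focus score:", focus_score(eye))
```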

  4. Remote Gaze Tracking System on a Large Display

    PubMed Central

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-01-01

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°∼±0.775° and a speed of 5∼10 frames/s. PMID:24105351

  5. Interaction Between a Steady Detonation Wave in Nitromethane and Geometrical Complex Confinement Defects

    NASA Astrophysics Data System (ADS)

    Crouzet, B.; Soulard, L.; Carion, N.; Manczur, P.

    2007-12-01

    Two copper cylinder expansion tests were carried out on nitromethane. They differ from the classical cylinder test in that the liner includes evenly-spaced protruding circular defects. The aim is to study how a detonation front propagating in the liquid explosive interacts with the confining material defects. The subsequent motion of the metal, accelerated by the expanding detonation products, is measured using a range of diagnostic techniques: electrical probes, a rapid framing camera, a glass block associated with a streak camera and velocity laser interferometers. The different experimental records have been examined in the light of previous classical cylinder test measurements, simple 2D theoretical shock polar analysis results and 2D numerical simulations.

  6. Merry-Go-Round

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Dubbed 'Carousel,' the rock in this image was the target of the Mars Exploration Rover Opportunity science team's outcrop 'scuff test.' The image on the left, taken by the rover's navigation camera on sol 48 of the mission (March 12, 2004), shows the rock pre-scuff. On sol 51 (March 15, 2004), Opportunity slowly rotated its left front wheel on the rock, abrading it in the same way that geology students use a scratch test to determine the hardness of minerals. The image on the right, taken by the rover's navigation camera on sol 51, shows the rock post-scuff. In this image, it is apparent that Opportunity scratched the surface of 'Carousel' and deposited dirt that it was carrying in its wheel rims.

  7. The High Energy Detector of Simbol-X

    NASA Astrophysics Data System (ADS)

    Meuris, A.; Limousin, O.; Lugiez, F.; Gevin, O.; Blondel, C.; Le Mer, I.; Pinsard, F.; Cara, C.; Goetschy, A.; Martignac, J.; Tauzin, G.; Hervé, S.; Laurent, P.; Chipaux, R.; Rio, Y.; Fontignie, J.; Horeau, B.; Authier, M.; Ferrando, P.

    2009-05-01

    The High Energy Detector (HED) is one of the three detection units on board the Simbol-X detector spacecraft. It is placed below the Low Energy Detector so as to collect focused photons in the energy range from 8 to 80 keV. It consists of a mosaic of 64 independent cameras, divided into 8 sectors. Each elementary detection unit, called Caliste, is the hybridization of a 256-pixel Cadmium Telluride (CdTe) detector with full custom front-end electronics into a single component. The status of the HED design will be reported. The promising results obtained from the first micro-camera prototypes, called Caliste 64 and Caliste 256, will be presented to illustrate the expected performance of the instrument.

  8. A portable W-band radar system for enhancement of infrared vision in fire fighting operations

    NASA Astrophysics Data System (ADS)

    Klenner, Mathias; Zech, Christian; Hülsmann, Axel; Kühn, Jutta; Schlechtweg, Michael; Hahmann, Konstantin; Kleiner, Bernhard; Ulrich, Michael; Ambacher, Oliver

    2016-10-01

    In this paper, we present a millimeter wave radar system which will enhance the performance of infrared cameras used for fire-fighting applications. The radar module is compact and lightweight such that the system can be combined with inertial sensors and integrated in a hand-held infrared camera. This allows for precise distance measurements in harsh environmental conditions, such as tunnel or industrial fires, where optical sensors are unreliable or fail. We discuss the design of the RF front-end, the antenna and a quasi-optical lens for beam shaping as well as signal processing and demonstrate the performance of the system by in situ measurements in a smoke filled environment.

  9. Can we detect, monitor, and characterize volcanic activity using 'off the shelf' webcams and low-light cameras?

    NASA Astrophysics Data System (ADS)

    Harrild, M.; Webley, P. W.; Dehn, J.

    2015-12-01

    The ability to detect and monitor precursory events, thermal signatures, and ongoing volcanic activity in near-realtime is an invaluable tool. Volcanic hazards often range from low-level lava effusion to large explosive eruptions, easily capable of ejecting ash to aircraft cruise altitudes. Using ground-based remote sensing to detect and monitor this activity is essential, but the required equipment is often expensive and difficult to maintain, which increases the risk to public safety and the likelihood of financial impact. Our investigation explores the use of 'off the shelf' cameras, ranging from computer webcams to low-light security cameras, to monitor volcanic incandescent activity in near-realtime. These cameras are ideal as they operate in the visible and near-infrared (NIR) portions of the electromagnetic spectrum, are relatively cheap to purchase, consume little power, are easily replaced, and can provide telemetered, near-realtime data. We focus on the early detection of volcanic activity, using automated scripts that capture streaming online webcam imagery and evaluate each image according to pixel brightness, in order to automatically detect and identify increases in potentially hazardous activity. The cameras used here range in price from $0 to $1,000, and the script is written in Python, an open source programming language, to reduce the overall cost to potential users and increase the accessibility of these tools, particularly in developing nations. In addition, by performing laboratory tests to determine the spectral response of these cameras, a direct comparison of collocated low-light and thermal infrared cameras has allowed approximate eruption temperatures to be correlated to pixel brightness. Data collected from several volcanoes: (1) Stromboli, Italy; (2) Shiveluch, Russia; (3) Fuego, Guatemala; and (4) Popocatépetl, México, along with campaign data from Stromboli (June 2013) and laboratory tests, are presented here.
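
    The sketch below captures the core of the approach described above, in Python as in the original scripts: fetch the latest webcam frame from a URL and raise an alert when the fraction of bright pixels exceeds a threshold. The URL and both thresholds are placeholders, not values from the study.

```python
import io
import urllib.request

import numpy as np
from PIL import Image

WEBCAM_URL = "http://example.org/volcano_cam/latest.jpg"   # placeholder
PIXEL_THRESHOLD = 200        # 8-bit brightness treated as "incandescent"
FRACTION_THRESHOLD = 0.001   # alert if more than 0.1% of pixels exceed it

def check_frame(url=WEBCAM_URL):
    """Download one frame, convert to grayscale and return (bright fraction, alert flag)."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        data = resp.read()
    frame = np.asarray(Image.open(io.BytesIO(data)).convert("L"), dtype=np.uint8)
    bright_fraction = float((frame >= PIXEL_THRESHOLD).mean())
    return bright_fraction, bright_fraction >= FRACTION_THRESHOLD

if __name__ == "__main__":
    fraction, alert = check_frame()
    print(f"bright-pixel fraction: {fraction:.5f}, alert: {alert}")
```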

  10. Volcanic Cloud and Aerosol Monitor (VOLCAM) for Deep Space Gateway

    NASA Astrophysics Data System (ADS)

    Krotkov, N.; Bhartia, P. K.; Torres, O.; Li, C.; Sander, S.; Realmuto, V.; Carn, S.; Herman, J.

    2018-02-01

    We propose complementary ultraviolet (UV) and thermal Infrared (TIR) filter cameras for a dual-purpose whole Earth imaging with complementary natural hazards applications and Earth system science goals.

  11. The Legal Implications of Surveillance Cameras

    ERIC Educational Resources Information Center

    Steketee, Amy M.

    2012-01-01

    The nature of school security has changed dramatically over the last decade. Schools employ various measures, from metal detectors to identification badges to drug testing, to promote the safety and security of staff and students. One of the increasingly prevalent measures is the use of security cameras. In fact, the U.S. Department of Education…

  12. Emergency Assessment of Postfire Debris-Flow Hazards for the 2009 Station Fire, San Gabriel Mountains, Southern California

    USGS Publications Warehouse

    Cannon, Susan H.; Gartner, Joseph E.; Rupert, Michael G.; Michael, John A.; Staley, Dennis M.; Worstell, Bruce B.

    2009-01-01

    This report presents an emergency assessment of potential debris-flow hazards from basins burned by the 2009 Station fire in Los Angeles County, southern California. Statistical-empirical models developed for postfire debris flows are used to estimate the probability and volume of debris-flow production from 678 drainage basins within the burned area and to generate maps of areas that may be inundated along the San Gabriel mountain front by the estimated volume of material. Debris-flow probabilities and volumes are estimated as combined functions of different measures of basin burned extent, gradient, and material properties in response to both a 3-hour-duration, 1-year-recurrence thunderstorm and to a 12-hour-duration, 2-year recurrence storm. Debris-flow inundation areas are mapped for scenarios where all sediment-retention basins are empty and where the basins are all completely full. This assessment provides critical information for issuing warnings, locating and designing mitigation measures, and planning evacuation timing and routes within the first two winters following the fire. Tributary basins that drain into Pacoima Canyon, Big Tujunga Canyon, Arroyo Seco, West Fork of the San Gabriel River, and Devils Canyon were identified as having probabilities of debris-flow occurrence greater than 80 percent, the potential to produce debris flows with volumes greater than 100,000 m3, and the highest Combined Relative Debris-Flow Hazard Ranking in response to both storms. The predicted high probability and large magnitude of the response to such short-recurrence storms indicates the potential for significant debris-flow impacts to any buildings, roads, bridges, culverts, and reservoirs located both within these drainages and downstream from the burned area. These areas will require appropriate debris-flow mitigation and warning efforts. Probabilities of debris-flow occurrence greater than 80 percent, debris-flow volumes between 10,000 and 100,000 m3, and high Combined Relative Debris-Flow Hazard Rankings were estimated in response to both short recurrence-interval (1- and 2-year) storms for all but the smallest basins along the San Gabriel mountain front between Big Tujunga Canyon and Arroyo Seco. The combination of high probabilities and large magnitudes determined for these basins indicates significant debris-flow hazards for neighborhoods along the mountain front. When the capacity of sediment-retention basins is exceeded, debris flows may be deposited in neighborhoods and streets and impact infrastructure between the mountain front and Foothill Boulevard. In addition, debris flows may be deposited in neighborhoods immediately below unprotected basins. Hazards to neighborhoods and structures at risk from these events will require appropriate debris-flow mitigation and warning efforts.

  13. Decreased pain sensitivity due to trimethylbenzene exposure: case study on quantitative approaches for hazard identification

    EPA Science Inventory

    Traditionally, human health risk assessments have relied on qualitative approaches for hazard identification, often using the Hill criteria and weight of evidence determinations to integrate data from multiple studies. Recently, the National Research Council has recommended the ...

  14. 21 CFR 120.7 - Hazard analysis.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 2 2012-04-01 2012-04-01 false Hazard analysis. 120.7 Section 120.7 Food and... hazards. The written hazard analysis shall consist of at least the following: (1) Identification of food..., including food hazards that can occur before, during, and after harvest. The hazard analysis shall be...

  15. 21 CFR 120.7 - Hazard analysis.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 2 2013-04-01 2013-04-01 false Hazard analysis. 120.7 Section 120.7 Food and... hazards. The written hazard analysis shall consist of at least the following: (1) Identification of food..., including food hazards that can occur before, during, and after harvest. The hazard analysis shall be...

  16. 21 CFR 120.7 - Hazard analysis.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 2 2011-04-01 2011-04-01 false Hazard analysis. 120.7 Section 120.7 Food and... hazards. The written hazard analysis shall consist of at least the following: (1) Identification of food..., including food hazards that can occur before, during, and after harvest. The hazard analysis shall be...

  17. 21 CFR 120.7 - Hazard analysis.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 2 2014-04-01 2014-04-01 false Hazard analysis. 120.7 Section 120.7 Food and... hazards. The written hazard analysis shall consist of at least the following: (1) Identification of food..., including food hazards that can occur before, during, and after harvest. The hazard analysis shall be...

  18. Time-resolved spectral investigations of laser light induced microplasma

    NASA Astrophysics Data System (ADS)

    Nánai, L.; Hevesi, I.

    1992-01-01

    The dynamical and spectral properties of an optical breakdown microplasma created by pulses of different lasers on surfaces of insulators (KCl), metals (Cu) and semiconductors (V2O5) have been investigated. Experiments were carried out in air and vacuum using different wavelengths (λ = 0.694 μm, type OGM-20; λ = 1.06 μm, with a home-made laser based on neodymium glass; and λ = 10.6 μm, similarly home-made) and pulse durations (Q-switched and free-running regimes). To follow the integral, dynamical and spectral characteristics of the luminous spot of the microplasma we have used fast cameras (SFR-2M, IMACON-HADLAND), a high speed spectral camera (AGAT-2) and a spectrograph (STE-1). It has been shown that the microplasma consists of two parts: a fast front (peak) with a duration of τ ≈ 100 ns and a slow front (tail) with a duration of τ ≈ 1 μs. The detonation front speed is of the order of 10⁵ cm s⁻¹ and follows a temporal dependence of t^0.4. It depends on the composition of the surrounding gas and its pressure and could be connected with quick evaporation of the investigated material (peak) and optical breakdown of the ambient gaseous atmosphere (tail). From the delay in the appearance of different characteristic spectral lines of the target material and its gaseous surroundings we have shown that the evolution of the microplasma involves evaporation and ionization of the atoms of the parent material followed by optical breakdown due to the incident and absorbed laser light, together with microplasma expansion.

  19. An alternative approach to depth of field which avoids the blur circle and uses the pixel pitch

    NASA Astrophysics Data System (ADS)

    Schuster, Norbert

    2015-09-01

    Modern thermal imaging systems increasingly use uncooled detectors. High-volume applications work with detectors of reduced pixel count (typically between 200x150 and 640x480), which limits the applicability of modern image treatment procedures such as wavefront coding. At the same time, uncooled detectors demand lenses with fast F-numbers near 1.0. What, then, are the limits on resolution if the target to be analyzed changes its distance to the camera system? The aim of implementing lens arrangements without any focusing mechanism demands a deeper quantification of the Depth of Field problem. The proposed Depth of Field approach avoids the classic "accepted image blur circle". It is based on a camera-specific depth of focus, which is transformed into object space by paraxial relations. The traditional Rayleigh criterion is based on the unaberrated Point Spread Function and delivers a first-order relation for the depth of focus; hence, neither the actual lens resolution nor the detector's impact is considered. The camera-specific depth of focus respects several camera properties: lens aberrations at the actual F-number, detector size and pixel pitch. The through-focus MTF, considered at the detector's Nyquist frequency, is the basis of the camera-specific depth of focus; it has a nearly symmetric course around the position of sharpest imaging. The camera-specific depth of focus is the axial distance in front of and behind the sharp image plane bounded by the points where the through-focus MTF falls to 0.25. This camera-specific depth of focus is transferred into object space by paraxial relations, yielding a generally applicable Depth of Field diagram that can be applied to lenses realizing a lateral magnification range of -0.05…0. Easy-to-handle formulas are provided relating the hyperfocal distance to the borders of the Depth of Field as a function of the sharp distance; these relations are in line with classical Depth of Field theory. Thermal pictures taken by different IR camera cores illustrate the new approach. The frequently requested graph "MTF versus distance" chooses half the Nyquist frequency as reference. The paraxial transfer of the through-focus MTF into object space distorts the MTF curve: a hard drop at distances closer than the sharp distance and a smooth drop at further distances. The formula for a general Diffraction-Limited Through-Focus MTF (DLTF) is deduced, so that arbitrary detector-lens combinations can be discussed; the free variables in this analysis are the waveband, the aperture-based F-number (lens) and the pixel pitch (detector). The DLTF discussion provides physical limits and technical requirements. Detector development with pixel pitches smaller than the captured wavelength in the LWIR region poses a special challenge for optical design.
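
    For comparison with the camera-specific approach summarized above, the sketch below evaluates the classical depth-of-field relations, taking the accepted blur circle equal to the detector pixel pitch; this substitution and the example lens values are assumptions made here for illustration, not the author's derivation.

```python
def hyperfocal(f_mm, f_number, coc_mm):
    """Classical hyperfocal distance H = f^2 / (N * c) + f, all lengths in mm."""
    return f_mm ** 2 / (f_number * coc_mm) + f_mm

def depth_of_field(f_mm, f_number, coc_mm, subject_mm):
    """Return (near limit, far limit) of sharpness in mm; the far limit may be infinite."""
    h = hyperfocal(f_mm, f_number, coc_mm)
    near = h * subject_mm / (h + (subject_mm - f_mm))
    far = float("inf") if subject_mm >= h else h * subject_mm / (h - (subject_mm - f_mm))
    return near, far

# Example: 13 mm LWIR lens at F/1.0, 17 um pixel pitch used as the blur circle,
# focused at 5 m.
print(hyperfocal(13.0, 1.0, 0.017))              # ~9954 mm, i.e. about a 10 m hyperfocal distance
print(depth_of_field(13.0, 1.0, 0.017, 5000.0))  # near ~3.3 m, far ~10 m
```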

  20. Camera sensor arrangement for crop/weed detection accuracy in agronomic images.

    PubMed

    Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo

    2013-04-02

    In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning on the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination existing in outdoor environments is also an important factor affecting image accuracy. This paper is exclusively focused on two main issues, always with the goal of achieving the highest image accuracy in Precision Agriculture applications, and makes the following two main contributions: (a) camera sensor arrangement, to adjust the extrinsic parameters, and (b) the design of strategies for controlling adverse illumination effects.
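
    As a small illustration of the extrinsic side of the arrangement problem, the sketch below computes the strip of ground covered by the vertical field of view for a camera mounted at a given height and pitch on the tractor; the mounting values are illustrative and the simple pinhole model ignores lens distortion.

```python
import math

def ground_coverage(height_m, pitch_deg, vfov_deg):
    """Return (near, far) ground distances in metres imaged by the camera.
    pitch_deg is the depression of the optical axis below horizontal."""
    top = math.radians(pitch_deg - vfov_deg / 2.0)     # ray through the top of the image
    bottom = math.radians(pitch_deg + vfov_deg / 2.0)  # ray through the bottom of the image
    far = math.inf if top <= 0 else height_m / math.tan(top)
    near = height_m / math.tan(bottom)
    return near, far

# Camera 2 m above the ground, pitched 25 degrees down, 30 degree vertical FOV.
print(ground_coverage(2.0, 25.0, 30.0))   # roughly 2.4 m to 11.3 m ahead of the tractor
```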

  1. Recognition and Matching of Clustered Mature Litchi Fruits Using Binocular Charge-Coupled Device (CCD) Color Cameras

    PubMed Central

    Wang, Chenglin; Tang, Yunchao; Zou, Xiangjun; Luo, Lufeng; Chen, Xiong

    2017-01-01

    Recognition and matching of litchi fruits are critical steps for litchi harvesting robots to successfully grasp litchi. However, due to the randomness of litchi growth, such as clustered growth with an uncertain number of fruits and random occlusion by leaves, branches and other fruits, the recognition and matching of the fruit become a challenge. Therefore, this study first defined mature litchi fruit in terms of three clustered categories. An approach for recognition and matching of clustered mature litchi fruit was then developed based on litchi color images acquired by binocular charge-coupled device (CCD) color cameras. The approach mainly included three steps: (1) calibration of the binocular color cameras and litchi image acquisition; (2) segmentation of litchi fruits using four kinds of supervised classifiers, and recognition of the pre-defined categories of clustered litchi fruit using a pixel threshold method; and (3) matching the recognized clustered fruit using a geometric center-based matching method. The experimental results showed that the proposed recognition method was robust against the influences of varying illumination and occlusion conditions and precisely recognized clustered litchi fruit. Among the 432 clustered litchi fruits tested, the highest and lowest average recognition rates were 94.17% and 92.00%, under sunny back-lighting with partial occlusion and sunny front-lighting with non-occlusion conditions, respectively. From 50 pairs of tested images, the highest and lowest matching success rates were 97.37% and 91.96%, under sunny back-lighting with non-occlusion and sunny front-lighting with partial occlusion conditions, respectively. PMID:29112177
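
    A simplified sketch of the geometric-center matching idea is shown below: cluster centroids detected in the left and right CCD images are paired by the closest centre, assuming rectified images so that matching centres lie on similar rows. The distance measure and row tolerance are illustrative, not the exact method of the paper.

```python
import numpy as np

def match_centers(left_centers, right_centers, max_row_diff=10.0):
    """left_centers/right_centers: arrays of (row, col) centroids. Returns index pairs."""
    pairs = []
    used = set()
    for i, (rl, cl) in enumerate(left_centers):
        best_j, best_cost = None, float("inf")
        for j, (rr, cr) in enumerate(right_centers):
            if j in used or abs(rl - rr) > max_row_diff:
                continue                              # enforce the (near-)epipolar constraint
            cost = abs(rl - rr) + abs(cl - cr)        # simple city-block distance of centres
            if cost < best_cost:
                best_j, best_cost = j, cost
        if best_j is not None:
            pairs.append((i, best_j))
            used.add(best_j)
    return pairs

left = np.array([[120.0, 300.0], [240.0, 410.0]])
right = np.array([[238.0, 380.0], [119.0, 272.0]])
print(match_centers(left, right))   # [(0, 1), (1, 0)]
```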

  2. Synchro-ballistic recording of detonation phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Critchfield, R.R.; Asay, B.W.; Bdzil, J.B.

    1997-09-01

    Synchro-ballistic use of rotating-mirror streak cameras allows for detailed recording of high-speed events of known velocity and direction. After an introduction to the synchro-ballistic technique, this paper details two diverse applications of the technique as applied in the field of high-explosives research. In the first series of experiments, detonation-front shape is recorded as the arriving detonation shock wave tilts an obliquely mounted mirror, causing reflected light to be deflected from the imaging lens. These tests were conducted for the purpose of calibrating and confirming the asymptotic Detonation Shock Dynamics (DSD) theory of Bdzil and Stewart. The phase velocities of the events range from ten to thirty millimeters per microsecond. Optical magnification is set for optimal use of the film's spatial dimension, and the phase velocity is adjusted to provide synchronization at the camera's maximum writing speed. Initial calibration of the technique is undertaken using a cylindrical HE geometry over a range of charge diameters and of sufficient length-to-diameter ratio to ensure a stable detonation wave. The final experiment utilizes an arc-shaped explosive charge, resulting in an asymmetric detonation-front record. The second series of experiments consists of photographing a shaped-charge jet having a velocity range of two to nine millimeters per microsecond. To accommodate the range of velocities it is necessary to fire several tests, each synchronized to a different section of the jet. The experimental apparatus consists of a vacuum chamber to preclude atmospheric ablation of the jet tip, with shocked-argon back lighting to produce a shadowgraph image.
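
    The synchronization condition described above reduces to matching the image sweep speed to the camera writing speed, as in the small sketch below; the numerical values are illustrative, not taken from the experiments.

```python
def required_magnification(writing_speed_mm_per_us, phase_velocity_mm_per_us):
    """Choose the optical magnification so the image of the event sweeps the film
    at the camera's writing speed: m = writing speed / event phase velocity."""
    return writing_speed_mm_per_us / phase_velocity_mm_per_us

# Example: a 20 mm/us maximum writing speed recording a front sweeping at 25 mm/us.
print(required_magnification(20.0, 25.0))   # 0.8x magnification
```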

  3. A Fisheries Application of a Dual-Frequency Identification Sonar Acoustic Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moursund, Russell A.; Carlson, Thomas J.; Peters, Rock D.

    2003-06-01

    The uses of an acoustic camera in fish passage research at hydropower facilities are being explored by the U.S. Army Corps of Engineers. The Dual-Frequency Identification Sonar (DIDSON) is a high-resolution imaging sonar that obtains near video-quality images for the identification of objects underwater. Developed originally for the Navy by the University of Washington's Applied Physics Laboratory, it bridges the gap between existing fisheries assessment sonar and optical systems. Traditional fisheries assessment sonars detect targets at long ranges but cannot record the shape of targets. The images within 12 m of this acoustic camera are so clear that one can see fish undulating as they swim and can tell the head from the tail in otherwise zero-visibility water. In the 1.8 MHz high-frequency mode, this system is composed of 96 beams over a 29-degree field of view. This high resolution and a fast frame rate allow the acoustic camera to produce near video-quality images of objects through time. This technology redefines many of the traditional limitations of sonar for fisheries and aquatic ecology. Images can be taken of fish in confined spaces, close to structural or surface boundaries, and in the presence of entrained air. The targets themselves can be visualized in real time. The DIDSON can be used where conventional underwater cameras would be limited in sampling range to < 1 m by low light levels and high turbidity, and where traditional sonar would be limited by the confined sample volume. Results of recent testing at The Dalles Dam, on the lower Columbia River in Oregon, USA, are shown.

  4. 40 CFR 270.25 - Specific part B information requirements for equipment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... requirements for equipment. 270.25 Section 270.25 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... subpart BB of part 264 applies: (1) Equipment identification number and hazardous waste management unit identification. (2) Approximate locations within the facility (e.g., identify the hazardous waste management unit...

  5. CHARACTERIZATION OF A LOSS OF HETEROZYGOSITY CANCER HAZARD IDENTIFICATION ASSAY.

    EPA Science Inventory

    Tumor development generally requires the loss of heterozygosity (LOH) at one or more loci. Thus, the ability to determine whether a chemical is capable of causing LOH is an important part of cancer hazard identification. The mouse lymphoma assay detects a broad spectrum of geneti...

  6. Structured Light-Based Hazard Detection For Planetary Surface Navigation

    NASA Technical Reports Server (NTRS)

    Nefian, Ara; Wong, Uland Y.; Dille, Michael; Bouyssounouse, Xavier; Edwards, Laurence; To, Vinh; Deans, Matthew; Fong, Terry

    2017-01-01

    This paper describes a structured light-based sensor for hazard avoidance in planetary environments. The system presented here can also be used in terrestrial applications constrained by reduced onboard power, limited computational resources and low-illumination conditions. The sensor is based on a calibrated camera and laser dot projector system. The onboard hazard avoidance system determines the position of the projected dots in the image and, through a triangulation process, detects potential hazards. The paper presents the design parameters for this sensor and describes the image-based solution for hazard avoidance. The system presented here was tested extensively in day and night conditions in lunar analogue environments. The current system achieves an over 97% detection rate with 1.7% false alarms over 2000 images.
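
    A minimal sketch of the dot triangulation underlying such a sensor is given below: the camera and laser projector share a known baseline, each dot defines one ray from each device, and the intersection gives range, which can then be compared with a flat-ground prediction to flag a hazard. The geometry and thresholds are illustrative, not the flight system's parameters.

```python
import math

def triangulate_depth(baseline_m, cam_angle_rad, proj_angle_rad):
    """Depth of the dot, with each angle measured from that device's optical axis
    toward the other device (camera and projector assumed parallel and coplanar)."""
    return baseline_m / (math.tan(cam_angle_rad) + math.tan(proj_angle_rad))

def is_hazard(measured_depth_m, expected_flat_depth_m, tolerance_m=0.10):
    """Flag a dot whose measured range departs from the flat-ground prediction."""
    return abs(measured_depth_m - expected_flat_depth_m) > tolerance_m

z = triangulate_depth(baseline_m=0.20, cam_angle_rad=math.radians(4.0),
                      proj_angle_rad=math.radians(3.0))
print(z, is_hazard(z, expected_flat_depth_m=2.0))   # ~1.6 m, flagged against a 2 m prediction
```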

  7. Identification and assessment of hazardous compounds in drinking water.

    PubMed

    Fawell, J K; Fielding, M

    1985-12-01

    The identification of organic chemicals in drinking water and their assessment in terms of potential hazardous effects are two very different but closely associated tasks. In relation to both continuous low-level background contamination and specific, often high-level, contamination due to pollution incidents, the identification of contaminants is a pre-requisite to evaluation of significant hazards. Even in the case of the rapidly developing short-term bio-assays which are applied to water to indicate a potential genotoxic hazard (for example Ames tests), identification of the active chemicals is becoming a major factor in the further assessment of the response. Techniques for the identification of low concentrations of organic chemicals in drinking water have developed remarkably since the early 1970s and methods based upon gas chromatography-mass spectrometry (GC-MS) have revolutionised qualitative analysis of water. Such techniques are limited to "volatile" chemicals and these usually constitute a small fraction of the total organic material in water. However, in recent years there have been promising developments in techniques for "non-volatile" chemicals in water. Such techniques include combined high-performance liquid chromatography-mass spectrometry (HPLC-MS) and a variety of MS methods, involving, for example, field desorption, fast atom bombardment and thermospray ionisation techniques. In the paper identification techniques in general are reviewed and likely future developments outlined. The assessment of hazards associated with chemicals identified in drinking and related waters usually centres upon toxicology - an applied science which involves numerous disciplines. The paper examines the toxicological information needed, the quality and deployment of such information and discusses future research needs. Application of short-term bio-assays to drinking water is a developing area and one which is closely involved with, and to some extent dependent on, powerful methods of identification. Recent developments are discussed.

  8. Spectral karyotyping (SKY) analysis of heritable effects of radiation-induced malignant transformation

    NASA Astrophysics Data System (ADS)

    Zitzelsberger, Horst; Fung, Jingly; Janish, C.; McNamara, George; Bryant, P. E.; Riches, A. C.; Weier, Heinz-Ulli G.

    1999-05-01

    Radiocarcinogenesis is widely recognized as an occupational, environmental and therapeutic hazard, but the underlying mechanisms and cellular targets have not yet been identified. We applied SKY to study chromosomal rearrangements leading to malignant transformation of irradiated thyroid epithelial cells. SKY is a recently developed technique to detect translocations involving non-homologous chromosomes, based on unique staining of all 24 human chromosomes by hybridization with a mixture of whole chromosome painting probes. A tuneable interferometer mounted on a fluorescence microscope in front of a CCD camera allows the 400 nm - 1000 nm fluorescence spectrum to be recorded for each pixel in the image. After background correction, spectra recorded for each pixel are compared to reference spectra stored previously for each chromosome-specific probe. Thus, pixel spectra can be associated with specific chromosomes and displayed in 'classification' colors, which are defined so that even small translocations become readily discernible. SKY analysis was performed on several radiation-transformed cell lines. Line S48T was generated from a primary tumor of a child exposed to elevated levels of radiation following the Chernobyl nuclear accident. Subclones were generated from the human thyroid epithelial cell line (HTori-3) by exposure to gamma or alpha irradiation. SKY analysis revealed multiple translocations and, combined with G-banding, allowed the definition of targets for positional cloning of tumor-related genes.

  9. Off-Nominal Performance of the International Space Station Solar Array Wings Under Orbital Eclipse Lighting Scenarios

    NASA Technical Reports Server (NTRS)

    Kerslake, Thomas W.; Scheiman, David A.

    2005-01-01

    This paper documents testing and analyses to quantify International Space Station (ISS) Solar Array Wing (SAW) string electrical performance under highly off-nominal, low-temperature-low-intensity (LILT) operating conditions with nonsolar light sources. This work is relevant for assessing feasibility and risks associated with a Sequential Shunt Unit (SSU) remove and replace (R&R) Extravehicular Activity (EVA). During eclipse, SAW strings can be energized by moonlight, EVA suit helmet lights or video camera lights. To quantify SAW performance under these off-nominal conditions, solar cell performance testing was performed using full moon, solar simulator and Video Camera Luminaire (VCL) light sources. Test conditions included 25 to 110 C temperatures and 1- to 0.0001-Sun illumination intensities. Electrical performance data and calculated eclipse lighting intensities were combined to predict SAW current-voltage output for comparison with electrical hazard thresholds. Worst case predictions show there is no connector pin molten metal hazard but crew shock hazard limits are exceeded due to VCL illumination. Assessment uncertainties and limitations are discussed along with operational solutions to mitigate SAW electrical hazards from VCL illumination. Results from a preliminary assessment of SAW arcing are also discussed. The authors recommend further analyses once SSU, R&R, and EVA procedures are better defined.

  10. EPCRA Tier II Emergency and Hazardous Chemical Inventory Form

    EPA Pesticide Factsheets

    Required for Emergency and Hazardous Chemical Inventory reporting. Must provide facility identification, chemical description, indication of physical and health hazards, inventory information, and storage details.

  11. 78 FR 743 - Changes in Flood Hazard Determinations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-04

    ... [Flattened table excerpt listing chief executive officers and addresses of affected communities, including the Board of Commissioners, 200 West Front Street, Boise, ID 83702, and Thomas M. McDermott, Jr., Mayor, City of Hammond, IN 46320, together with links to Letter of Map Revision (LOMR) case details on the FEMA regional pages.] ...

  12. Predation by Red Foxes (Vulpes vulpes) at an Outdoor Piggery.

    PubMed

    Fleming, Patricia A; Dundas, Shannon J; Lau, Yvonne Y W; Pluske, John R

    2016-10-08

    Outdoor pig operations are an alternative to intensive systems of raising pigs; however, for the majority of outdoor pork producers, issues of biosecurity and predation control require significant management and (or) capital investment. Identifying and quantifying predation risk in outdoor pork operations has rarely been done, but such data would be informative for these producers as part of their financial and logistical planning. We quantified the potential impact of fox predation on piglets bred on an outdoor pork operation in south-western Australia. We used remote sensor cameras at select sites across the farm as well as above farrowing huts to record interactions between predators and pigs (sows and piglets). We also identified animal losses from breeding records, calculating weaning rate as a proportion of piglets born. Although only a few piglets were recorded as lost to fox predation (recorded by piggery staff as carcasses that were "chewed"), it is likely that foxes were contributing substantially to the 20% of piglets that were reported "missing". Both sets of cameras recorded a high incidence of fox activity; foxes appeared on camera soon after staff left for the day, were observed tracking and taking live piglets (despite the presence of sows), and removed dead carcasses from in front of the cameras. Newly born and younger piglets appeared to be the most vulnerable, especially when born out in the paddock, but older piglets were also lost. A significant (p = 0.001) effect of individual sow identification on weaning rate, but no effect of sow age (parity), suggests that individual sow behavior towards predators influences predation risk for litters. We tracked the movement of piglet carcasses by foxes and confirmed that foxes make use of patches of native vegetation for cover, although there was no effect of paddock, distance to vegetation, or position on the farm on weaning rate. Trials with non-toxic baits revealed high levels of non-target bait interference. Other management options are recommended, including removing hay from the paddocks to reduce the risk of sows farrowing in open paddocks, and covering or predator-proof fencing the pig carcass pit. Results of this study will have increasing relevance for the expanding outdoor/free-range pork industry, contributing to best practice guidelines for predator control.

  13. PBF Cooling Tower under construction. Cold water basin is five ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    PBF Cooling Tower under construction. Cold water basin is five feet deep. Foundation and basin walls are reinforced concrete. Camera facing west. Pipe openings through wall in front are outlets for return flow of cool water to reactor building. Photographer: John Capek. Date: September 4, 1968. INEEL negative no. 68-3473 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID

  14. STS-41 crewmembers pose on OV-103's middeck for inflight (in-space) portrait

    NASA Image and Video Library

    1990-10-10

    STS041-26-007 (6-10 Oct 1990) --- A 35mm preset camera on Discovery's middeck captures the traditional in-space portrait of the STS-41 crewmembers. In front are (l.-r.) Astronauts Richard N. Richards, mission commander; and Robert D. Cabana, pilot. In the rear are (l.-r.) Astronauts Thomas D. Akers, Bruce E. Melnick and William M. Shepherd.

  15. A&M. Radioactive parts security storage warehouses: TAN648 on left, and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    A&M. Radioactive parts security storage warehouses: TAN-648 on left, and dolly storage building, TAN-647, on right. Camera facing south. This was the front entry for the warehouse and the rear of the dolly storage building. Date: August 6, 2003. INEEL negative no. HD-36-2-2 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  16. Opportunity Egress Aid Contacts Soil

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This image from the navigation camera on the Mars Exploration Rover Opportunity shows the rover's egress aid touching the martian soil at Meridiani Planum, Mars. The image was taken after the rear lander petal hyperextended in a maneuver to tilt the lander forward. The maneuver pushed the front edge lower, placing the tips of the egress aids in the soil. The rover will drive straight ahead to exit the lander.

  17. Voss videotapes the STS-105 crewmembers in the U.S. Laboratory

    NASA Image and Video Library

    2001-08-17

    ISS003-E-5188 (17 August 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, photographs astronauts Scott J. Horowitz (front left), STS-105 mission commander, Frederick W. (Rick) Sturckow, pilot, Daniel T. Barry (back left), and Patrick G. Forrester, both mission specialists, in the Destiny laboratory on the International Space Station (ISS). This image was taken with a digital still camera.

  18. Gulf of Antalya, Southern Turkish Coastline

    NASA Image and Video Library

    1984-10-13

    41G-120-053 (5-13 Oct. 1984) --- Turkey and a portion of the Mediterranean Sea, with the city of Antalya visible, were photographed with a medium format camera during the 41-G mission aboard the space shuttle Challenger. Numerous eddies and an ocean front can be observed in the sun's glint off the water's surface. The folded mountains indicate the rugged topography in this region. Photo credit: NASA

  19. STS-30 crewmembers pose for onboard portrait on OV-104's aft flight deck

    NASA Image and Video Library

    1989-05-08

    STS030-21-008 (4-8 May 1989) --- A traditional in-space crew portrait for STS-30 aboard the Atlantis. Astronaut Mary L. Cleave is in front. Others pictured, left to right, are astronauts Norman E. Thagard, Ronald J. Grabe, David M. Walker and Mark C. Lee. An automatic, pre-set 35mm camera using color negative film recorded the scene.

  20. Transformation optics with windows

    NASA Astrophysics Data System (ADS)

    Oxburgh, Stephen; White, Chris D.; Antoniou, Georgios; Orife, Ejovbokoghene; Courtial, Johannes

    2014-09-01

    Identity certification in the cyberworld has always been troublesome when critical information and financial transactions must be processed. Biometric identification is the most effective measure to circumvent identity issues on mobile devices. Because of their bulky and pricey optical designs, conventional optical fingerprint readers have been discarded for mobile applications. In this paper, a digital variable-focus liquid lens was adopted to capture a floating finger via fast focus-plane scanning. Simply putting a finger in front of a camera fulfills the fingerprint ID process. This prototyped fingerprint reader scans multiple focal planes from 30 mm to 15 mm in 0.2 second. From the multiple images captured at various focuses, one image is chosen for extraction of the fingerprint minutiae used for identity certification. In the optical design, a digital liquid lens atop a webcam with a fixed-focus lens module fast-scans a floating finger at preset focus planes. The distance, rolling angle and pitching angle of the finger are stored as crucial parameters for the matching process of fingerprint minutiae. This innovative compact touchless fingerprint reader could be packed into a minute size of 9.8 × 9.8 × 5 mm after the optical design and the multiple focus-plane scan function are optimized.
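
    The record states that one frame from the focus-plane scan is chosen for minutiae extraction but does not give the selection criterion. A common, generic approach (assumed here, not taken from the paper) is to rank the frames with a sharpness measure such as the variance of the Laplacian, as in the Python sketch below.

```python
# Hedged sketch: pick the sharpest frame from a focus-plane scan using the
# variance-of-Laplacian focus measure. This is a generic technique, not
# necessarily the selection method used by the reader described above.
import numpy as np
from scipy.ndimage import laplace

def focus_measure(gray):
    """Higher variance of the Laplacian response indicates a sharper image."""
    return laplace(gray.astype(float)).var()

def select_sharpest(frames):
    """frames: iterable of 2-D grayscale arrays captured at different focus planes."""
    scores = [focus_measure(f) for f in frames]
    return int(np.argmax(scores)), scores

# Example with synthetic frames (random noise stands in for real captures):
rng = np.random.default_rng(0)
frames = [rng.random((240, 320)) for _ in range(5)]
idx, scores = select_sharpest(frames)
print("sharpest frame index:", idx)
```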

  1. Digital photorefraction

    NASA Astrophysics Data System (ADS)

    Costa, Manuel F. M.; Jorge, Jorge M.

    1998-01-01

    The early evaluation of the visual status of human infants is of critical importance. It is of utmost importance to the development of the child's visual system that she perceives clear, focused retinal images. Furthermore, if refractive problems are not corrected in due time, amblyopia may occur. Photorefraction is a non-invasive clinical tool that is rather convenient for application to this kind of population. Qualitative or semi-quantitative information about refractive errors, accommodation, strabismus, amblyogenic factors and some pathologies (cataracts) can then be easily obtained. The photorefraction experimental setup we established, using new technological breakthroughs in the fields of imaging devices, image processing and fiber optics, allows the implementation of both the isotropic and eccentric photorefraction approaches. Essentially, both methods consist of delivering a light beam into the eyes. It is refracted by the ocular media, strikes the retina, focusing or not, reflects off and is collected by a camera. The system is formed by one CCD color camera and a light source. A beam splitter in front of the camera's objective allows coaxial illumination and observation. An optomechanical system also allows eccentric illumination. The light source is a flash type and is synchronized with the camera's image acquisition. The camera's image is digitized and displayed in real time. Image processing routines are applied for image enhancement and feature extraction.

  2. Digital photorefraction

    NASA Astrophysics Data System (ADS)

    Costa, Manuel F.; Jorge, Jorge M.

    1997-12-01

    The early evaluation of the visual status of human infants is of critical importance. It is of utmost importance to the development of the child's visual system that she perceives clear, focused retinal images. Furthermore, if refractive problems are not corrected in due time, amblyopia may occur. Photorefraction is a non-invasive clinical tool that is rather convenient for application to this kind of population. Qualitative or semi-quantitative information about refractive errors, accommodation, strabismus, amblyogenic factors and some pathologies (cataracts) can then be easily obtained. The photorefraction experimental setup we established, using new technological breakthroughs in the fields of imaging devices, image processing and fiber optics, allows the implementation of both the isotropic and eccentric photorefraction approaches. Essentially, both methods consist of delivering a light beam into the eyes. It is refracted by the ocular media, strikes the retina, focusing or not, reflects off and is collected by a camera. The system is formed by one CCD color camera and a light source. A beam splitter in front of the camera's objective allows coaxial illumination and observation. An optomechanical system also allows eccentric illumination. The light source is a flash type and is synchronized with the camera's image acquisition. The camera's image is digitized and displayed in real time. Image processing routines are applied for image enhancement and feature extraction.

  3. Detection, identification, and quantification techniques for spills of hazardous chemicals

    NASA Technical Reports Server (NTRS)

    Washburn, J. F.; Sandness, G. A.

    1977-01-01

    The first 400 chemicals listed in the Coast Guard's Chemical Hazards Response Information System were evaluated with respect to their detectability, identifiability, and quantifiability by 12 generalized remote and in situ sensing techniques. Identification was also attempted for some key areas in water pollution sensing technology.

  4. 44 CFR 65.16 - Standard Flood Hazard Determination Form and Instructions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 44 Emergency Management and Assistance 1 2010-10-01 2010-10-01 false Standard Flood Hazard... MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program IDENTIFICATION AND MAPPING OF SPECIAL HAZARD AREAS § 65.16 Standard Flood Hazard Determination...

  5. 40 CFR 261.10 - Criteria for identifying the characteristics of hazardous waste.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... characteristics of hazardous waste. 261.10 Section 261.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) IDENTIFICATION AND LISTING OF HAZARDOUS WASTE Criteria for Identifying the Characteristics of Hazardous Waste and for Listing Hazardous Waste § 261.10 Criteria for...

  6. 40 CFR 261.10 - Criteria for identifying the characteristics of hazardous waste.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... characteristics of hazardous waste. 261.10 Section 261.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) IDENTIFICATION AND LISTING OF HAZARDOUS WASTE Criteria for Identifying the Characteristics of Hazardous Waste and for Listing Hazardous Waste § 261.10 Criteria for...

  7. 40 CFR 261.10 - Criteria for identifying the characteristics of hazardous waste.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... characteristics of hazardous waste. 261.10 Section 261.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) IDENTIFICATION AND LISTING OF HAZARDOUS WASTE Criteria for Identifying the Characteristics of Hazardous Waste and for Listing Hazardous Waste § 261.10 Criteria for...

  8. 40 CFR 261.10 - Criteria for identifying the characteristics of hazardous waste.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... characteristics of hazardous waste. 261.10 Section 261.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) IDENTIFICATION AND LISTING OF HAZARDOUS WASTE Criteria for Identifying the Characteristics of Hazardous Waste and for Listing Hazardous Waste § 261.10 Criteria for...

  9. 40 CFR 261.10 - Criteria for identifying the characteristics of hazardous waste.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... characteristics of hazardous waste. 261.10 Section 261.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) IDENTIFICATION AND LISTING OF HAZARDOUS WASTE Criteria for Identifying the Characteristics of Hazardous Waste and for Listing Hazardous Waste § 261.10 Criteria for...

  10. Identification of handwriting by using the genetic algorithm (GA) and support vector machine (SVM)

    NASA Astrophysics Data System (ADS)

    Zhang, Qigui; Deng, Kai

    2016-12-01

    As portable digital cameras and camera phones become more and more popular, there is an equally pressing need to meet people's requirement to shoot at any time and to identify and store handwritten characters. In this paper, a genetic algorithm (GA) and a support vector machine (SVM) are used for the identification of handwriting. Compared with conventional parameter-optimization methods, this technique overcomes two defects: first, the tendency to become trapped in a local optimum; second, the fact that searching for the best parameters over a large range reduces the efficiency of classification and prediction. As the experimental results suggest, GA-SVM has a higher recognition rate.
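
    The record names the GA-SVM combination but gives no implementation detail. The sketch below is one plausible, minimal reading, assuming the GA searches log10(C) and log10(gamma) of an RBF-kernel SVM with cross-validation accuracy as fitness; the dataset (scikit-learn's digits), population size, operators, and search ranges are illustrative choices, not the paper's.

```python
# Minimal GA-SVM sketch: a toy genetic algorithm searches log10(C) and
# log10(gamma) for an RBF-kernel SVM, with cross-validation accuracy as the
# fitness. Scikit-learn's digits set stands in for handwritten characters.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X, y = load_digits(return_X_y=True)

def fitness(ind):
    c, g = 10.0 ** ind[0], 10.0 ** ind[1]
    return cross_val_score(SVC(C=c, gamma=g, kernel="rbf"), X, y, cv=3).mean()

def evolve(pop_size=8, generations=4, bounds=((-2, 3), (-5, 0))):
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random()
            child = w * a + (1 - w) * b                # blend crossover
            child += rng.normal(0, 0.3, size=2)        # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(scores))], scores.max()

best, acc = evolve()
print("best log10(C), log10(gamma):", best, "cv accuracy:", round(acc, 3))
```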

  11. Forensic use of photo response non-uniformity of imaging sensors and a counter method.

    PubMed

    Dirik, Ahmet Emir; Karaküçük, Ahmet

    2014-01-13

    Analogous to the use of bullet scratches in forensic science, the authenticity of a digital image can be verified through the noise characteristics of an imaging sensor. In particular, photo-response non-uniformity noise (PRNU) has been used in source camera identification (SCI). However, this technique can be used maliciously to track or inculpate innocent people. To impede such tracking, PRNU noise should be suppressed significantly. Based on this motivation, we propose a counter-forensic method to deceive SCI. Experimental results show that it is possible to impede PRNU-based camera identification for various imaging sensors while preserving the image quality.
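
    For context on what such a counter-forensic attack must defeat, the following Python sketch outlines a generic PRNU-based source camera identification pipeline, not the one evaluated in this record: noise residuals are extracted with a denoising filter, residuals from one camera are averaged into a fingerprint, and a query image is attributed by normalized correlation. The Gaussian denoiser, synthetic data, and threshold are simplifying assumptions; practical systems use wavelet denoising and peak-to-correlation energy.

```python
# Generic PRNU sketch (not this paper's pipeline): noise residual =
# image - denoised(image); the camera fingerprint is the average residual of
# several images from that camera; attribution uses normalized correlation.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img):
    """Approximate PRNU-bearing residual of a grayscale float image."""
    return img - gaussian_filter(img, sigma=1.5)

def fingerprint(images):
    """Average the residuals of several images from the same camera."""
    return np.mean([noise_residual(i) for i in images], axis=0)

def correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def same_camera(query_img, cam_fingerprint, threshold=0.05):
    """Crude decision rule; real systems use peak-to-correlation energy."""
    return correlation(noise_residual(query_img), cam_fingerprint) > threshold

# Toy usage with synthetic data: a fixed random pattern plays the role of PRNU.
rng = np.random.default_rng(1)
prnu = 0.1 * rng.standard_normal((128, 128))
shots = [rng.random((128, 128)) * (1 + prnu) for _ in range(26)]
fp = fingerprint(shots[:25])
print(same_camera(shots[25], fp), same_camera(rng.random((128, 128)), fp))
```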

  12. Integrating motion-detection cameras and hair snags for wolverine identification

    Treesearch

    Audrey J. Magoun; Clinton D. Long; Michael K. Schwartz; Kristine L. Pilgrim; Richard E. Lowell; Patrick Valkenburg

    2011-01-01

    We developed an integrated system for photographing a wolverine's (Gulo gulo) ventral pattern while concurrently collecting hair for microsatellite DNA genotyping. Our objectives were to 1) test the system on a wild population of wolverines using an array of camera and hair-snag (C&H) stations in forested habitat where wolverines were known to occur, 2)...

  13. A reference estimator based on composite sensor pattern noise for source device identification

    NASA Astrophysics Data System (ADS)

    Li, Ruizhe; Li, Chang-Tsun; Guan, Yu

    2014-02-01

    It has been proved that Sensor Pattern Noise (SPN) can serve as an imaging device fingerprint for source camera identification. Reference SPN estimation is a very important procedure within the framework of this application. Most previous works built the reference SPN by averaging the SPNs extracted from 50 images of blue sky. However, this method can be problematic. Firstly, in practice we may face the problem of source camera identification in the absence of the imaging cameras and reference SPNs, which means only natural images with scene details are available for reference SPN estimation rather than blue sky images. This is challenging because the reference SPN can be severely contaminated by image content. Secondly, the number of available reference images is sometimes too small for existing methods to estimate a reliable reference SPN. In fact, existing methods lack consideration of the number of available reference images, as they were designed for datasets with abundant images for estimating the reference SPN. In order to deal with the aforementioned problems, a novel reference estimator is proposed in this work. Experimental results show that our proposed method achieves better performance than the methods based on the averaged reference SPN, especially when few reference images are used.

  14. On the Importance of "Front-Side Mechanics" in Athletics Sprinting.

    PubMed

    Haugen, Thomas; Danielsen, Jørgen; Alnes, Leif Olav; McGhie, David; Sandbakk, Øyvind; Ettema, Gertjan

    2018-05-16

    Practitioners have, for many years, argued that athletic sprinters should optimize front-side mechanics (leg motions occurring in front of the extended line through the torso) and minimize back-side mechanics. This study aimed to investigate whether variables related to front- and back-side mechanics can be distinguished from other previously highlighted kinematic variables (spatiotemporal variables and variables related to segment configuration and velocities at touchdown) in how they statistically predict performance. A total of 24 competitive sprinters (age: 23.1 [3.4] y, height: 1.81 [0.06] m, body mass: 75.7 [5.6] kg, and 100-m personal best: 10.86 [0.22] s) performed two 20-m starts from blocks and 2 to 3 flying sprints over 20 m. Kinematics were recorded in 3D using a motion tracking system with 21 cameras at a 250 Hz sampling rate. Several front- and back-side variables, including thigh (r = .64) and knee angle (r = .51) at lift-off and maximal thigh extension (r = .66), were largely correlated (P < .05) with accelerated running performance, and these variables displayed significantly higher correlations (P < .05) with accelerated running performance than nearly all the other analyzed variables. However, the relationship directions for most front- and back-side variables during accelerated running were opposite to how the theoretical concept has been described. Horizontal ankle velocity, contact time, and step rate displayed significantly higher correlation values with maximal velocity sprinting than the other variables (P < .05), and none of the included front- and back-side variables was significantly associated with maximal velocity sprinting. Overall, the present findings did not support the idea that front-side mechanics are crucial for sprint performance among the investigated sprinters.

  15. Sensor noise camera identification: countering counter-forensics

    NASA Astrophysics Data System (ADS)

    Goljan, Miroslav; Fridrich, Jessica; Chen, Mo

    2010-01-01

    In camera identification using sensor noise, the camera that took a given image can be determined with high certainty by establishing the presence of the camera's sensor fingerprint in the image. In this paper, we develop methods to reveal counter-forensic activities in which an attacker estimates the camera fingerprint from a set of images and pastes it onto an image from a different camera with the intent to introduce a false alarm and, in doing so, frame an innocent victim. We start by classifying different scenarios based on the sophistication of the attacker's activity and the means available to her and to the victim, who wishes to defend herself. The key observation is that at least some of the images that were used by the attacker to estimate the fake fingerprint will likely be available to the victim as well. We describe the so-called "triangle test" that helps the victim reveal the attacker's malicious activity with high certainty under a wide range of conditions. This test is then extended to the case when none of the images that the attacker used to create the fake fingerprint are available to the victim but the victim has at least two forged images to analyze. We demonstrate the test's performance experimentally and investigate its limitations. The conclusion that can be made from this study is that planting a sensor fingerprint in an image without leaving a trace is significantly more difficult than previously thought.

  16. 40 CFR 62.11601 - Identification of plan.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Identification of sources. The plan includes the following sulfuric acid plants: Allied Chemical, Hopewell Allied Chemical, Front Royal Du Pont, James River Smith Douglas, Chesapeake U.S. Army Ammo Plant, Radford Weaver Fertilizer, Norfolk (e) A variance issued to the E. I. du Pont de Nemours and Company James River Sulfuric...

  17. 40 CFR 62.11601 - Identification of plan.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Identification of sources. The plan includes the following sulfuric acid plants: Allied Chemical, Hopewell Allied Chemical, Front Royal Du Pont, James River Smith Douglas, Chesapeake U.S. Army Ammo Plant, Radford Weaver Fertilizer, Norfolk (e) A variance issued to the E. I. du Pont de Nemours and Company James River Sulfuric...

  18. 40 CFR 62.11601 - Identification of plan.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Identification of sources. The plan includes the following sulfuric acid plants: Allied Chemical, Hopewell Allied Chemical, Front Royal Du Pont, James River Smith Douglas, Chesapeake U.S. Army Ammo Plant, Radford Weaver Fertilizer, Norfolk (e) A variance issued to the E. I. du Pont de Nemours and Company James River Sulfuric...

  19. 40 CFR 62.11601 - Identification of plan.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Identification of sources. The plan includes the following sulfuric acid plants: Allied Chemical, Hopewell Allied Chemical, Front Royal Du Pont, James River Smith Douglas, Chesapeake U.S. Army Ammo Plant, Radford Weaver Fertilizer, Norfolk (e) A variance issued to the E. I. du Pont de Nemours and Company James River Sulfuric...

  20. 40 CFR 62.11601 - Identification of plan.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Identification of sources. The plan includes the following sulfuric acid plants: Allied Chemical, Hopewell Allied Chemical, Front Royal Du Pont, James River Smith Douglas, Chesapeake U.S. Army Ammo Plant, Radford Weaver Fertilizer, Norfolk (e) A variance issued to the E. I. du Pont de Nemours and Company James River Sulfuric...

  1. High-immersion three-dimensional display of the numerical computer model

    NASA Astrophysics Data System (ADS)

    Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu

    2013-08-01

    High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as designing and constructing desired buildings and houses, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, military areas and so on. However, most technologies provide 3D display in front of screens that are parallel with the walls, and the sense of immersion is decreased. To get the right multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the public focus plane, and the cameras' optical axes should be offset to the center of the public focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, which are ideal pinhole cameras, to display a 3D model in a computer system. We can use virtual cameras to simulate the shooting method of multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the observer's eye position in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter setting of the virtual cameras. The near clip plane parameter setting is the main point in the first method, while the rotation angle of the virtual cameras is the main point in the second method. In order to validate the results, we use D3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for viewing horizontally, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with the real objects in the real world.
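
    The offset perspective projection mentioned above is conventionally realized as an asymmetric view frustum: the near-plane window is shifted off the optical axis so the image plane stays parallel to the display surface while the view is aimed at the shared focus plane. The Python/NumPy sketch below builds such a matrix in the OpenGL glFrustum convention; the frustum bounds are illustrative values, not parameters from the paper.

```python
# Sketch of an asymmetric ("offset") perspective projection matrix in the
# OpenGL glFrustum convention, a standard way to realize off-axis virtual
# cameras. The frustum bounds below are illustrative only.
import numpy as np

def offset_frustum(left, right, bottom, top, near, far):
    """Column-vector projection matrix for an off-center near-plane window."""
    m = np.zeros((4, 4))
    m[0, 0] = 2.0 * near / (right - left)
    m[1, 1] = 2.0 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)   # horizontal offset term
    m[1, 2] = (top + bottom) / (top - bottom)   # vertical offset term
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2.0 * far * near / (far - near)
    m[3, 2] = -1.0
    return m

# A camera whose view window is shifted toward the centre of the shared focus
# plane: the near-plane bounds are asymmetric instead of rotating the camera,
# so the image plane stays parallel to the display ground plane.
P = offset_frustum(left=-0.02, right=0.06, bottom=-0.01, top=0.05,
                   near=0.1, far=100.0)
corner = P @ np.array([0.06, 0.05, -0.1, 1.0])   # top-right corner of the near plane
print(corner[:3] / corner[3])                     # maps to NDC (1, 1, -1)
```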

  2. Optimising Camera Traps for Monitoring Small Mammals

    PubMed Central

    Glen, Alistair S.; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera’s field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera’s field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats ( Mustela erminea ), feral cats (Felis catus) and hedgehogs ( Erinaceus europaeus ). Trigger speeds of 0.2–2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera’s field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps. PMID:23840790

  3. Rupture propagation behavior and the largest possible earthquake induced by fluid injection into deep reservoirs

    NASA Astrophysics Data System (ADS)

    Gischig, Valentin S.

    2015-09-01

    Earthquakes caused by fluid injection into deep underground reservoirs constitute an increasingly recognized risk to populations and infrastructure. Quantitative assessment of induced seismic hazard, however, requires estimating the maximum possible magnitude earthquake that may be induced during fluid injection. Here I seek constraints on an upper limit for the largest possible earthquake using source-physics simulations that consider rate-and-state friction and hydromechanical interaction along a straight homogeneous fault. Depending on the orientation of the pressurized fault in the ambient stress field, different rupture behaviors can occur: (1) uncontrolled rupture-front propagation beyond the pressure front or (2) rupture-front propagation arresting at the pressure front. In the first case, fault properties determine the earthquake magnitude, and the upper magnitude limit may be similar to natural earthquakes. In the second case, the maximum magnitude can be controlled by carefully designing and monitoring injection and thus restricting the pressurized fault area.

  4. Minimalist identification system based on venous map for security applications

    NASA Astrophysics Data System (ADS)

    Jacinto G., Edwar; Martínez S., Fredy; Martínez S., Fernando

    2015-07-01

    This paper proposes a technique and an algorithm used to build a device for people identification through the processing of a low-resolution camera image. The infrared channel is the only information needed: sensing the blood's reaction to the proper wavelength yields a preliminary snapshot of the vascular map of the back side of the hand. The software uses this information to extract the characteristics of the user in a limited area (region of interest, ROI), unique for each user, which is applicable to biometric access control devices. Recognition prototypes of this kind are usually expensive, but in this case (a minimalist design) the biometric equipment uses only a low-cost camera and an adapted matrix of IR emitters to construct an economical and versatile prototype, without neglecting the high level of effectiveness that characterizes this kind of identification method.

  5. STS-27 crew poses for inflight portrait on forward flight deck with football

    NASA Technical Reports Server (NTRS)

    1988-01-01

    With a WILSON NFL football freefloating in front of them, the STS-27 astronauts pose on Atlantis', Orbiter Vehicle (OV) 104's, forward flight deck for the inflight crew portrait. Crewmembers, wearing blue mission t-shirts, are (left to right) Commander Robert L. Gibson, Mission Specialist (MS) Richard M. Mullane, MS Jerry L. Ross, MS William M. Shepherd, and Pilot Guy S. Gardner. Forward flight deck overhead control panels are visible above the crewmembers, the commander's and pilot's seats in front of them, and forward windows behind them. An auto-set 35mm camera mounted on the aft flight deck was used to take this photo. The football was later presented to the National Football League (NFL) at halftime of the Super Bowl in Miami.

  6. Dust Storm, Sahara Desert, Algeria/Niger Border, Africa

    NASA Image and Video Library

    1992-05-16

    STS049-92-071 (13 May 1992) --- The STS-49 crew aboard the Earth-orbiting Space Shuttle Endeavour captured this Saharan dust storm on the Algeria-Niger border. The south-looking, late-afternoon view shows one of the best examples in the Shuttle photo database of a dust storm. A series of gust fronts, caused by dissipating thunderstorms, has picked up dust along the outflow boundaries. Small cumulus clouds have formed over the most vigorously ascending parts of the dust front, enhancing the visual effect of the front. The storm is moving roughly north-northwest, at right angles to the most typical path for dust storms in this part of the Sahara (shown by lines of sand on the desert surface in the foreground). Storms such as this can move out into the Atlantic, bringing dust even as far as the Americas on some occasions. A crewmember used a 70mm handheld Hasselblad camera with a 100mm lens to record the frame.

  7. Failure and penetration response of borosilicate glass during short-rod impact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, C. E. Jr.; Orphal, D. L.; Behner, Th.

    2007-12-12

    The failure characterization of brittle materials like glass is of fundamental importance in describing the penetration resistance against projectiles. A critical question is whether this failure front remains 'steady' after the driving stress is removed. A test series with short gold rods (D = 1 mm, L/D ≈ 5-11) impacting borosilicate glass at ≈1 to 2 km/s was carried out to investigate this question. The reverse ballistic method was used for the experiments, and the impact and penetration process was observed simultaneously with five flash X-rays and a 16-frame high-speed optical camera. Very high measurement accuracy was established to ensure reliable results. Results show that the failure front induced by rod impact and penetration does arrest (ceases to propagate) after the rod is totally eroded inside the glass. The impact of a second rod after a short time delay reinitiates the failure front at about the same speed.

  8. Measurement of Front Curvature and Detonation Velocity for a Nonideal Heterogeneous Explosive in Axisymmetric and Two-Dimensional Geometries

    NASA Astrophysics Data System (ADS)

    Higgins, Andrew

    2009-06-01

    Detonation in a heterogeneous explosive with a relatively sparse concentration of reaction centers (``hot spots'') is investigated experimentally. The explosive system considered is nitromethane gelled with PMMA and with glass microballoons (GMB's) in suspension. The detonation velocity is measured as a function of the characteristic charge dimension (diameter or thickness) in both axisymmetric and two-dimensional planar geometries. The use of a unique, annular charge geometry (with the diameter of the annulus much greater than the annular gap thickness) permits quasi-two-dimensional detonations to be observed without undesirable lateral rarefactions that result from a finite aspect ratio. The detonation front curvature is also measured directly using an electronic streak camera. The results confirm the prior findings of Gois et al. (1996) which showed that, for a low concentration of GMB's, detonation propagation does not exhibit the expected 2:1 scaling from axisymmetric to planar geometries. This reinforces the idea that detonation in highly nonideal explosives is not governed exclusively by front curvature.

  9. THE MARS ORBITER CAMERA IS INSTALLED ON THE MARS GLOBAL SURVEYOR

    NASA Technical Reports Server (NTRS)

    1996-01-01

    In the Payload Hazardous Servicing Facility at KSC, installation is under way of the Mars Orbiter Camera (MOC) on the Mars Global Surveyor spacecraft. The MOC is one of a suite of six scientific instruments that will gather data during a two-year period about Martian topography, mineral distribution and weather. The Mars Global Surveyor is slated for launch aboard a Delta II expendable launch vehicle on November 6, the beginning of a 20-day launch period.

  10. The Road To The Objective Force. Armaments for the Army Transformation

    DTIC Science & Technology

    2001-06-18

    [Briefing slide excerpt listing vehicle variants and features:] Fire Support Vehicle - TOW 2B anti-tank capability under armor; detection of NBC hazards. Mortar Carrier - dismounted M121 120mm MRT initially... engaged from under armor; M6 launchers (x4); staring array thermal sight; height reduction for air transport; day camera; target acquisition sight; armament remote... PM BCT Anti-Tank Guided Missile Vehicle - TOW II; ITAS (Raytheon), 2 missiles; IBAS day camera; missile is remotely fired under armor; M6 smoke.

  11. Indirect Vision Driving with Fixed Flat Panel Displays for Near Unity, Wide, and Extended Fields of Camera View

    DTIC Science & Technology

    2001-06-01

    The choice of camera FOV may depend on the task being performed. The driver may prefer a unity view for driving along a known route to...increase his or her perception of potential road hazards. On the other hand, the driver may prefer a compressed image at road turns for route selection...with a supervisory evaluation of the road ahead and the impact on the driving schema. Included in this

  12. 49 CFR 383.153 - Information on the document and application.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... hazardous materials endorsements; (vi) S for school bus; and (vii) At the discretion of the State... explained on the front or back of the CDL document. (b) If the CDL is a Nonresident CDL, it shall contain...

  13. Optimization design of periscope type 3X zoom lens design for a five megapixel cellphone camera

    NASA Astrophysics Data System (ADS)

    Sun, Wen-Shing; Tien, Chuen-Lin; Pan, Jui-Wen; Chao, Yu-Hao; Chu, Pu-Yi

    2016-11-01

    This paper presents a periscope-type 3X zoom lens design for a five-megapixel cellphone camera. The optical system uses a right-angle prism in front of the zoom lenses to fold the optical path by 90°, resulting in a zoom lens length of 6 mm. The zoom lenses can therefore be embedded in a mobile phone with a thickness of 6 mm. The zoom lenses have three groups with six elements. The half field of view varies from 30° to 10.89°, the effective focal length is adjusted from 3.142 mm to 9.426 mm, and the F-number changes from 2.8 to 5.13.

  14. Advanced Wavefront Sensing and Control Testbed (AWCT)

    NASA Technical Reports Server (NTRS)

    Shi, Fang; Basinger, Scott A.; Diaz, Rosemary T.; Gappinger, Robert O.; Tang, Hong; Lam, Raymond K.; Sidick, Erkin; Hein, Randall C.; Rud, Mayer; Troy, Mitchell

    2010-01-01

    The Advanced Wavefront Sensing and Control Testbed (AWCT) is built as a versatile facility for developing and demonstrating, in hardware, the future technologies of wave front sensing and control algorithms for active optical systems. The testbed includes a source projector for a broadband point-source and a suite of extended scene targets, a dispersed fringe sensor, a Shack-Hartmann camera, and an imaging camera capable of phase retrieval wavefront sensing. The testbed also provides two easily accessible conjugated pupil planes which can accommodate the active optical devices such as fast steering mirror, deformable mirror, and segmented mirrors. In this paper, we describe the testbed optical design, testbed configurations and capabilities, as well as the initial results from the testbed hardware integrations and tests.

  15. 40 CFR 241.3 - Standards and procedures for identification of non-hazardous secondary materials that are solid...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... combustion units. 241.3 Section 241.3 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES SOLID WASTES USED AS FUELS OR INGREDIENTS IN COMBUSTION UNITS Identification of Non-Hazardous Secondary Materials That Are Solid Wastes When Used as Fuels or Ingredients in Combustion Units...

  16. 40 CFR 241.3 - Standards and procedures for identification of non-hazardous secondary materials that are solid...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... combustion units. 241.3 Section 241.3 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES SOLID WASTES USED AS FUELS OR INGREDIENTS IN COMBUSTION UNITS Identification of Non-Hazardous Secondary Materials That Are Solid Wastes When Used as Fuels or Ingredients in Combustion Units...

  17. 40 CFR 241.3 - Standards and procedures for identification of non-hazardous secondary materials that are solid...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... combustion units. 241.3 Section 241.3 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES SOLID WASTES USED AS FUELS OR INGREDIENTS IN COMBUSTION UNITS Identification of Non-Hazardous Secondary Materials That Are Solid Wastes When Used as Fuels or Ingredients in Combustion Units...

  18. The activity of autumn meteor showers in 2006-2008

    NASA Astrophysics Data System (ADS)

    Kartashova, Anna

    2015-03-01

    The purpose of meteor observations at INASAN is the study of meteor showers, as elements of the migrating material of the Solar System, and the estimation of the risk of hazardous collisions of spacecraft with stream particles. We therefore need to analyze meteor events with brightnesses of up to 8m, which stay in meteoroid streams for a long time and can be hazardous for spacecraft. The results of our single-station TV observations of autumn meteor showers for the period from 2006 to 2008 are presented. The high-sensitivity hybrid camera FAVOR (a system coupled with an image intensifier), with a limiting magnitude for meteors of about 9m-10m in a 20 × 18 degree field of view, was used for the observations. In 2006-2008, from October to November, more than 3 thousand meteors were detected; 65% of them have brightnesses from 6m to 9m. Identification with the autumn meteor showers (Orionids, Taurids, Draconids, Leonids) was carried out. In order to estimate the density of the influx of meteor matter to the Earth for these meteor showers, the Index of Meteor Activity (IMA) was calculated. The IMA distribution for the period 2006-2008 is given. The distributions of the autumn meteor showers (meteors with brightnesses of up to 8m) by stellar magnitude from 2006 to 2008 are also presented.

  19. Space-based infrared sensors of space target imaging effect analysis

    NASA Astrophysics Data System (ADS)

    Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang

    2018-02-01

    The target identification problem is one of the core problems of a ballistic missile defense system, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a point-source infrared imaging model for space-based sensors observing ballistic targets above the planet's atmosphere; it then simulates the infrared imaging of such ballistic targets from two aspects, the space-based sensor camera parameters and the target characteristics, and analyzes the effects of camera line-of-sight jitter, camera system noise and different wavebands on the imaging of the target.

  20. 10 CFR 851.21 - Hazard identification and assessment.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    .... Procedures must include methods to: (1) Assess worker exposure to chemical, physical, biological, or safety workplace hazards through appropriate workplace monitoring; (2) Document assessment for chemical, physical... hazards; (6) Perform routine job activity-level hazard analyses; (7) Review site safety and health...

  1. 10 CFR 851.21 - Hazard identification and assessment.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    .... Procedures must include methods to: (1) Assess worker exposure to chemical, physical, biological, or safety workplace hazards through appropriate workplace monitoring; (2) Document assessment for chemical, physical... hazards; (6) Perform routine job activity-level hazard analyses; (7) Review site safety and health...

  2. Dynamically reconfigurable holographic metasurface aperture for a Mills-Cross monochromatic microwave camera.

    PubMed

    Yurduseven, Okan; Marks, Daniel L; Fromenteze, Thomas; Smith, David R

    2018-03-05

    We present a reconfigurable, dynamic beam steering holographic metasurface aperture to synthesize a microwave camera at K-band frequencies. The aperture consists of a 1D printed microstrip transmission line with the front surface patterned into an array of slot-shaped subwavelength metamaterial elements (or meta-elements) dynamically tuned between "ON" and "OFF" states using PIN diodes. The proposed aperture synthesizes a desired radiation pattern by converting the waveguide-mode to a free space radiation by means of a binary modulation scheme. This is achieved in a holographic manner; by interacting the waveguide-mode (reference-wave) with the metasurface layer (hologram layer). It is shown by means of full-wave simulations that using the developed metasurface aperture, the radiated wavefronts can be engineered in an all-electronic manner without the need for complex phase-shifting circuits or mechanical scanning apparatus. Using the dynamic beam steering capability of the developed antenna, we synthesize a Mills-Cross composite aperture, forming a single-frequency all-electronic microwave camera.

  3. First Results from the Wide Angle Camera of the ROSETTA Mission .

    NASA Astrophysics Data System (ADS)

    Barbieri, C.; Fornasier, S.; Bertini, I.; Angrilli, F.; Bianchini, G. A.; Debei, S.; De Cecco, M.; Parzianello, G.; Zaccariotto, M.; Da Deppo, V.; Naletto, G.

    This paper gives a brief description of the Wide Angle Camera (WAC), built by the Center of Studies and Activities for Space (CISAS) of the University of Padova for the ESA ROSETTA Mission, of the data we have obtained about the new mission targets, and of the first results achieved after the launch in March 2004. The WAC is part of the OSIRIS imaging system, built under the PI-ship of Dr. U. Keller (Max-Planck-Institute for Solar System Studies), which also comprises a Narrow Angle Camera (NAC) built by the Laboratoire d'Astrophysique Spatiale (LAS) of Marseille. CISAS also had the responsibility to build the shutter and the front door mechanism for the NAC. The images show the excellent optical quality of the WAC, which exceeds the specifications in terms of encircled energy (80% in one pixel over a FoV of 12 × 12 square degrees), limiting magnitude (fainter than 13th magnitude in a 30 s exposure through a wideband red filter) and amount of distortion.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ranson, W.F.; Schaeffel, J.A.; Murphree, E.A.

    The response of prestressed and preheated plates subject to an exponentially decaying blast load was experimentally determined. A grid was reflected from the front surface of the plate and the response was recorded with a high speed camera. The camera used in this analysis was a rotating drum camera operating at 20,000 frames per second with a maximum of 224 frames at 39 microseconds separation. Inplane tension loads were applied to the plate by means of air cylinders. Maximum biaxial load applied to the plate was 500 pounds. Plate preheating was obtained with resistance heaters located in the specimen plate holder with a maximum capability of 500°F. Data analysis was restricted to the maximum conditions at the center of the plate. Strains were determined from the photographic data and the stresses were calculated from the strain data. Results were obtained from zero preload conditions to a maximum of 480 pounds inplane tension load and a plate temperature of 490°F. The blast load ranged from 6 to 23 psi.

  5. ePix: a class of architectures for second generation LCLS cameras

    DOE PAGES

    Dragone, A.; Caragiulo, P.; Markovic, B.; ...

    2014-03-31

    ePix is a novel class of ASIC architectures, based on a common platform, optimized to build modular scalable detectors for LCLS. The platform architecture is composed of a random access analog matrix of pixels with global shutter, fast parallel column readout, and dedicated sigma-delta analog-to-digital converters per column. It also implements a dedicated control interface and all the required support electronics to perform configuration, calibration and readout of the matrix. Based on this platform, a class of front-end ASICs and several camera modules, meeting different requirements, can be developed by designing specific pixel architectures. This approach reduces development time and expands the possibility of integrating detector modules with different size, shape or functionality in the same camera. The ePix platform is currently under development together with the first two integrating pixel architectures: ePix100, dedicated to ultra low noise applications, and ePix10k, for high dynamic range applications.

  6. Underwater Calibration of Dome Port Pressure Housings

    NASA Astrophysics Data System (ADS)

    Nocerino, E.; Menna, F.; Fassi, F.; Remondino, F.

    2016-03-01

    Underwater photogrammetry using consumer grade photographic equipment can be feasible for different applications, e.g. archaeology, biology, industrial inspections, etc. The use of a camera underwater can be very different from its terrestrial use due to the optical phenomena involved. The water and the camera pressure housing in front of the camera act as additional optical elements. Spherical dome ports are difficult to manufacture and consequently expensive, but at the same time they are the most useful for underwater photogrammetry, as they keep the main geometric characteristics of the lens unchanged. Nevertheless, the manufacturing and alignment of dome port pressure housing components can be a source of unexpected changes in radial and decentring distortion, and hence of systematic errors that can influence the final 3D measurements. The paper provides a brief introduction to the underwater optical phenomena involved in underwater photography, then presents the main differences between flat and dome ports, and finally discusses the effect of manufacturing on 3D measurements in two case studies.
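
    The optical difference between flat and dome ports noted above comes down to refraction at the port: behind a flat port, rays bend according to Snell's law, which narrows the in-water field of view and makes objects appear at roughly three quarters of their true distance, while an ideal dome port centred on the lens entrance pupil leaves ray angles unchanged. The short sketch below applies Snell's law at a single flat air/water interface under textbook assumptions (glass thickness ignored, standard refractive indices); it is a generic illustration, not a calculation from the paper.

```python
# Hedged sketch: Snell's law at a single flat air/water interface (glass
# thickness ignored) to show why a flat port narrows the in-water field of
# view. Indices and the example half-angle are ordinary textbook values.
import math

N_AIR, N_WATER = 1.0, 1.33

def in_water_angle(in_air_angle_deg):
    """Refracted ray angle in water for a ray leaving the lens at the given
    angle to the port normal (Snell: n_air*sin(a) = n_water*sin(w))."""
    a = math.radians(in_air_angle_deg)
    return math.degrees(math.asin(N_AIR * math.sin(a) / N_WATER))

half_fov_in_air = 40.0                             # illustrative lens half-angle
print(round(in_water_angle(half_fov_in_air), 1))   # ~28.9 deg behind a flat port
# The apparent distance of an on-axis object behind a flat port is roughly the
# true distance divided by the water index (~0.75x), one of the shifts an
# ideally aligned dome port avoids.
```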

  7. Mounted Video Camera Captures Launch of STS-112, Shuttle Orbiter Atlantis

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A color video camera mounted to the top of the External Tank (ET) provided this spectacular never-before-seen view of the STS-112 mission as the Space Shuttle Orbiter Atlantis lifted off in the afternoon of October 7, 2002. The camera provided views as the orbiter began its ascent until it reached near-orbital speed, about 56 miles above the Earth, including a view of the front and belly of the orbiter, a portion of the Solid Rocket Booster, and ET. The video was downlinked during flight to several NASA data-receiving sites, offering the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. Atlantis carried the S1 Integrated Truss Structure and the Crew and Equipment Translation Aid (CETA) Cart. The CETA is the first of two human-powered carts that will ride along the International Space Station's railway providing a mobile work platform for future extravehicular activities by astronauts. Landing on October 18, 2002, the Orbiter Atlantis ended its 11-day mission.

  9. Force Protection for Fire Fighters: Warm Zone Operations at Paramilitary Style Active Shooter Incidents in a Multi-Hazard Environment as a Fire Service Core Competency

    DTIC Science & Technology

    2012-03-01

    weapon. The paramilitary style assault by two students at Columbine High School in 1999 revealed serious shortcomings in the fire service “standby...1158 hours, four of those students were still lying on the lawn in front of the school cafeteria. Even though the scene was not secure, paramedics...Lance Kirklin, one of the severely injured students rescued from in front of the cafeteria, briefly stopped at the incident command post en route to

  10. Intelligent person identification system using stereo camera-based height and stride estimation

    NASA Astrophysics Data System (ADS)

    Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo

    2005-05-01

    In this paper, a stereo camera-based intelligent person identification system is suggested. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair by thresholding in the YCbCr color model. By correlating this segmented face area with the right input image, the location coordinates of the target face are acquired; these values are then used to control the pan/tilt system through a modified PID-based recursive controller. In addition, using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system can be calculated through a triangulation method. From this calculated distance and the pan and tilt angles, the target's real position in world space can be acquired, and from it the height and stride values can finally be extracted. Experiments with video images of 16 moving persons show that a person could be identified with these extracted height and stride parameters.
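
    The distance step at the heart of this approach is ordinary stereo and pan/tilt geometry. The following is a minimal sketch of that geometry only, not the authors' system; the focal length, baseline, disparity and pan/tilt angles are made-up illustrative values.

      import numpy as np

      def stereo_depth(disparity_px, focal_px, baseline_m):
          """Depth from a calibrated parallel stereo pair: Z = f * B / d."""
          return focal_px * baseline_m / disparity_px

      def world_position(range_m, pan_rad, tilt_rad):
          """Convert range plus pan/tilt angles into Cartesian coordinates
          (camera at the origin, x to the right, y up, z forward)."""
          x = range_m * np.cos(tilt_rad) * np.sin(pan_rad)
          y = range_m * np.sin(tilt_rad)
          z = range_m * np.cos(tilt_rad) * np.cos(pan_rad)
          return np.array([x, y, z])

      # Illustrative numbers: 12-pixel disparity, 800-pixel focal length, 12 cm baseline
      Z = stereo_depth(12.0, 800.0, 0.12)                 # -> 8.0 m
      p = world_position(Z, np.deg2rad(5), np.deg2rad(-10))
      print(Z, p)

    Once a target's position is known in world coordinates at successive video frames, quantities such as height and stride length follow from simple differences of those coordinates.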

  11. KENNEDY SPACE CENTER, FLA. - In the Payload Hazardous Servicing Facility at KSC, installation is under way of the Mars Orbiter Camera (MOC) on the Mars Global Surveyor spacecraft. The MOC is one of a suite of six scientific instruments that will gather data about Martian topography, mineral distribution and weather during a two-year period. The Mars Global Surveyor is slated for launch aboard a Delta II expendable launch vehicle on Nov. 6, the beginning of a 20-day launch period.

    NASA Image and Video Library

    1996-08-19

    KENNEDY SPACE CENTER, FLA. - In the Payload Hazardous Servicing Facility at KSC, installation is under way of the Mars Orbiter Camera (MOC) on the Mars Global Surveyor spacecraft. The MOC is one of a suite of six scientific instruments that will gather data about Martian topography, mineral distribution and weather during a two-year period. The Mars Global Surveyor is slated for launch aboard a Delta II expendable launch vehicle on Nov. 6, the beginning of a 20-day launch period.

  12. A View of Opportunity's Dance Moves

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This rear hazard-avoidance camera image taken by the Mars Exploration Rover Opportunity on the 37th martian day, or sol, of its mission (March 2, 2004) shows the tracks left by the rover during its latest 'dance,' or series of maneuvers, around the rock outcrop near its landing site. Note the view of the lander to the far left and the light-colored outcrop below the horizon. The rear solar panels, located above the rear hazard-avoidance cameras, are captured in the uppermost part of the image.

    Since driving off the lander, Opportunity has traveled along the entire outcrop, trenched, and completed a U-turn to revisit scientifically rich spots. Two of these spots are the rock regions dubbed 'El Capitan' and 'Last Chance.' Scientists have used the instruments on the rover's arm to conclude that this area of Mars was once soaked in water for extended amounts of time, possibly providing an environment favorable for life.

  13. KSC00pp1958

    NASA Image and Video Library

    2000-12-11

    Across from the Vehicle Assembly Building and Launch Control Center, Steve Thomas (left), host of This Old House, and Norm Abram (second from left), master carpenter on the series, watch as a videographer (in front) checks his camera. With them is astronaut John Herrington. The cast and crew of This Old House are filming at KSC for an episode of the show. Herrington is accompanying the film crew on their tour of KSC.

  14. Lunar Roving Vehicle gets speed workout by Astronaut John Young

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Lunar Roving Vehicle (LRV) gets a speed workout by Astronaut John W. Young in the 'Grand Prix' run during the third Apollo 16 extravehicular activity (EVA-3) at the Descartes landing site. Note the front wheels of the LRV are off the ground. This view is a frame from motion picture film exposed by a 16mm Maurer camera held by Astronaut Charles M. Duke Jr.

  15. Contextual view of Warner's Ranch (ranch house in center and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Contextual view of Warner's Ranch (ranch house in center and trading post/barn on right), showing San Felipe Road and orientation of buildings in San Jose Valley. Note approximate locations of Overland Trail (now paved highway) in front of house and San Diego cutoff (dirt road) on left. Camera facing northwest. - Warner Ranch, Ranch House, San Felipe Road (State Highway S2), Warner Springs, San Diego County, CA

  16. 86. VIEW OF LIQUID NITROGEN STORAGE FACILITY LOCATED DIRECTLY WEST ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    86. VIEW OF LIQUID NITROGEN STORAGE FACILITY LOCATED DIRECTLY WEST OF THE SLC-3W FUEL APRON. NOTE HEAT EXCHANGER IN BACKGROUND. CAMERA TOWER LOCATED DIRECTLY IN FRONT OF LIQUID NITROGEN STORAGE TANK. NITROGEN AND HELIUM GAS STORAGE TANKS AT SOUTH END OF FUEL APRON IN LOWER RIGHT CORNER. - Vandenberg Air Force Base, Space Launch Complex 3, Launch Pad 3 West, Napa & Alden Roads, Lompoc, Santa Barbara County, CA

  17. The MVACS Robotic Arm Camera

    NASA Astrophysics Data System (ADS)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  18. Single camera volumetric velocimetry in aortic sinus with a percutaneous valve

    NASA Astrophysics Data System (ADS)

    Clifford, Chris; Thurow, Brian; Midha, Prem; Okafor, Ikechukwu; Raghav, Vrishank; Yoganathan, Ajit

    2016-11-01

    Cardiac flows have long been understood to be highly three dimensional, yet traditional in vitro techniques used to capture these complexities are costly and cumbersome. Thus, two dimensional techniques are primarily used for heart valve flow diagnostics. The recent introduction of plenoptic camera technology allows for traditional cameras to capture both spatial and angular information from a light field through the addition of a microlens array in front of the image sensor. When combined with traditional particle image velocimetry (PIV) techniques, volumetric velocity data may be acquired with a single camera using off-the-shelf optics. Particle volume pairs are reconstructed from raw plenoptic images using a filtered refocusing scheme, followed by three-dimensional cross-correlation. This technique was applied to the sinus region (known for having highly three-dimensional flow structures) of an in vitro aortic model with a percutaneous valve. Phase-locked plenoptic PIV data was acquired at two cardiac outputs (2 and 5 L/min) and 7 phases of the cardiac cycle. The volumetric PIV data was compared to standard 2D-2C PIV. Flow features such as recirculation and stagnation were observed in the sinus region in both cases.
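
    The displacement-estimation step named above, three-dimensional cross-correlation of reconstructed particle volumes, can be illustrated with a generic FFT-based sketch (not the authors' processing code); the synthetic particle field and window size are placeholders.

      import numpy as np

      def volume_displacement(vol_a, vol_b):
          """Shift (in voxels) that maps vol_a onto vol_b, found as the peak of
          the FFT-based circular 3-D cross-correlation."""
          corr = np.fft.ifftn(np.conj(np.fft.fftn(vol_a)) * np.fft.fftn(vol_b)).real
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          # Wrap peak indices larger than half the window to negative shifts
          return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

      # Synthetic check: shift a random particle field by (2, -1, 3) voxels
      rng = np.random.default_rng(0)
      a = rng.random((32, 32, 32))
      b = np.roll(a, shift=(2, -1, 3), axis=(0, 1, 2))
      print(volume_displacement(a, b))   # -> [2, -1, 3]

    In practice the correlation is evaluated on small interrogation volumes rather than the whole reconstruction, yielding a vector field instead of a single bulk displacement.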

  19. Opto-mechanical design of the G-CLEF flexure control camera system

    NASA Astrophysics Data System (ADS)

    Oh, Jae Sok; Park, Chan; Kim, Jihun; Kim, Kang-Min; Chun, Moo-Young; Yu, Young Sam; Lee, Sungho; Nah, Jakyoung; Park, Sung-Joon; Szentgyorgyi, Andrew; McMuldroch, Stuart; Norton, Timothy; Podgorski, William; Evans, Ian; Mueller, Mark; Uomoto, Alan; Crane, Jeffrey; Hare, Tyson

    2016-08-01

    The GMT-Consortium Large Earth Finder (G-CLEF) is the very first light instrument of the Giant Magellan Telescope (GMT). The G-CLEF is a fiber-fed, optical-band echelle spectrograph that is capable of extremely precise radial velocity measurement. KASI (Korea Astronomy and Space Science Institute) is responsible for the Flexure Control Camera (FCC) included in the G-CLEF Front End Assembly (GCFEA). The FCC is a kind of guide camera, which monitors the field images focused on a fiber mirror to control the flexure and the focus errors within the GCFEA. The FCC consists of five optical components: a collimator including triple lenses for producing a pupil, neutral density filters allowing a much brighter star to be used as a target or a guide, a tent prism as a focus analyzer for measuring the focus offset at the fiber mirror, a reimaging camera with three pairs of lenses for focusing the beam on a CCD focal plane, and a CCD detector for capturing the image on the fiber mirror. In this article, we present the optical and mechanical FCC designs, which were modified after the PDR in April 2015.

  20. 10 CFR 851.21 - Hazard identification and assessment.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    .... Procedures must include methods to: (1) Assess worker exposure to chemical, physical, biological, or safety..., biological, and safety workplace hazards using recognized exposure assessment and testing methodologies and... hazards and the established controls within 90 days after identifying such hazards. The Head of DOE Field...

  1. 10 CFR 851.21 - Hazard identification and assessment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    .... Procedures must include methods to: (1) Assess worker exposure to chemical, physical, biological, or safety..., biological, and safety workplace hazards using recognized exposure assessment and testing methodologies and... hazards and the established controls within 90 days after identifying such hazards. The Head of DOE Field...

  2. 10 CFR 851.21 - Hazard identification and assessment.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    .... Procedures must include methods to: (1) Assess worker exposure to chemical, physical, biological, or safety..., biological, and safety workplace hazards using recognized exposure assessment and testing methodologies and... hazards and the established controls within 90 days after identifying such hazards. The Head of DOE Field...

  3. DETECTION AND IDENTIFICATION OF TOXIC AIR POLLUTANTS USING FIELD PORTABLE AND AIRBORNE REMOTE IMAGING SYSTEMS

    EPA Science Inventory

    Remote sensing technologies are a class of instrument and sensor systems that include laser imageries, imaging spectrometers, and visible to thermal infrared cameras. These systems have been successfully used for gas phase chemical compound identification in a variety of field e...

  4. Identification and Quantification Soil Redoximorphic Features by Digital Image Processing

    USDA-ARS?s Scientific Manuscript database

    Soil redoximorphic features (SRFs) have provided scientists and land managers with insight into relative soil moisture for approximately 60 years. The overall objective of this study was to develop a new method of SRF identification and quantification from soil cores using a digital camera and imag...

  5. Absolute Hugoniot measurements from a spherically convergent shock using x-ray radiography

    NASA Astrophysics Data System (ADS)

    Swift, Damian C.; Kritcher, Andrea L.; Hawreliak, James A.; Lazicki, Amy; MacPhee, Andrew; Bachmann, Benjamin; Döppner, Tilo; Nilsen, Joseph; Collins, Gilbert W.; Glenzer, Siegfried; Rothman, Stephen D.; Kraus, Dominik; Falcone, Roger W.

    2018-05-01

    The canonical high pressure equation of state measurement is to induce a shock wave in the sample material and measure two mechanical properties of the shocked material or shock wave. For accurate measurements, the experiment is normally designed to generate a planar shock which is as steady as possible in space and time, and a single state is measured. A converging shock strengthens as it propagates, so a range of shock pressures is induced in a single experiment. However, equation of state measurements must then account for spatial and temporal gradients. We have used x-ray radiography of spherically converging shocks to determine states along the shock Hugoniot. The radius-time history of the shock, and thus its speed, was measured by radiographing the position of the shock front as a function of time using an x-ray streak camera. The density profile of the shock was then inferred from the x-ray transmission at each instant of time. Simultaneous measurement of the density at the shock front and the shock speed determines an absolute mechanical Hugoniot state. The density profile was reconstructed using the known, unshocked density which strongly constrains the density jump at the shock front. The radiographic configuration and streak camera behavior were treated in detail to reduce systematic errors. Measurements were performed on the Omega and National Ignition Facility lasers, using a hohlraum to induce a spatially uniform drive over the outside of a solid, spherical sample and a laser-heated thermal plasma as an x-ray source for radiography. Absolute shock Hugoniot measurements were demonstrated for carbon-containing samples of different composition and initial density, up to temperatures at which K-shell ionization reduced the opacity behind the shock. Here we present the experimental method using measurements of polystyrene as an example.
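
    As a reminder of why these two measurements suffice (standard Rankine-Hugoniot relations, not equations quoted from the paper): for material initially at rest at density \rho_0 with negligible initial pressure, conservation of mass and momentum across the shock give

      \rho_0 U_s = \rho_1 (U_s - u_p) \quad\Longrightarrow\quad u_p = U_s \left( 1 - \frac{\rho_0}{\rho_1} \right),
      P_1 = \rho_0 U_s u_p = \rho_0 U_s^2 \left( 1 - \frac{\rho_0}{\rho_1} \right),

    so the shock speed U_s from the radius-time history and the compressed density \rho_1 from the radiographic profile fix the particle velocity u_p and pressure P_1 without reference to an external standard.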

  6. Dynamics identification of a piezoelectric vibrational energy harvester by image analysis with a high speed camera

    NASA Astrophysics Data System (ADS)

    Wolszczak, Piotr; Łygas, Krystian; Litak, Grzegorz

    2018-07-01

    This study investigates dynamic responses of a nonlinear vibration energy harvester. The nonlinear mechanical resonator consists of a flexible beam moving like an inverted pendulum between amplitude limiters. It is coupled with a piezoelectric converter, and excited kinematically. Consequently, the mechanical energy input is converted into the electrical power output on the loading resistor included in an electric circuit attached to the piezoelectric electrodes. The curvature of beam mode shapes as well as deflection of the whole beam are examined using a high speed camera. The visual identification results are compared with the voltage output generated by the piezoelectric element for corresponding frequency sweeps and analyzed by the Hilbert transform.
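
    The Hilbert-transform step mentioned above, extracting an instantaneous amplitude and frequency from the measured voltage during a frequency sweep, can be sketched as follows; this is a generic illustration using scipy with a synthetic swept signal standing in for the measured output.

      import numpy as np
      from scipy.signal import hilbert, chirp

      fs = 2000.0                                      # sampling rate, Hz (placeholder)
      t = np.arange(0, 10, 1 / fs)
      # Synthetic stand-in for the piezoelectric voltage during a frequency sweep
      voltage = chirp(t, f0=5, f1=25, t1=10) * (1 + 0.3 * np.sin(2 * np.pi * 0.2 * t))

      analytic = hilbert(voltage)                      # analytic signal v + i*H{v}
      amplitude = np.abs(analytic)                     # instantaneous amplitude (envelope)
      phase = np.unwrap(np.angle(analytic))
      inst_freq = np.diff(phase) / (2 * np.pi) * fs    # instantaneous frequency, Hz

      print(amplitude[:5], inst_freq[:5])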

  7. High-resolution topomapping of candidate MER landing sites with Mars Orbiter Camera narrow-angle images

    USGS Publications Warehouse

    Kirk, R.L.; Howington-Kraus, E.; Redding, B.; Galuszka, D.; Hare, T.M.; Archinal, B.A.; Soderblom, L.A.; Barrett, J.M.

    2003-01-01

    We analyzed narrow-angle Mars Orbiter Camera (MOC-NA) images to produce high-resolution digital elevation models (DEMs) in order to provide topographic and slope information needed to assess the safety of candidate landing sites for the Mars Exploration Rovers (MER) and to assess the accuracy of our results by a variety of tests. The mapping techniques developed also support geoscientific studies and can be used with all present and planned Mars-orbiting scanner cameras. Photogrammetric analysis of MOC stereopairs yields DEMs with 3-pixel (typically 10 m) horizontal resolution, vertical precision consistent with ±0.22 pixel matching errors (typically a few meters), and slope errors of 1-3°. These DEMs are controlled to the Mars Orbiter Laser Altimeter (MOLA) global data set and consistent with it at the limits of resolution. Photoclinometry yields DEMs with single-pixel (typically ~3 m) horizontal resolution and submeter vertical precision. Where the surface albedo is uniform, the dominant error is 10-20% relative uncertainty in the amplitude of topography and slopes after "calibrating" photoclinometry against a stereo DEM to account for the influence of atmospheric haze. We mapped portions of seven candidate MER sites and the Mars Pathfinder site. Safety of the final four sites (Elysium, Gusev, Isidis, and Meridiani) was assessed by mission engineers by simulating landings on our DEMs of "hazard units" mapped in the sites, with results weighted by the probability of landing on those units; summary slope statistics show that most hazard units are smooth, with only small areas of etched terrain in Gusev crater posing a slope hazard.
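
    For orientation, the quoted vertical precision is consistent with the usual stereo error-propagation estimate (a textbook relation stated here as a reminder, with illustrative rather than paper-specific geometry):

      \sigma_Z \approx \rho \cdot \mathrm{GSD} \cdot \frac{H}{B},

    so a matching error of \rho ≈ 0.22 pixel at a ground sample distance of roughly 3 m and a height-to-base ratio of a few yields vertical precision on the order of a few meters, as stated above.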

  8. Adjustable-Viewing-Angle Endoscopic Tool for Skull Base and Brain Surgery

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; Liao, Anna; Manohara, Harish; Shahinian, Hrayr

    2008-01-01

    The term Multi-Angle and Rear Viewing Endoscopic tooL (MARVEL) denotes an auxiliary endoscope, now undergoing development, that a surgeon would use in conjunction with a conventional endoscope to obtain additional perspective. The role of the MARVEL in endoscopic brain surgery would be similar to the role of a mouth mirror in dentistry. Such a tool is potentially useful for in-situ planetary geology applications for the close-up imaging of unexposed rock surfaces in cracks or those not in the direct line of sight. A conventional endoscope provides mostly a frontal view, that is, a view along its longitudinal axis and, hence, along a straight line extending from an opening through which it is inserted. The MARVEL could be inserted through the same opening as that of the conventional endoscope, but could be adjusted to provide a view from almost any desired angle. The MARVEL camera image would be displayed, on the same monitor as that of the conventional endoscopic image, as an inset within the conventional endoscopic image. For example, while viewing a tumor from the front in the conventional endoscopic image, the surgeon could simultaneously view the tumor from the side or the rear in the MARVEL image, and could thereby gain additional visual cues that would aid in precise three-dimensional positioning of surgical tools to excise the tumor. Indeed, a side or rear view through the MARVEL could be essential in a case in which the object of surgical interest was not visible from the front. The conceptual design of the MARVEL exploits the surgeon's familiarity with endoscopic surgical tools. The MARVEL would include a miniature electronic camera and miniature radio transmitter mounted on the tip of a surgical tool derived from an endo-scissor (see figure). The inclusion of the radio transmitter would eliminate the need for wires, which could interfere with manipulation of this and other surgical tools. The handgrip of the tool would be connected to a linkage similar to that of an endo-scissor, but the linkage would be configured to enable adjustment of the camera angle instead of actuation of a scissor blade. It is envisioned that thicknesses of the tool shaft and the camera would be less than 4 mm, so that the camera-tipped tool could be swiftly inserted and withdrawn through a dime-size opening. Electronic cameras having dimensions of the order of millimeters are already commercially available, but their designs are not optimized for use in endoscopic brain surgery. The variety of potential endoscopic, thoracoscopic, and laparoscopic applications can be expected to increase as further development of electronic cameras yields further miniaturization and improvements in imaging performance.

  9. Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum †

    PubMed Central

    Kresnaraman, Brahmastro; Deguchi, Daisuke; Takahashi, Tomokazu; Mekada, Yoshito; Ide, Ichiro; Murase, Hiroshi

    2016-01-01

    During the night or in poorly lit areas, thermal cameras are a better choice instead of normal cameras for security surveillance because they do not rely on illumination. A thermal camera is able to detect a person within its view, but identification from only thermal information is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA). The reconstruction is done by utilizing the relationship between images in both thermal infrared and visible spectra obtained by CCA. The whole image is processed in the first step while the second step processes patches in an image. Results show that the proposed method gives satisfying results with the two-step approach and outperforms comparative methods in both quality and recognition evaluations. PMID:27110781
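
    The core of the first (whole-image) step, learning a CCA mapping between paired thermal and visible feature vectors and predicting the visible representation from a new thermal input, can be sketched with scikit-learn's CCA. This is a simplified stand-in for the authors' two-step method; the flattened image vectors, dimensions and component count below are synthetic placeholders.

      import numpy as np
      from sklearn.cross_decomposition import CCA

      rng = np.random.default_rng(0)
      n_pairs, d_thermal, d_visible = 200, 16 * 16, 16 * 16

      # Paired training data: rows are flattened thermal / visible face images
      X_thermal = rng.random((n_pairs, d_thermal))
      Y_visible = rng.random((n_pairs, d_visible))

      cca = CCA(n_components=32, max_iter=1000)
      cca.fit(X_thermal, Y_visible)

      # Reconstruct the visible-spectrum representation of a new thermal image
      x_new = rng.random((1, d_thermal))
      y_reconstructed = cca.predict(x_new)     # regression back through the CCA subspace
      print(y_reconstructed.shape)             # (1, 256)

    The patch-based second step described in the abstract would repeat the same fit/predict cycle on local image patches rather than whole images.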

  10. A View From Below the Rover Deck

    NASA Image and Video Library

    2012-08-17

    The Curiosity engineering team created this cylindrical-projection view from images taken by NASA's Curiosity rover's rear hazard-avoidance cameras underneath the rover deck on Sol 0. Pictured here are the pigeon-toed wheels in their stowed position.

  11. JPL-20170801-MSLf-0001-Rover POV Five Years of Curiosity on Mars

    NASA Image and Video Library

    2017-08-02

    Five years of images from the Mars Science Laboratory rover Curiosity's Hazard Avoidance Camera (Hazcam) were used to create this time-lapse movie. An inset map shows the rover's location in Mars' Gale Crater.

  12. Earth observations taken during the STS-103 mission

    NASA Image and Video Library

    1999-12-23

    STS103-730-032 (19-27 December 1999) --- One of the astronauts aboard the Earth-orbiting Space Shuttle Discovery used a handheld 70mm camera to capture the southern to middle Rocky Mountains in low sunlight. The middle Rockies include the Big Horn range of Wyoming (snow capped range almost center of horizon) and the Uinta Mountains of northeastern Utah (snow capped range left side of horizon). The southern Rockies include the Front Range, Sangre de Cristo Mountains, Sawatch Range, and the San Juan Mountains. The eastern (Front Range, Sangre de Cristo) and western ranges (Sawatch, San Juans) are separated by intermontane basins. The southernmost basin (near center of the image) is the San Luis Valley of Colorado. On the eastern edge of the San Luis Valley are the Sangre de Cristo Mountains.

  13. Dale Reed with model in front of M2-F1

    NASA Image and Video Library

    1967-03-06

    Dale Reed with a model of the M2-F1 in front of the actual lifting body. Reed used the model to show the potential of the lifting bodies. He first flew it into tall grass to test stability and trim, then hand-launched it from buildings for longer flights. Finally, he towed the lifting-body model aloft using a powered model airplane known as the "Mothership." A timer released the model and it glided to a landing. Dale's wife Donna used a 9 mm camera to film the flights of the model. Its stability as it glided--despite its lack of wings--convinced Milt Thompson and some Flight Research Center engineers including the center director, Paul Bikle, that a piloted lifting body was possible.

  14. United States Homeland Security and National Biometric Identification

    DTIC Science & Technology

    2002-04-09

    security number. Biometrics is the use of unique individual traits such as fingerprints, iris eye patterns, voice recognition, and facial recognition to...technology to control access onto their military bases using a Defense Manpower Management Command developed software application. FACIAL Facial recognition systems...installed facial recognition systems in conjunction with a series of 200 cameras to fight street crime and identify terrorists. The cameras, which are

  15. Observation of Planetary Motion Using a Digital Camera

    ERIC Educational Resources Information Center

    Meyn, Jan-Peter

    2008-01-01

    A digital SLR camera with a standard lens (50 mm focal length, f/1.4) on a fixed tripod is used to obtain photographs of the sky which contain stars up to 8^m apparent magnitude. The angle of view is large enough to ensure visual identification of the photograph with a large sky region in a stellar map. The resolution is sufficient to…

  16. A Robust Camera-Based Interface for Mobile Entertainment

    PubMed Central

    Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier

    2016-01-01

    Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies to consider the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions, therefore, the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user’s head by processing the frames provided by the mobile device’s front camera, and its position is then used to interact with the mobile apps. First, we evaluate the interface as a pointing device to study its accuracy, and different factors to configure such as the gain or the device’s orientation, as well as the optimal target size for the interface. Second, we present an in the wild study to evaluate the usage and the user’s perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts and we register usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people. PMID:26907288
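
    The underlying idea, detecting the head in front-camera frames and turning its position into a pointer coordinate, can be sketched as below using OpenCV's stock Haar face detector. This is a generic illustration, not the authors' tracker; the gain value, screen size and use of the default webcam are arbitrary assumptions.

      import cv2

      # Stock frontal-face Haar cascade shipped with OpenCV
      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      SCREEN_W, SCREEN_H, GAIN = 1080, 1920, 1.5   # portrait phone screen, arbitrary gain

      def head_to_pointer(frame_bgr):
          """Map the centre of the largest detected face to a screen coordinate."""
          gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
          faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
          if len(faces) == 0:
              return None
          x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest face
          cx, cy = x + w / 2.0, y + h / 2.0
          fh, fw = gray.shape
          # Normalise to [0, 1], apply gain about the centre, clamp, scale to screen
          nx = min(max((cx / fw - 0.5) * GAIN + 0.5, 0.0), 1.0)
          ny = min(max((cy / fh - 0.5) * GAIN + 0.5, 0.0), 1.0)
          return int(nx * SCREEN_W), int(ny * SCREEN_H)

      cap = cv2.VideoCapture(0)                    # front camera (device 0 assumed)
      ok, frame = cap.read()
      if ok:
          print(head_to_pointer(frame))
      cap.release()

    The gain factor plays the same role as the configurable pointing gain evaluated in the study: it trades reach for precision around the centre of the image.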

  17. 40 CFR 261.7 - Residues of hazardous waste in empty containers.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 25 2010-07-01 2010-07-01 false Residues of hazardous waste in empty containers. 261.7 Section 261.7 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) IDENTIFICATION AND LISTING OF HAZARDOUS WASTE General § 261.7 Residues of hazardous...

  18. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.

    2011-01-01

    The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 μm) or long-wave infrared (LWIR) radiation (8-12 μm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24 hour water and 12 hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.

  19. Bird's-Eye View of Opportunity at 'Erebus' (Polar)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This view combines frames taken by the panoramic camera (Pancam) on NASA's Mars Exploration Rover Opportunity on the rover's 652nd through 663rd Martian days, or sols (Nov. 23 to Dec. 5, 2005), at the edge of 'Erebus Crater.' The mosaic is presented as a polar projection. This type of projection provides a kind of overhead view of all of the surrounding terrain. Opportunity examined targets on the outcrop called 'Rimrock' in front of the rover, testing the mobility and operation of Opportunity's robotic arm. The view shows examples of the dunes and ripples that Opportunity has been crossing as the rover drives on the Meridiani plains.

    This view is an approximate true color rendering composed of images taken through the camera's 750-nanometer, 530-nanometer and 430-nanometer filters.

  20. WPSS: watching people security services

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Baan, Jan; Borsboom, Sander; van Zon, Kasper; Luo, Xinghan; Loke, Ben; Stoeller, Bram; van Kuilenburg, Hans; Dijk, Judith

    2013-10-01

    To improve security, the number of surveillance cameras is rapidly increasing. However, the number of human operators remains limited and only a selection of the video streams are observed. Intelligent software services can help to find people quickly, evaluate their behavior and show the most relevant and deviant patterns. We present a software platform that contributes to the retrieval and observation of humans and to the analysis of their behavior. The platform consists of mono- and stereo-camera tracking, re-identification, behavioral feature computation, track analysis, behavior interpretation and visualization. This system is demonstrated in a busy shopping mall with multiple cameras and different lighting conditions.

  1. Optical stereo video signal processor

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    An optical video signal processor is described which produces a two-dimensional cross-correlation in real time of images received by a stereo camera system. The optical image of each camera is projected on respective liquid crystal light valves. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.

  2. Dichotomous Results Using Polarized Illumination with Single Chip Color Cameras

    DTIC Science & Technology

    2013-01-01

    response is both strain and chemically induced at an interior laminate layer interface. The size and location of the pattern are crucial and not the...the ideal for making photoelastic stress measurements, which were not required for this sample. ... Figure 8. A single laminate as seen... Figure 9. The observed response was isolated to a single layer of the laminate structure. The analyzer is in front of the base

  3. DCE - PS Linteris in front of rack

    NASA Image and Video Library

    2016-08-12

    STS083-312-017 (4-8 April 1997) --- Payload specialist Gregory T. Linteris sets up a 35mm camera, one of three photographic/recording systems on the Drop Combustion Experiment (DCE) Apparatus. DCE is an enclosed chamber in which Helium-Oxygen fuel mixtures are injected and burned as single droplets. Combustion of fuel droplets is an important part of many operations, including home heating, power production by gas turbines, and combustion of gasoline in automobile engines.

  4. Explosive Testing of Class 1.3 Rocket Booster Propellant

    DTIC Science & Technology

    1994-08-01

    molds were lined with 0.025 mm (0.001 in.) Velostat conductive plastic sheet and sprayed with a mold release that dried leaving fine Teflon powder... Velostat sheet (0.03 in.) was wrapped around the sample and grounded for improved electrostatic safety. Similar to previous cylinder tests, the...layer of thin Velostat plastic sheet, its contribution to camera viewing distortion of the flame front is not known. Overall, an average velocity over

  5. Impact Flash Monitoring Facility on the Deep Space Gateway

    NASA Astrophysics Data System (ADS)

    Needham, D. H.; Moser, D. E.; Suggs, R. M.; Cooke, W. J.; Kring, D. A.; Neal, C. R.; Fassett, C. I.

    2018-02-01

    Cameras mounted to the Deep Space Gateway exterior will detect flashes caused by impacts on the lunar surface. Observed flashes will help constrain the current lunar impact flux and assess hazards faced by crews living and working in cislunar space.

  6. Theodolite with CCD Camera for Safe Measurement of Laser-Beam Pointing

    NASA Technical Reports Server (NTRS)

    Crooke, Julie A.

    2003-01-01

    The simple addition of a charge-coupled-device (CCD) camera to a theodolite makes it safe to measure the pointing direction of a laser beam. The present state of the art requires this to be a custom addition because theodolites are manufactured without CCD cameras as standard or even optional equipment. A theodolite is an alignment telescope equipped with mechanisms to measure the azimuth and elevation angles to the sub-arcsecond level. When measuring the angular pointing direction of a Class II laser with a theodolite, one could place a calculated amount of neutral density (ND) filters in front of the theodolite's telescope. One could then safely view and measure the laser's boresight looking through the theodolite's telescope without great risk to one's eyes. This method for a Class II visible-wavelength laser is not acceptable even to attempt for a Class IV laser and is not applicable to an infrared (IR) laser. If one chooses insufficient attenuation or forgets to use the filters, then looking at the laser beam through the theodolite could cause instant blindness. The CCD camera is already commercially available. It is a small, inexpensive, black-and-white CCD circuit-board-level camera. An interface adaptor was designed and fabricated to mount the camera onto the eyepiece of the specific theodolite's viewing telescope. Other equipment needed for operation of the camera includes power supplies, cables, and a black-and-white television monitor. The picture displayed on the monitor is equivalent to what one would see when looking directly through the theodolite. Again, the additional advantage afforded by a cheap black-and-white CCD camera is that it is sensitive to infrared as well as to visible light. Hence, one can use the camera coupled to a theodolite to measure the pointing of an infrared as well as a visible laser.
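
    The "calculated amount" of neutral density filtering comes down to stacking optical densities until the transmitted power falls below the relevant viewing limit, since transmitted power is P_t = P_0 * 10^(-OD) and optical densities add. The sketch below uses illustrative numbers only, not values from this work, and the assumed safe-power limit is a placeholder.

      import math

      def required_optical_density(beam_power_w, safe_power_w):
          """Total ND optical density needed so that the transmitted power
          P_t = P_0 * 10**(-OD) falls at or below the assumed safe level."""
          return max(0.0, math.log10(beam_power_w / safe_power_w))

      # Illustrative numbers only: 1 mW visible beam, assumed 0.04 mW viewing limit
      od = required_optical_density(1e-3, 4e-5)
      print(round(od, 2))        # ~1.4, i.e. stack ND filters totalling OD >= 1.4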

  7. Broadband Achromatic Telecentric Lens

    NASA Technical Reports Server (NTRS)

    Mouroulis, Pantazis

    2007-01-01

    A new type of lens design features broadband achromatic performance as well as telecentricity, using a minimum number of spherical elements. With appropriate modifications, the lens design form can be tailored to cover the range of response of the focal-plane array, from Si (400-1,000 nm) to InGaAs (400-1,700 or 2,100 nm) or InSb/HgCdTe reaching to 2,500 nm. For reference, lenses typically are achromatized over the visible wavelength range of 480-650 nm. In remote sensing applications, there is a need for broadband achromatic telescopes, normally satisfied with mirror-based systems. However, mirror systems are not always feasible due to size or geometry restrictions. They also require expensive aspheric surfaces. Non-obscured mirror systems can be difficult to align and have a limited (essentially one-dimensional) field of view. Centrally obscured types have a two-dimensional but very limited field in addition to the obscuration. Telecentricity is a highly desirable property for matching typical spectrometer types, as well as for reducing the variation of the angle of incidence and cross-talk on the detector for simple camera types. This rotationally symmetric telescope with no obscuration and using spherical surfaces and selected glass types fills a need in the range of short focal lengths. It can be used as a compact front unit for a matched spectrometer, as an ultra-broadband camera objective lens, or as the optics of an integrated camera/spectrometer in which the wavelength information is obtained by the use of strip or linear variable filters on the focal plane array. This kind of camera and spectrometer system can find applications in remote sensing, as well as in-situ applications for geological mapping and characterization of minerals, ecological studies, and target detection and identification through spectral signatures. Commercially, the lens can be used in quality-control applications via spectral analysis. The lens design is based on the rear landscape lens with the aperture stop in front of all elements. This allows sufficient room for telecentricity in addition to making the stop easily accessible. The crucial design features are the use of a doublet with an ultra-low dispersion glass (fluorite or S-FPL53), and the use of a strong negative element, which enables flat field and telecentricity in conjunction with the last (field lens) element. The field lens also can be designed to be in contact with the array, a feature that is desirable in some applications. The lens has a 20° field of view, for a 50-mm focal length, and is corrected over the range of wavelengths of 450-2,300 nm. Transverse color, which is the most pernicious aberration for spectroscopic work, is controlled at the level of 1 μm or below at the 0.7 field position and 5 μm at full field. The maximum chief ray angle is less than 1.7°, providing good telecentricity. An additional feature of this lens is that it is made exclusively with glasses that provide good transmission up to 2,300 nm and even some transmission to 2,500 nm; thus, the lens can be used in applications that cover the entire solar-reflected spectrum. Alternative realizations are possible that provide enhanced resolution and even less transverse color over a narrower wavelength range.

  8. Hazards and hazard combinations relevant for the safety of nuclear power plants

    NASA Astrophysics Data System (ADS)

    Decker, Kurt; Brinkman, Hans; Raimond, Emmanuel

    2017-04-01

    The potential of the contemporaneous impact of different, yet causally related, hazardous events and event cascades on nuclear power plants is a major contributor to the overall risk of nuclear installations. In the aftermath of the Fukushima accident, which was caused by a combination of severe ground shaking by an earthquake, an earthquake-triggered tsunami and the disruption of the plants from the electrical grid by a seismically induced landslide, hazard combinations and hazard cascades moved into the focus of nuclear safety research. We therefore developed an exhaustive list of external hazards and hazard combinations which pose potential threats to nuclear installations in the framework of the European project ASAMPSAE (Advanced Safety Assessment: Extended PSA). The project gathers 31 partners from Europe, North America and Japan. The list comprises exhaustive lists of natural hazards, external man-made hazards, and a cross-correlation matrix of these hazards. The hazard list is regarded as comprehensive, as it includes all types of hazards previously cited in documents by IAEA, the Western European Nuclear Regulators Association (WENRA), and others. 73 natural hazards and 24 man-made external hazards are included. Natural hazards are grouped into seismotectonic hazards, flooding and hydrological hazards, extreme values of meteorological phenomena, rare meteorological phenomena, biological hazards / infestation, geological hazards, and forest fire / wild fire. The list of external man-made hazards includes industry accidents, military accidents, transportation accidents, pipeline accidents and other man-made external events. The large number of different hazards results in the extremely large number of 5,151 theoretically possible hazard combinations (not considering hazard cascades). In principle, all of these combinations can occur by random coincidence, except for 82 hazard combinations that, depending on the time scale, are mutually exclusive (e.g., extremely high air temperature and surface ice). Our dataset further provides information on hazard combinations which are more likely to occur than just by random coincidence. 577 correlations between individual hazards are identified by expert opinion and shown in a cross-correlation chart. The combinations discriminate between: (1) causally connected hazards (cause-effect relation), where one hazard (e.g., coastal erosion) may be caused by another hazard (e.g., storm surge), or where one hazard (e.g., high wind) is a prerequisite for a correlated hazard (e.g., storm surge); the identified causal links are not commutative. (2) Associated hazards ("contemporary" events), which are likely to occur at the same time due to a common root cause (e.g., a cold front of a meteorological low pressure area which leads to a drop of air pressure, high wind, thunderstorm, lightning, heavy rain and hail); the root cause may not necessarily be regarded as a hazard by itself. The hazard list and the hazard correlation chart may serve as a starting point for the hazard analysis process for nuclear installations in Level 1 PSA as outlined by IAEA (2010), the definition of design basis for nuclear reactors, and the assessment of design extension conditions as required by WENRA-RHWG (2014). It may further be helpful for the identification of hazard combinations and hazard cascades which threaten other critical infrastructure. References: Decker, K. & Brinkman, H., 2017. List of external hazards to be considered in extended PSA. 
Report No. ASAMPSA_E/WP21/D21.2/2017-41 - IRSN/PSN-RES/SAG/2017-00011. IAEA, 2010. Development and Application of Level 1 Probabilistic Safety Assessment for Nuclear Power Plants. Safety Guide No. SSG-3, Vienna. http://www-pub.iaea.org/books/ WENRA-RHWG, 2014. WENRA Safety Reference Levels for Existing Reactors: Update in Relation to Lessons Learned from TEPCO Fukushima Dai-Ichi Accident. http://www.wenra.org/publications/

  9. A Versatile Time-Lapse Camera System Developed by the Hawaiian Volcano Observatory for Use at Kilauea Volcano, Hawaii

    USGS Publications Warehouse

    Orr, Tim R.; Hoblitt, Richard P.

    2008-01-01

    Volcanoes can be difficult to study up close. Because it may be days, weeks, or even years between important events, direct observation is often impractical. In addition, volcanoes are often inaccessible due to their remote location and (or) harsh environmental conditions. An eruption adds another level of complexity to what already may be a difficult and dangerous situation. For these reasons, scientists at the U.S. Geological Survey (USGS) Hawaiian Volcano Observatory (HVO) have, for years, built camera systems to act as surrogate eyes. With the recent advances in digital-camera technology, these eyes are rapidly improving. One type of photographic monitoring involves the use of near-real-time network-enabled cameras installed at permanent sites (Hoblitt and others, in press). Time-lapse camera systems, on the other hand, provide an inexpensive, easily transportable monitoring option that offers more versatility in site location. While time-lapse systems lack near-real-time capability, they provide higher image resolution and can be rapidly deployed in areas where the use of sophisticated telemetry required by the networked camera systems is not practical. This report describes the latest generation (as of 2008) time-lapse camera system used by HVO for photograph acquisition in remote and hazardous sites on Kilauea Volcano.

  10. 40 CFR 261.142 - Cost estimate.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) IDENTIFICATION AND LISTING OF HAZARDOUS WASTE Financial Requirements for Management of Excluded Hazardous Secondary... hazardous waste, and the potential cost of closing the facility as a treatment, storage, and disposal...

  11. 40 CFR 261.142 - Cost estimate.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) IDENTIFICATION AND LISTING OF HAZARDOUS WASTE Financial Requirements for Management of Excluded Hazardous Secondary... hazardous waste, and the potential cost of closing the facility as a treatment, storage, and disposal...

  12. 40 CFR 261.142 - Cost estimate.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) IDENTIFICATION AND LISTING OF HAZARDOUS WASTE Financial Requirements for Management of Excluded Hazardous Secondary... hazardous waste, and the potential cost of closing the facility as a treatment, storage, and disposal...

  13. 16 CFR 1512.16 - Requirements for reflectors.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 1512.16 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT... vehicle headlamps. The use of reflector combinations off the center plane of the bicycle (defined in...) Front reflector. The reflector or mount shall not contact the ground plane when the bicycle is resting...

  14. Foam Experiment Hardware are Flown on Microgravity Rocket MAXUS 4

    NASA Astrophysics Data System (ADS)

    Lockowandt, C.; Löth, K.; Jansson, O.; Holm, P.; Lundin, M.; Schneider, H.; Larsson, B.

    2002-01-01

    The Foam module was developed by Swedish Space Corporation and was used for performing foam experiments on the sounding rocket MAXUS 4, launched from Esrange on 29 April 2001. The development and launch of the module were financed by ESA. Four different foam experiments were performed: two aqueous foams by Dr. Michele Adler from LPMDI, University of Marne la Vallée, Paris, and two non-aqueous foams by Dr. Bengt Kronberg from YKI, Institute for Surface Chemistry, Stockholm. The foam was generated in four separate foam systems and monitored in microgravity with CCD cameras. The purpose of the experiment was to generate and study the foam in microgravity. Due to the loss of gravity there is no drainage in the foam, so the reactions in the foam can be studied without drainage. Four solutions with various stabilities were investigated. The aqueous solutions contained water, SDS (Sodium Dodecyl Sulphate) and dodecanol. The organic solutions contained ethylene glycol, a cationic surfactant (cetyl trimethyl ammonium bromide, CTAB) and decanol. Carbon dioxide was used to generate the aqueous foam and nitrogen was used to generate the organic foam. The experiment system comprised four complete, independent systems, each with an injection unit, experiment chamber and gas system. The main part of the experiment system is the experiment chamber, where the foam is generated and monitored. The chamber's inner dimensions are 50x50x50 mm and it has front and back walls made of glass. The front window is used for monitoring the foam and the back window is used for back illumination. The front glass has etched crosses on the inside as reference points. In the bottom of the cell is a glass frit and at the top is a gas in/outlet. The foam was generated by injecting the experiment liquid into a glass frit in the bottom of the experiment chamber. Simultaneously, gas was blown through the glass frit and a small amount of foam was generated. This procedure was performed at 10 bar. Then the pressure in the experiment chamber was lowered to approximately 0.1 bar to expand the foam to a dry foam that filled the experiment chamber. The foam was regenerated during flight by pressurising the cell and repeating the foam generation procedures. The module had 4 individual experiment chambers for the four different solutions. The four experiment chambers were controlled individually, with individual experiment parameters and procedures. The gas system comprises on/off valves and adjustable valves to control the pressure, the gas flow and the liquid flow during foam generation. The gas system can be divided into four sections, each section serving one experiment chamber. The sections are partly connected in two pairs with common inlet and outlet. The two pairs are each supplied by a 1 l gas bottle filled to a pressure of 40 bar and a pressure regulator lowering the pressure from 40 bar to 10 bar. Two sections are connected to the same outlet. The gas outlets from the experiment chambers are connected to two symmetrically placed outlets on the outer structure with diffusers, so as not to disturb the g-levels. The foam in each experiment chamber was monitored with one tomography camera and one overview camera (8 CCD cameras in total). The tomography camera is placed on a translation table, which makes it possible to move it in the depth direction of the experiment chamber. The video signals from the 8 CCD cameras were stored onboard with two DV recorders. 
Two video signals were also transmitted to ground for real-time evaluation and operation of the experiment. The camera signal transmitted to ground could be selected with telecommands. With the help of the tomography system it was possible to take sequences of images of the foam at different depths. These sequences of images are used to construct a 3-D model of the foam after flight. The overview camera has a fixed position and a field of view that covers the whole experiment chamber. This camera is used for monitoring the generation of foam and the overall behaviour of the foam. The experiment was performed successfully, with foam generation in all 4 experiment chambers. Foam was also regenerated during flight with telecommands. The experiment data are under evaluation.

  15. Control strategies for planetary rover motion and manipulator control

    NASA Technical Reports Server (NTRS)

    Trautwein, W.

    1973-01-01

    An unusual insect-like vehicle designed for planetary surface exploration is made the occasion for a discussion of control concepts in path selection, hazard detection, obstacle negotiation, and soil sampling. A control scheme which actively articulates the pitching motion between a single-loop front module and a dual loop rear module leads to near optimal behavior in soft soil; at the same time the vehicle's front module acts as a reliable tactile forward probe with a detection range much longer than the stopping distance. Some optimal control strategies are discussed, and the photos of a working scale model are displayed.

  16. Investigations of simulated aircraft flight through thunderstorm outflows

    NASA Technical Reports Server (NTRS)

    Frost, W.; Crosby, B.

    1978-01-01

    The effects of wind shear on aircraft flying through thunderstorm gust fronts were investigated. A computer program was developed to solve the two-dimensional, nonlinear equations of aircraft motion, including wind shear. The procedure, described and documented, accounts for the spatial and temporal variations of the flow field encountered by the aircraft. Flight paths and the control inputs necessary to maintain specified trajectories were analyzed for aircraft having the characteristics of the DC-8, B-747, augmentor-wing STOL, and DHC-6. From this analysis, an attempt was made to establish criteria for reducing the hazards associated with landing through thunderstorm gust fronts.
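
    A minimal two-dimensional point-mass model of the kind described, with the wind-shear term entering through the wind gradient encountered along the flight path, can be sketched as follows. This is a generic formulation with placeholder aircraft coefficients and an idealized gust-front wind profile, not the documented program.

      import numpy as np
      from scipy.integrate import solve_ivp

      g = 9.81
      m, S, rho = 150000.0, 511.0, 1.225         # placeholder transport-class mass / wing area
      CL, CD, T = 0.73, 0.05, 200e3              # fixed coefficients and thrust (placeholders)

      def wind_x(x):
          """Idealized gust-front headwind-to-tailwind profile along track (m/s)."""
          return 15.0 * np.tanh((x - 3000.0) / 500.0)

      def dwind_dx(x):
          return 15.0 / (500.0 * np.cosh((x - 3000.0) / 500.0) ** 2)

      def rhs(t, y):
          x, h, V, gamma = y                     # position, altitude, airspeed, air-relative path angle
          xdot = V * np.cos(gamma) + wind_x(x)   # ground speed includes the wind
          hdot = V * np.sin(gamma)
          wxdot = dwind_dx(x) * xdot             # wind rate seen by the moving aircraft
          q = 0.5 * rho * V ** 2 * S
          Vdot = (T - q * CD) / m - g * np.sin(gamma) - wxdot * np.cos(gamma)
          gammadot = ((q * CL) / m - g * np.cos(gamma) + wxdot * np.sin(gamma)) / V
          return [xdot, hdot, Vdot, gammadot]

      sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 400.0, 80.0, np.deg2rad(-3)], max_step=0.1)
      print(sol.y[2, -1], sol.y[1, -1])          # final airspeed and altitude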

  17. Visible light communication based vehicle positioning using LED street light and rolling shutter CMOS sensors

    NASA Astrophysics Data System (ADS)

    Do, Trong Hop; Yoo, Myungsik

    2018-01-01

    This paper proposes a vehicle positioning system using LED street lights and two rolling shutter CMOS sensor cameras. In this system, identification codes for the LED street lights are transmitted to camera-equipped vehicles through a visible light communication (VLC) channel. Given that the camera parameters are known, the positions of the vehicles are determined based on the geometric relationship between the coordinates of the LEDs in the images and their real world coordinates, which are obtained through the LED identification codes. The main contributions of the paper are twofold. First, the collinear arrangement of the LED street lights makes traditional camera-based positioning algorithms fail to determine the position of the vehicles. In this paper, an algorithm is proposed to fuse data received from the two cameras attached to the vehicles in order to solve the collinearity problem of the LEDs. Second, the rolling shutter mechanism of the CMOS sensors combined with the movement of the vehicles creates image artifacts that may severely degrade the positioning accuracy. This paper also proposes a method to compensate for the rolling shutter artifact, and a high positioning accuracy can be achieved even when the vehicle is moving at high speeds. The performance of the proposed positioning system corresponding to different system parameters is examined by conducting Matlab simulations. Small-scale experiments are also conducted to study the performance of the proposed algorithm in real applications.
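
    The rolling-shutter artifact compensated for in this work arises because each sensor row is exposed at a slightly different time. The small sketch below shows only the row-time bookkeeping that any such compensation relies on, with a placeholder frame rate, readout fraction and vehicle speed; it is not the paper's algorithm.

      def row_capture_time(row, frame_start_s, frame_period_s, image_height,
                           readout_fraction=1.0):
          """Approximate capture time of a sensor row under a rolling shutter:
          rows are read out sequentially over a fraction of the frame period."""
          line_time = readout_fraction * frame_period_s / image_height
          return frame_start_s + row * line_time

      # Placeholder values: 30 fps camera, 1080-row sensor, vehicle moving at 15 m/s
      frame_period = 1.0 / 30.0
      v_vehicle = 15.0
      row_top, row_bottom = 100, 900      # image rows where two LEDs are detected

      dt = row_capture_time(row_bottom, 0.0, frame_period, 1080) - \
           row_capture_time(row_top, 0.0, frame_period, 1080)
      print(dt, v_vehicle * dt)           # time skew between rows, metres travelled meanwhile

    At tens of metres per second, the distance travelled during the intra-frame readout skew is no longer negligible compared with the positioning accuracy, which is why the compensation step matters at high vehicle speeds.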

  18. Imaging fall Chinook salmon redds in the Columbia River with a dual-frequency identification sonar

    USGS Publications Warehouse

    Tiffan, K.F.; Rondorf, D.W.; Skalicky, J.J.

    2004-01-01

    We tested the efficacy of a dual-frequency identification sonar (DIDSON) for imaging and enumeration of fall Chinook salmon Oncorhynchus tshawytscha redds in a spawning area below Bonneville Dam on the Columbia River. The DIDSON uses sound to form near-video-quality images and has the advantages of imaging in zero-visibility water and possessing a greater detection range and field of view than underwater video cameras. We suspected that the large size and distinct morphology of a fall Chinook salmon redd would facilitate acoustic imaging if the DIDSON was towed near the river bottom so as to cast an acoustic shadow from the tailspill over the redd pocket. We tested this idea by observing 22 different redds with an underwater video camera, spatially referencing their locations, and then navigating to them while imaging them with the DIDSON. All 22 redds were successfully imaged with the DIDSON. We subsequently conducted redd searches along transects to compare the number of redds imaged by the DIDSON with the number observed using an underwater video camera. We counted 117 redds with the DIDSON and 81 redds with the underwater video camera. Only one of the redds observed with the underwater video camera was not also documented by the DIDSON. In spite of the DIDSON's high cost, it may serve as a useful tool for enumerating fall Chinook salmon redds in conditions that are not conducive to underwater videography.

  19. Method and apparatus for identification of hazards along an intended travel route

    NASA Technical Reports Server (NTRS)

    Kronfeld, Kevin M. (Inventor); Lapis, Mary Beth (Inventor); Walling, Karen L. (Inventor); Chackalackal, Mathew S. (Inventor)

    2003-01-01

    Targets proximate to a planned travel route are evaluated to determine how hazardous they are. A projected geometric representation of the vehicle is used to determine whether hazardous targets intrude on the travel route plan; the geometric representation of each hazardous target is projected along its motion vector to determine intrusion upon the route. The resulting intrusion assessment is presented on a user display.
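
    As an illustration of the kind of geometric intrusion test the abstract describes (not the patented method itself), the Python sketch below projects a target along its motion vector and flags it as hazardous if the projection comes within an assumed protected radius of any leg of the route plan. The waypoints, speeds, radius, and look-ahead horizon are all invented for the example.

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Shortest distance from point p to segment ab (2-D)."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def intrudes(target_pos, target_vel, route, radius, horizon=600.0, step=10.0):
    """Project the target along its motion vector and flag intrusion if its
    projected position comes within `radius` of any leg of the route plan."""
    for t in np.arange(0.0, horizon + step, step):
        p = np.asarray(target_pos) + np.asarray(target_vel) * t
        for a, b in zip(route[:-1], route[1:]):
            if point_segment_distance(p, a, b) < radius:
                return True, t
    return False, None

route = [(0.0, 0.0), (5000.0, 0.0), (9000.0, 3000.0)]   # waypoints, metres
hit, when = intrudes(target_pos=(6000.0, -2000.0), target_vel=(0.0, 8.0),
                     route=route, radius=500.0)
print(hit, when)
```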

  20. Experimental investigation of detonation waves instabilities in liquid high explosives

    NASA Astrophysics Data System (ADS)

    Sosikov, V. A.; Torunov, S. I.; Utkin, A. V.; Mochalova, V. M.; Rapota, D. Yu

    2018-01-01

    An experimental investigation of the unstable detonation front structure in mixtures of liquid high explosives (nitromethane and FEFO, bis-(2-fluoro-2,2-dinitroethyl) formal) with inert diluents (acetone, methanol, and DETA, diethylene triamine) has been carried out. Inhomogeneities were registered with a NANOGATE 4BP electro-optical camera, which captures 4 frames with an exposure time of 10 ns. According to the experimental results, the detonation front in the nitromethane-acetone mixture is unstable. The pulsations on the detonation front do not form a spatially periodic structure, and their dimensions vary by a factor of several; the mean longitudinal size of a pulsation is about 500 μm at 20 wt% acetone concentration. This means that the typical cell size equals the reaction zone width. The same cellular front structure was registered in the 70/30 FEFO-methanol mixture. A second kind of instability, failure waves, was observed in neat nitromethane at the free surface. In this case the loss of stability results in turbulent flow, which is clearly detected in the images obtained. Adding a small amount of DETA (0.5 wt%) eliminates the failure waves and stabilizes the flow. The effect is caused by the fact that DETA, a sensitizer for nitromethane, sharply accelerates the initial rate of the chemical reaction.

  1. The Role of Counterintelligence in the European Theater of Operations During World War II

    DTIC Science & Technology

    1993-06-04

    revolvers, Minox cameras, portable typewriters, 48 fingerprint cameras, latent fingerprint kits, handcuffs, and listening and recording devices.13 This...Comments from the detachments indicated that the fingerprint equipment, and listening and recording devices were of little use. However, the revolvers...40-49. 138 Moulage* 2 Fingerprinting 2 Latent Fingerprinting 3 System of Identification 1 Codes and Ciphers 1 Handwriting Comparison 2 Documentary

  2. 75 FR 73972 - Hazardous Waste Management System; Identification and Listing of Hazardous Waste; Removal of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-30

    ... adverse comment by October 25, 2010, the direct final rule would not take effect and we would publish a.... Lists of Subjects in 40 CFR Part 261 Environmental Protection, Hazardous waste, Recycling, Reporting and...

  3. Camera-Model Identification Using Markovian Transition Probability Matrix

    NASA Astrophysics Data System (ADS)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

    Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components of JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four directional Markov processes applied to the difference JPEG 2-D arrays are used to identify the statistical differences caused by the image-formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are used directly as features for classification. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of the proposed statistical model is demonstrated by large-scale experimental results.
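
    A minimal sketch of the feature construction described above is given below in Python, assuming a generic 2-D array in place of the difference JPEG 2-D arrays of the Y and Cb components: compute a directional difference array, threshold it to [-T, T], and form the row-normalized transition-probability matrix whose (2T+1)^2 entries become SVM features. The threshold T = 4 and the horizontal direction are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def transition_probability_features(arr, T=4):
    """First-order horizontal Markov transition-probability matrix of a
    thresholded difference array; one of several directions described."""
    d = arr[:, :-1].astype(int) - arr[:, 1:].astype(int)   # horizontal difference array
    d = np.clip(d, -T, T)                                  # threshold to [-T, T]
    prev, nxt = d[:, :-1].ravel() + T, d[:, 1:].ravel() + T
    counts = np.zeros((2 * T + 1, 2 * T + 1))
    np.add.at(counts, (prev, nxt), 1)                      # co-occurrence counts
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
    return probs.ravel()    # (2T+1)^2 features for the SVM

features = transition_probability_features(np.random.randint(0, 255, (64, 64)))
print(features.shape)   # (81,) with T = 4
```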

  4. Exploring the effects of driving experience on hazard awareness and risk perception via real-time hazard identification, hazard classification, and rating tasks.

    PubMed

    Borowsky, Avinoam; Oron-Gilad, Tal

    2013-10-01

    This study investigated the effects of driving experience on hazard awareness and risk perception skills. These topics have previously been investigated separately; here a novel approach is taken in which hazard awareness and risk perception are examined concurrently. Young, newly qualified drivers, experienced drivers, and a group of commercial drivers (taxi drivers) performed three consecutive tasks: (1) they observed 10 short movies of real-world driving situations and were asked to press a button each time they identified a hazardous situation; (2) they observed one of three possible sub-sets of 8 movies (out of the 10 they had seen earlier) for a second time and were asked to categorize them into an arbitrary number of clusters according to the similarity of their hazardous situations; and (3) they observed the same sub-set a third time and, following each movie, were asked to rate its level of hazardousness. The first task is a real-time identification task, while the other two are performed in hindsight; during the first task, participants' eye movements were recorded. Results showed that taxi drivers were more sensitive to hidden hazards than the other driver groups and that young-novice drivers were the least sensitive. Young-novice drivers also relied heavily on materialized hazards in their categorization structure. In addition, it emerged that risk perception was derived from two major components: the likelihood of a crash and the severity of its outcome. Yet the outcome was rarely considered under time pressure (i.e., in the real-time hazard identification task). In hindsight, when drivers were given the opportunity to rate the movies' hazardousness more freely (rating task), they considered both components; otherwise, in the categorization task, they usually chose the severity of the crash outcome as their dominant criterion. Theoretical and practical implications are discussed.

  5. Experimental and Modeling Studies of Crush, Puncture, and Perforation Scenarios in the Steven Impact Test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vandersall, K S; Chidester, S K; Forbes, J W

    2002-06-28

    The Steven test and associated modeling have greatly increased the fundamental knowledge needed for practical predictions of impact safety hazards for confined and unconfined explosive charges. Building on a database of initial work, experimental and modeling studies of crush, puncture, and perforation scenarios were conducted using the Steven impact test. The descriptions of crush, puncture, and perforation arose from safety scenarios represented by projectile designs that either "crush" the energetic material, "puncture" it with a pinpoint nose, or "perforate" the front cover with a transportation hook. As desired, these scenarios offer different aspects of the known mechanisms that control ignition: friction, shear, and strain. Studies of aged and previously damaged HMX-based high explosives included the use of embedded carbon foil and carbon resistor gauges, high-speed cameras, and blast wave gauges to determine the pressure histories, the time required for an explosive reaction, and the relative violence of those reactions, respectively. Various ignition processes were modeled as the initial reaction rate expression in the Ignition and Growth reaction rate equations. Good agreement with measured threshold velocities, pressure histories, and times to reaction was calculated for LX-04 impacted by several projectile geometries, using a compression-dependent ignition term and an elastic-plastic model with a reasonable yield strength for impact strain rates.

  6. Sedimentary structures formed under water surface waves: examples from a sediment-laden flash flood observed by remote camera

    NASA Astrophysics Data System (ADS)

    Froude, Melanie; Alexander, Jan; Cole, Paul; Barclay, Jenni

    2014-05-01

    On 13-14 October 2012, Tropical Storm Rafael triggered sediment-laden flash floods in the Belham Valley on Montserrat, West Indies. Rainfall was continuous for ~38 hours and intensity peaked at 48 mm/hr. Flow was strongly unsteady and turbulent, with sediment concentrations varying up to hyperconcentrated. Time-lapse images captured at >1 frame per second by a remote camera overlooking a surveyed valley section show the development of trains of water surface waves at multiple channel locations during different flow stages. Waves grew and diminished in height and either remained stationary or migrated upstream. Trains of waves persisted for <5 minutes, until a single wave broke, sometimes initiating the breaking of adjacent waves within the train. Channel-wide surges (bores) propagating downstream with distinct turbulent flow fronts were observed at irregular intervals during and up to 7 hours after peak stage. These bores are mechanically similar to breaking-front tidal bores and arid flood bores, and resulted in a sudden increase in flow depth and velocity. When a bore front came into close proximity (within ~10 m) upstream of a train of water surface waves, the waves appeared to break simultaneously, generating a localised surge of water upstream that was covered by the bore travelling downstream. Trains in which waves did not break during the passage of a bore were temporarily reduced in height. In both cases, water surface waves reformed immediately after the surge in the same location. Deposits from the event were examined in <4 m deep trenches ~0.5 km downstream of the remote camera. These contained laterally extensive lenticular and sheet-like units comprising varying admixtures of sand and gravel that are attributed to antidunes, with associated transitions from upper-stage plane beds. Some of the structures are organised in concave-upward sequences which contain downflow shifts between foreset and backset laminae, interpreted as trough fills from chute-and-pools or the breaking of water surface waves. At least 90% of the deposit is interpreted as being of upper-flow-regime origin. The sequence, geometry and lamina-scale texture of the sedimentary structures will be discussed with reference to remote camera images of rapidly varying, unsteady and pulsatory flow behaviour.

  7. Variable Shadow Screens for Imaging Optical Devices

    NASA Technical Reports Server (NTRS)

    Lu, Ed; Chretien, Jean L.

    2004-01-01

    Variable shadow screens have been proposed for reducing the apparent brightnesses of very bright light sources relative to other sources within the fields of view of diverse imaging optical devices, including video and film cameras and optical devices for imaging directly into the human eye. In other words, variable shadow screens would increase the effective dynamic ranges of such devices. Traditionally, imaging sensors are protected against excessive brightness by use of dark filters and/or reduction of iris diameters. These traditional means do not increase dynamic range; they reduce the ability to view or image dimmer features of an image because they reduce the brightness of all parts of an image by the same factor. On the other hand, a variable shadow screen would darken only the excessively bright parts of an image. For example, dim objects in a field of view that included the setting Sun or bright headlights could be seen more readily in a picture taken through a variable shadow screen than in a picture of the same scene taken through a dark filter or a narrowed iris. The figure depicts one of many potential variations of the basic concept of the variable shadow screen. The shadow screen would be a normally transparent liquid-crystal matrix placed in front of a focal-plane array of photodetectors in a charge-coupled-device video camera. The shadow screen would be placed far enough from the focal plane so as not to disrupt the focal-plane image to an unacceptable degree, yet close enough so that the out-of-focus shadows cast by the screen would still be effective in darkening the brightest parts of the image. The image detected by the photodetector array itself would be used as feedback to drive the variable shadow screen: The video output of the camera would be processed by suitable analog and/or digital electronic circuitry to generate a negative partial version of the image to be impressed on the shadow screen. The parts of the shadow screen in front of those parts of the image with brightness below a specified threshold would be left transparent; the parts of the shadow screen in front of those parts of the image where the brightness exceeded the threshold would be darkened by an amount that would increase with the excess above the threshold.
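
    A hedged sketch of the feedback idea, in Python: from a normalized luminance image, leave pixels below a brightness threshold untouched and attenuate pixels above it by an amount that grows with the excess. The threshold and gain are arbitrary, and the real device would apply the mask optically (and iteratively) in front of the sensor rather than numerically to an already-captured frame.

```python
import numpy as np

def shadow_mask(luminance, threshold=0.8, gain=2.0):
    """Per-pixel transmission for a variable shadow screen: fully transparent
    (1.0) below the brightness threshold, increasingly dark above it."""
    excess = np.clip(luminance - threshold, 0.0, None)
    transmission = 1.0 / (1.0 + gain * excess)    # darkening grows with the excess
    return transmission

frame = np.clip(np.random.rand(480, 640) * 1.5, 0.0, 1.5)   # stand-in luminance map
screen = shadow_mask(frame)
attenuated = frame * screen      # what the sensor would see through the screen
print(float(frame.max()), float(attenuated.max()))
```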

  8. Automated gait analysis in the open-field test for laboratory mice.

    PubMed

    Leroy, Toon; Silva, Mitchell; D'Hooge, Rudi; Aerts, Jean-Marie; Berckmans, Daniel

    2009-02-01

    In this article, an automated and accurate mouse observation method, based on a conventional test for motor function evaluation, is outlined. The proposed measurement technique was integrated in a regular open-field test, where the trajectory and locomotion of a free-moving mouse were measured simultaneously. The system setup consisted of a transparent cage and a camera placed below it with its lens pointing upward, allowing for images to be captured from underneath the cage while the mouse was walking on the transparent cage floor. Thus, additional information was obtained about the position of the limbs of the mice for gait reconstruction. In a first step, the camera was calibrated as soon as it was fixed in place. A linear calibration factor, relating distances in image coordinates to real-world dimensions, was determined. In a second step, the mouse was located and its body contour segmented from the image by subtracting a previously taken "background" image of the empty cage from the camera image. In a third step, the movement of the mouse was analyzed and its speed estimated from its location in the past few images. If the speed was above a 1-sec threshold, the mouse was recognized to be running, and the image was further processed for footprint recognition. In a fourth step, color filtering was applied within the recovered mouse region to measure the position of the mouse's paws, which were visible in the image as small pink spots. Paws that were detected at the same location in a number of subsequent images were kept as footprints-that is, paws in contact with the cage floor. The footprints were classified by their position relative to the mouse's outline as corresponding to the front left or right paw or the hind left or right paw. Finally, eight parameters were calculated from the footprint pattern to describe the locomotion of the mouse: right/left overlap, front/hind base, right/left front limb stride, and right/left hind limb stride. As an application, the system was tested using normal mice and mice displaying pentobarbital-induced ataxia. The footprint parameters measured using the proposed system showed differences of 10% to 20% between normal and ataxic mice.
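
    The image-processing steps lend themselves to a short sketch. The Python fragment below, using only NumPy, illustrates two of them under assumed thresholds: background subtraction to segment the mouse, and a simple colour rule for the pink paw pixels inside the mouse region. The threshold values and the toy frames are invented; the study's calibrated pipeline is more involved.

```python
import numpy as np

def segment_mouse(frame, background, diff_thresh=30):
    """Foreground mask via background subtraction (grayscale frames)."""
    return np.abs(frame.astype(int) - background.astype(int)) > diff_thresh

def detect_paw_pixels(frame_rgb, mouse_mask):
    """Pink paw pixels inside the mouse region: red clearly above green/blue.
    Thresholds are illustrative, not those calibrated in the study."""
    r, g, b = (frame_rgb[..., i].astype(int) for i in range(3))
    pink = (r > 150) & (r - g > 40) & (r - b > 30)
    return pink & mouse_mask

background = np.full((240, 320), 200, dtype=np.uint8)               # empty-cage image
frame_gray = background.copy(); frame_gray[100:140, 150:200] = 80   # toy mouse blob
frame_rgb = np.stack([frame_gray] * 3, axis=-1)
mask = segment_mouse(frame_gray, background)
print(int(mask.sum()), int(detect_paw_pixels(frame_rgb, mask).sum()))
```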

  9. Black and White Children's Racial Identification and Preference

    ERIC Educational Resources Information Center

    Mahan, Juneau

    1976-01-01

    Black and white children ranging in age from three to seven were asked to answer questions pertaining to a black doll and a white doll placed in front of them in order to partially replicate Clark's and Clark's 1941 study on racial identification and preference in children, including a white group of children. It was found that black and white…

  10. Automated Purgatoid Identification: Final Report

    NASA Technical Reports Server (NTRS)

    Wood, Steven

    2011-01-01

    Driving on Mars is hazardous: technical problems and unforeseen natural hazards can end a mission quickly at the worst, or result in long delays at best. This project is focused on helping to mitigate hazards posed to rovers by purgatoids: small (less than 1 m high, less than 10 m wide), ripple-like eolian bedforms commonly found scattered across the Meridiani Planum region of Mars. Due to the poorly consolidated nature of purgatoids and multiple past episodes of rovers getting stuck in them, identification and avoidance of these eolian bedforms is an important feature of rover path planning (NASA, 2011).

  11. Thermal Inertia of Rocks and Rock Populations and Implications for Landing Hazards on Mars

    NASA Technical Reports Server (NTRS)

    Golombek, M. P.; Jakosky, B. M.; Mellon, M. T.

    2001-01-01

    Rocks represent an obvious potential hazard to a landing spacecraft. They also represent an impediment to rover travel and objects of prime scientific interest. Although Mars Orbiter Camera (MOC) images are of high enough resolution to distinguish the largest rocks (an extremely small population several meters in diameter or larger), traditionally the abundance and distribution of rocks on Mars have been inferred from thermal inertia and radar measurements, our meager ground-truth sampling of landing sites, and terrestrial rock populations. In this abstract, we explore the effective thermal inertia of rocks and rock populations, interpret the results in terms of abundances and populations of potentially hazardous rocks, and conclude with interpretations of rock hazards on the Martian surface and in extremely high thermal inertia areas.

  12. Pore invasion dynamics during fluid front displacement in porous media determine functional pore size distribution and phase entrapment

    NASA Astrophysics Data System (ADS)

    Moebius, F.; Or, D.

    2012-12-01

    Dynamics of fluid fronts in porous media shape transport properties of the unsaturated zone and affect the management of petroleum reservoirs and their storage properties. What appears macroscopically as smooth and continuous motion of a displacement fluid front may involve numerous rapid interfacial jumps, often resembling avalanches of invasion events. Direct observations using a high-speed camera and pressure sensors in sintered-glass micro-models provide new insights into the influence of flow rate, pore size, and gravity on invasion events and on burst-size distribution. Fundamental differences emerge between geometrically defined pores and "functional" pores invaded during a single burst (invasion event). The waiting-time distribution of individual invasion events and the decay times of inertial oscillations (following a rapid interfacial jump) are characteristic of different displacement regimes. An invasion percolation model with gradients, including the role of inertia, provides a framework for linking flow regimes with invasion sequences and phase entrapment. Model results were compared with measurements and with early studies of invasion burst sizes and waiting-time distributions during slow drainage processes by Måløy et al. [1992]. The study provides new insight into the discrete invasion events and their weak links with geometrically deduced pore geometry. The results highlight factors controlling pore invasion events that exert a strong influence on macroscopic phenomena such as front morphology and residual phase entrapment, shaping hydraulic properties after the passage of a fluid front.
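
    For readers unfamiliar with the modelling framework mentioned above, the following Python sketch implements plain invasion percolation on a grid of random entry thresholds using a priority queue: at each step the invading phase enters the accessible pore with the lowest threshold. The gradient and inertial effects that the study adds to this baseline are not included, and the grid size and thresholds are arbitrary.

```python
import heapq
import numpy as np

def invasion_percolation(entry_thresholds, n_steps):
    """Minimal invasion-percolation sketch on a 2-D grid: invade from the left
    edge, always entering the accessible pore with the lowest entry threshold."""
    rows, cols = entry_thresholds.shape
    invaded = np.zeros((rows, cols), dtype=bool)
    frontier = [(entry_thresholds[r, 0], r, 0) for r in range(rows)]  # left-edge inlet
    heapq.heapify(frontier)
    order = []
    while frontier and len(order) < n_steps:
        thr, r, c = heapq.heappop(frontier)
        if invaded[r, c]:
            continue
        invaded[r, c] = True
        order.append((r, c, thr))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and not invaded[rr, cc]:
                heapq.heappush(frontier, (entry_thresholds[rr, cc], rr, cc))
    return invaded, order

rng = np.random.default_rng(0)
invaded, order = invasion_percolation(rng.random((40, 40)), n_steps=300)
print(int(invaded.sum()))
```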

  13. Electro-optical detector for use in a wide mass range mass spectrometer

    NASA Technical Reports Server (NTRS)

    Giffin, Charles E. (Inventor)

    1976-01-01

    An electro-optical detector is disclosed for use in a wide mass range mass spectrometer (MS); in the latter, the focal plane is at or very near the exit end of the magnetic analyzer, so that a strong magnetic field on the order of 1000 G or more is present at the focal plane location. The novel detector includes a microchannel electron multiplier array (MCA) which is positioned at the focal plane to convert the ion beams focused there by the MS into corresponding electron beams, which are then accelerated to form visual images on a conductive phosphored surface. These visual images are then converted into images on the target of a vidicon camera or the like for electronic processing. Due to the strong magnetic field at the focal plane, in one embodiment of the invention the MCA, with front and back parallel ends, is placed so that its front end forms an angle of not less than several degrees, preferably on the order of 10°-20°, with respect to the focal plane, with the center line of the front end preferably located in the focal plane. In another embodiment the MCA is wedge-shaped, with its back end at an angle of about 10°-20° with respect to the front end. In this embodiment the MCA is placed so that its front end is located at the focal plane.

  14. Vision-guided landing of an autonomous helicopter in hazardous terrain

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew E.; Montgomery, Jim

    2005-01-01

    Future robotic space missions will employ a precision soft-landing capability that will enable exploration of previously inaccessible sites that have strong scientific significance. To enable this capability, a fully autonomous onboard system that identifies and avoids hazardous features such as steep slopes and large rocks is required. Such a system will also provide greater functionality in unstructured terrain to unmanned aerial vehicles. This paper describes an algorithm for landing hazard avoidance based on images from a single moving camera. The core of the algorithm is an efficient application of structure from motion to generate a dense elevation map of the landing area. Hazards are then detected in this map and a safe landing site is selected. The algorithm has been implemented on an autonomous helicopter testbed and demonstrated four times resulting in the first autonomous landing of an unmanned helicopter in unknown and hazardous terrain.
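
    A much-simplified sketch of the hazard-detection step (not the structure-from-motion front end) is shown below in Python: given a dense elevation map, compute the local slope and a roughness proxy and keep only cells below assumed limits. The slope and roughness thresholds, cell size, and synthetic terrain are illustrative only.

```python
import numpy as np

def safe_landing_map(elev, cell_size, max_slope_deg=10.0, max_roughness=0.2):
    """Flag cells of an elevation map whose local slope and roughness are
    below the hazard limits (simple proxies for slope/rock checks)."""
    gy, gx = np.gradient(elev, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
    # roughness: deviation from the local 3x3 mean elevation
    pad = np.pad(elev, 1, mode='edge')
    local_mean = sum(pad[i:i + elev.shape[0], j:j + elev.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0
    roughness = np.abs(elev - local_mean)
    return (slope_deg < max_slope_deg) & (roughness < max_roughness)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 100)
elev = 0.05 * np.add.outer(x, x) + rng.normal(0.0, 0.01, (100, 100))  # gentle slope + noise
elev[40:43, 60:63] += 0.5                                             # a synthetic boulder
safe = safe_landing_map(elev, cell_size=0.5)
print(f"{safe.mean():.0%} of cells pass the hazard checks")
```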

  15. High-speed imaging system for observation of discharge phenomena

    NASA Astrophysics Data System (ADS)

    Tanabe, R.; Kusano, H.; Ito, Y.

    2008-11-01

    A thin metal electrode tip instantly changes its shape into a sphere or a needlelike shape in a single electrical discharge of high current. These changes occur within several hundred microseconds. To observe these high-speed phenomena in a single discharge, an imaging system using a high-speed video camera and a high repetition rate pulse laser was constructed. A nanosecond laser, the wavelength of which was 532 nm, was used as the illuminating source of a newly developed high-speed video camera, HPV-1. The time resolution of our system was determined by the laser pulse width and was about 80 nanoseconds. The system can take one hundred pictures at 16- or 64-microsecond intervals in a single discharge event. A band-pass filter at 532 nm was placed in front of the camera to block the emission of the discharge arc at other wavelengths. Therefore, clear images of the electrode were recorded even during the discharge. If the laser was not used, only images of plasma during discharge and thermal radiation from the electrode after discharge were observed. These results demonstrate that the combination of a high repetition rate and a short pulse laser with a high speed video camera provides a unique and powerful method for high speed imaging.

  16. CdTe Based Hard X-ray Imager Technology For Space Borne Missions

    NASA Astrophysics Data System (ADS)

    Limousin, Olivier; Delagnes, E.; Laurent, P.; Lugiez, F.; Gevin, O.; Meuris, A.

    2009-01-01

    CEA Saclay has recently developed an innovative technology for CdTe-based pixelated hard X-ray imagers with high spectral performance and high timing resolution, allowing efficient background rejection when the camera is coupled to an active veto shield. This development has been carried out in an R&D program supported by CNES (French National Space Agency) and has been optimized towards the Simbol-X mission requirements. In the latter telescope, the hard X-ray imager is 64 cm² and is equipped with 625 µm pitch pixels (16384 independent channels) operating at -40°C in the range of 4 to 80 keV. The camera we demonstrate in this paper consists of a mosaic of 64 independent cameras, divided into 8 independent sectors. Each elementary detection unit, called Caliste, is the hybridization of a 256-pixel Cadmium Telluride (CdTe) detector with full-custom front-end electronics into a unique 1 cm² component, juxtaposable on its four sides. Promising results have recently been obtained from the first micro-camera prototypes, called Caliste 64, and will be presented to illustrate the capabilities of the device as well as the expected performance of an instrument based on it. The modular design of Caliste makes it possible to consider extended developments toward an IXO-type mission, according to its specific scientific requirements.

  17. Compact touchless fingerprint reader based on digital variable-focus liquid lens

    NASA Astrophysics Data System (ADS)

    Tsai, C. W.; Wang, P. J.; Yeh, J. A.

    2014-09-01

    Identity certification in the cyberworld has always been troublesome when critical information and financial transactions must be processed. Biometric identification is the most effective measure to circumvent identity issues on mobile devices. Due to their bulky and pricey optical design, conventional optical fingerprint readers have been discarded for mobile applications. In this paper, a digital variable-focus liquid lens was adopted to capture a floating finger via fast focus-plane scanning: simply placing a finger in front of a camera completes the fingerprint ID process. The prototyped fingerprint reader scans multiple focal planes from 30 mm to 15 mm in 0.2 second. From the multiple images at various focuses, one image is chosen for extraction of the fingerprint minutiae used for identity certification. In the optical design, a digital liquid lens atop a webcam with a fixed-focus lens module fast-scans the floating finger at preset focus planes. The distance, rolling angle, and pitching angle of the finger are stored as crucial parameters for the matching of fingerprint minutiae. This compact touchless fingerprint reader could be packed into a minute size of 9.8 × 9.8 × 5 mm once the optical design and the multiple focus-plane scan function are optimized.
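
    Selecting the best-focused image from a focal-plane scan can be illustrated with a simple focus measure; the Python sketch below scores each image in a stack by its mean squared gradient magnitude and keeps the sharpest one. This is a generic technique, not necessarily the selection criterion used in the prototype.

```python
import numpy as np

def sharpness(img):
    """Simple focus measure: mean squared gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx**2 + gy**2))

def best_focused(image_stack):
    """Pick the image with the highest focus measure from a focal-plane scan."""
    scores = [sharpness(img) for img in image_stack]
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(2)
sharp = rng.integers(0, 255, (200, 200)).astype(float)                  # high-frequency content
blurred = np.repeat(np.repeat(sharp[::4, ::4], 4, axis=0), 4, axis=1)   # crude defocus stand-in
idx, scores = best_focused([blurred, sharp, blurred])
print(idx, [round(s, 1) for s in scores])   # index 1 should score highest
```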

  18. Inverse algorithms for 2D shallow water equations in the presence of wet-dry fronts: Application to flood plain dynamics

    NASA Astrophysics Data System (ADS)

    Monnier, J.; Couderc, F.; Dartus, D.; Larnier, K.; Madec, R.; Vila, J.-P.

    2016-11-01

    The 2D shallow water equations adequately model some geophysical flows with wet-dry fronts (e.g. flood plain or tidal flows); nevertheless, deriving accurate, robust and conservative numerical schemes for dynamic wet-dry fronts over complex topographies remains a challenge. Furthermore, for these flows the data are generally complex, multi-scale and uncertain. Robust variational inverse algorithms, providing sensitivity maps and data assimilation processes, may contribute to breakthroughs in modelling wet-dry front dynamics. The present study aims at deriving an accurate, positive and stable finite volume scheme in the presence of dynamic wet-dry fronts, together with corresponding inverse computational algorithms (variational approach). The schemes and algorithms are assessed on classical and original benchmarks plus a real flood-plain test case (Lèze river, France). Original sensitivity maps with respect to the (friction, topography) pair are computed and discussed. The identification of inflow discharges (time series) and friction coefficients (spatially distributed parameters) demonstrates the algorithms' efficiency.
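
    As a toy illustration of variational parameter identification (far simpler than the paper's finite-volume shallow-water setting), the Python sketch below recovers a single friction-like coefficient of a scalar decay model by gradient descent on a least-squares misfit, with the gradient approximated by finite differences. The forward model, learning rate, and synthetic observations are all assumptions made for the example.

```python
import numpy as np

def forward(k, h0=2.0, dt=0.1, n=50):
    """Toy forward model: water depth decaying at a friction-like rate k."""
    h = np.empty(n); h[0] = h0
    for i in range(1, n):
        h[i] = h[i - 1] - dt * k * h[i - 1]
    return h

def misfit(k, obs):
    """Least-squares misfit between model output and observations."""
    return 0.5 * np.sum((forward(k) - obs) ** 2)

def identify_k(obs, k0=0.1, lr=1e-3, iters=200, eps=1e-6):
    """Gradient-descent identification with a finite-difference gradient."""
    k = k0
    for _ in range(iters):
        grad = (misfit(k + eps, obs) - misfit(k - eps, obs)) / (2 * eps)
        k -= lr * grad
    return k

obs = forward(0.35) + np.random.default_rng(3).normal(0.0, 0.01, 50)  # synthetic data
print(round(identify_k(obs), 3))   # should recover a value near 0.35
```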

  19. Preliminary hazards analysis -- vitrification process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coordes, D.; Ruggieri, M.; Russell, J.

    1994-06-01

    This paper presents a Preliminary Hazards Analysis (PHA) for mixed waste vitrification by joule heating. The purpose of performing a PHA is to establish an initial hazard categorization for a DOE nuclear facility and to identify those processes and structures which may have an impact on or be important to safety. The PHA is typically performed during, and provides input to, project conceptual design. The PHA is then followed by a Preliminary Safety Analysis Report (PSAR) performed during Title 1 and 2 design. The PSAR then leads to the Final Safety Analysis Report, performed during the facility's construction and testing, which should be completed before routine operation of the facility commences. This PHA addresses the first four chapters of the safety analysis process, in accordance with the requirements of DOE Safety Guidelines in SG 830.110. The hazards associated with vitrification processes are evaluated using standard safety analysis methods, which include: identification of credible potential hazardous energy sources; identification of preventative features of the facility or system; identification of mitigative features; and analyses of credible hazards. Maximal facility inventories of radioactive and hazardous materials are postulated to evaluate worst-case accident consequences. These inventories were based on DOE-STD-1027-92 guidance and the surrogate waste streams defined by Mayberry et al. Radiological assessments indicate that a facility, depending on the radioactive material inventory, may be an exempt, Category 3, or Category 2 facility. The calculated consequences would result in no significant impact to offsite personnel or the environment. Hazardous materials assessment indicates that a Mixed Waste Vitrification facility will be a Low Hazard facility having minimal impacts to offsite personnel and the environment.

  20. An integrated port camera and display system for laparoscopy.

    PubMed

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.
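
    The comparison metrics named above can be approximated with standard definitions; the Python sketch below computes global brightness, contrast, colorfulness (the common Hasler-Suesstrunk measure), and a gradient-based sharpness score for an RGB frame. These formulas are illustrative stand-ins and are not claimed to be the ones used in the study's evaluation.

```python
import numpy as np

def quality_metrics(rgb):
    """Simple global image metrics: brightness (mean luma), contrast (luma
    standard deviation), colorfulness (Hasler-Suesstrunk), and sharpness
    (mean gradient magnitude). Definitions are illustrative only."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    gy, gx = np.gradient(luma)
    rg, yb = r - g, 0.5 * (r + g) - b
    colorfulness = np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())
    return {"brightness": luma.mean(),
            "contrast": luma.std(),
            "colorfulness": colorfulness,
            "sharpness": np.hypot(gx, gy).mean()}

frame = np.random.default_rng(4).integers(0, 255, (480, 640, 3))
print({k: round(float(v), 2) for k, v in quality_metrics(frame).items()})
```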
