An Automatic Portable Telecine Camera.
1978-08-01
five television frames to achieve synchronous operation, that is about 0.2 second. 6.3 Video recorder noise immunity The synchronisation pulse separator...display is filmed by a modified 16 mm cine camera driven by a control unit in which the camera supply voltage is derived from the field synchronisation...pulses of the video signal. Automatic synchronisation of the camera mechanism is achieved over a wide range of television field frequencies and the
Rotatable prism for pan and tilt
NASA Technical Reports Server (NTRS)
Ball, W. B.
1980-01-01
Compact, inexpensive, motor-driven prisms change field of view of TV camera. Camera and prism rotate about lens axis to produce pan effect. Rotating prism around axis parallel to lens produces tilt. Size of drive unit and required clearance are little more than size of camera.
Development of biostereometric experiments. [stereometric camera system
NASA Technical Reports Server (NTRS)
Herron, R. E.
1978-01-01
The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.
Patterned Video Sensors For Low Vision
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1996-01-01
Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns are proposed to partly compensate for some visual defects. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects like retinitis pigmentosa.
360 deg Camera Head for Unmanned Sea Surface Vehicles
NASA Technical Reports Server (NTRS)
Townsend, Julie A.; Kulczycki, Eric A.; Willson, Reginald G.; Huntsberger, Terrance L.; Garrett, Michael S.; Trebi-Ollennu, Ashitey; Bergh, Charles F.
2012-01-01
The 360 camera head consists of a set of six color cameras arranged in a circular pattern such that their overlapping fields of view give a full 360 view of the immediate surroundings. The cameras are enclosed in a watertight container along with support electronics and a power distribution system. Each camera views the world through a watertight porthole. To prevent overheating or condensation in extreme weather conditions, the watertight container is also equipped with an electrical cooling unit and a pair of internal fans for circulation.
NASA Astrophysics Data System (ADS)
Chatterjee, Abhijit; Verma, Anurag
2016-05-01
The Advanced Wide Field Sensor (AWiFS) camera caters to the high temporal resolution requirement of the Resourcesat-2A mission, with a revisit period of 5 days. The AWiFS camera consists of four spectral bands, three in the visible and near IR and one in the short wave infrared. The imaging concept in the VNIR bands is based on push broom scanning that uses a linear array silicon charge coupled device (CCD) based Focal Plane Array (FPA). An On-Board Calibration unit for these CCD-based FPAs is used to monitor any degradation in the FPA during the entire mission life. Four LEDs are operated in constant current mode and 16 different light intensity levels are generated by electronically changing the exposure of the CCD throughout the calibration cycle. This paper describes the experimental setup and characterization results of various flight model visible LEDs (λP=650nm) for the development of the On-Board Calibration unit of the Advanced Wide Field Sensor (AWiFS) camera of RESOURCESAT-2A. Various LED configurations have been studied to cover the dynamic range of the 6000-pixel silicon CCD based focal plane array from 20% to 60% of saturation during the night pass of the satellite, in order to identify degradation of detector elements. The paper also compares simulation and experimental results of the CCD output profile at different LED combinations in constant current mode.
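The calibration scheme described above lends itself to a small worked example: with the LED radiance held constant, the CCD signal grows roughly linearly with exposure time, so the 16 levels can be placed anywhere in the 20-60% saturation band by choosing exposures. A minimal Python sketch under assumed numbers (the per-millisecond signal rate is illustrative, not a Resourcesat-2A value):

```python
import numpy as np

# Hypothetical per-pixel signal rate under constant-current LED illumination,
# expressed as a fraction of CCD saturation per millisecond (assumed value).
signal_rate_per_ms = 0.075

# Sixteen exposure settings chosen so the signal spans roughly
# 20% to 60% of saturation, as the abstract specifies.
exposures_ms = np.linspace(0.20, 0.60, 16) / signal_rate_per_ms

for level, t in enumerate(exposures_ms, start=1):
    frac = signal_rate_per_ms * t
    print(f"level {level:2d}: exposure {t:5.2f} ms -> {frac:4.0%} of saturation")
```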
[Environmental Education Units.] Photography for Kids. Vacant Lot Studies. Contour Mapping.
ERIC Educational Resources Information Center
Minneapolis Independent School District 275, Minn.
Techniques suitable for use with elementary school students when studying field environment are described in these four booklets. Techniques for photography (construction of simple cameras, printing on blueprint and photographic paper, use of simple commercial cameras, development of exposed film); for measuring microclimatic factors (temperature,…
NASA Astrophysics Data System (ADS)
de Villiers, Jason P.; Bachoo, Asheer K.; Nicolls, Fred C.; le Roux, Francois P. J.
2011-05-01
Tracking targets in a panoramic image is in many senses the inverse problem of tracking targets with a narrow field of view camera on a pan-tilt pedestal. For a narrow field of view camera tracking a moving target, the object is constant and the background is changing. A panoramic camera is able to model the entire scene, or background, and the areas it cannot model well are the potential targets, which typically subtend far fewer pixels in the panoramic view than in the narrow field of view. The outputs of an outward-staring array of calibrated machine vision cameras are stitched into a single omnidirectional panorama and used to observe False Bay near Simon's Town, South Africa. A ground truth data-set was created by geo-aligning the camera array and placing a differential global positioning system receiver on a small target boat, thus allowing its position in the array's field of view to be determined. Common tracking techniques including level-sets, Kalman filters and particle filters were implemented to run on the central processing unit of the tracking computer. Image enhancement techniques including multi-scale tone mapping, interpolated local histogram equalisation and several sharpening techniques were implemented on the graphics processing unit. An objective measurement of each tracking algorithm's robustness in the presence of sea glint, low-contrast visibility and sea clutter such as whitecaps is performed on the raw recorded video data. These results are then compared to those obtained with the enhanced video data.
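Among the tracking techniques the abstract lists, the Kalman filter is the simplest to show concretely. Below is a minimal constant-velocity Kalman tracker in Python/NumPy; this is a generic textbook sketch, not the authors' implementation, and the noise covariances and example centroid are assumed values:

```python
import numpy as np

dt = 1.0  # frame interval
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # we observe pixel position only
Q = np.eye(4) * 0.1    # process noise covariance (assumed)
R = np.eye(2) * 4.0    # measurement noise, pixels^2 (assumed)

x = np.zeros(4)        # state: [col, row, v_col, v_row]
P = np.eye(4) * 100.0  # initial uncertainty

def kalman_step(x, P, z):
    """One predict/update cycle given a detected target centroid z = (col, row)."""
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = kalman_step(x, P, np.array([512.0, 384.0]))  # illustrative detection
```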
A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i
Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.
2015-01-01
We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity.
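The abstract notes that custom Python scripts control the camera and GPS unit and handle data management; the scripts themselves are not reproduced there. A minimal time-lapse loop in their spirit might look like the following, assuming the commonly used picamera library and GPS-disciplined system time; the interval and storage path are illustrative:

```python
#!/usr/bin/env python3
# Minimal time-lapse sketch in the spirit of the system described above.
# Assumes the GPS module disciplines the system clock (e.g., via gpsd/ntp),
# so UTC timestamps are trustworthy even without a network connection.
import time
from datetime import datetime, timezone
from picamera import PiCamera   # assumed library, standard on Raspberry Pi OS

INTERVAL_S = 300                            # one frame every 5 minutes (assumed)
camera = PiCamera(resolution=(2592, 1944))  # 5-megapixel sensor

while True:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
    camera.capture(f"/data/images/{stamp}.jpg")  # hypothetical storage path
    time.sleep(INTERVAL_S)
```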
Auto-converging stereo cameras for 3D robotic tele-operation
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Aycock, Todd; Chenault, David
2012-06-01
Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for the TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow operator control of the camera convergence angle adjustment. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an Automatic Convergence algorithm in a field programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than on adjustment of the vision system. The auto-convergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
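The geometry behind convergence adjustment is simple: for a stereo pair with baseline b viewing a target at range d, the total convergence angle that makes both optical axes intersect at the target is 2·arctan(b/2d). A small sketch (the baseline and ranges are illustrative, not Polaris hardware values; the fielded FPGA algorithm infers range from scene content rather than taking it as an input):

```python
import math

def convergence_angle_deg(baseline_m: float, range_m: float) -> float:
    """Convergence angle at which both optical axes intersect at the target."""
    return math.degrees(2.0 * math.atan(baseline_m / (2.0 * range_m)))

# Illustrative numbers: a 12 cm stereo baseline at several viewing ranges.
for r in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"range {r:5.1f} m -> {convergence_angle_deg(0.12, r):5.2f} deg")
```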
2015-04-08
The target of this observation, as seen by NASA's Mars Reconnaissance Orbiter, is a circular depression in a dark-toned unit associated with a field of cones to the northeast. At the image scale of a Context Camera image, the depression appears to expose layers, especially on the sides or walls of the depression, which are overlain by dark sands presumably associated with the dark-toned unit. HiRISE resolution, which is far higher than that of the Context Camera (an instrument with a larger footprint), can help identify possible layers. http://photojournal.jpl.nasa.gov/catalog/PIA19358
Wide Field Camera 3 Accommodations for HST Robotics Servicing Mission
NASA Technical Reports Server (NTRS)
Ginyard, Amani
2005-01-01
This slide presentation discusses the objectives of the Hubble Space Telescope (HST) Robotics Servicing and Deorbit Mission (HRSDM) and reviews the Wide Field Camera 3 (WFC3) and the contamination accommodations for the WFC3. The objectives of the HRSDM are (1) to provide a disposal capability at the end of HST's useful life, (2) to upgrade the hardware by installing two new scientific instruments: replacing the Corrective Optics Space Telescope Axial Replacement (COSTAR) with the Cosmic Origins Spectrograph (COS), and replacing the Wide Field/Planetary Camera-2 (WFPC2) with the Wide Field Camera-3, and (3) to extend the scientific life of HST for a minimum of 5 years after servicing. Included are slides showing the Hubble Robotic Vehicle (HRV) and slides describing what the HRV contains. There are also slides describing the WFC3, which would in part serve as a carrier for replacement gyroscopes for HST, and slides that discuss the contamination requirements for the Rate Sensor Units (RSUs) that are part of the Rate Gyroscope Assembly on the WFC3.
Conceptual design for an AIUC multi-purpose spectrograph camera using DMD technology
NASA Astrophysics Data System (ADS)
Rukdee, S.; Bauer, F.; Drass, H.; Vanzi, L.; Jordan, A.; Barrientos, F.
2017-02-01
Current and upcoming massive astronomical surveys are expected to discover a torrent of objects, which need ground-based follow-up observations to characterize their nature. For transient objects in particular, rapid and efficient early spectroscopic identification is needed, and a small-field Integral Field Unit (IFU) would mitigate traditional slit losses and acquisition time. To this end, we present the design of a Digital Micromirror Device (DMD) multi-purpose spectrograph camera capable of running in several modes: traditional longslit, small-field patrol IFU, multi-object, and full-field IFU mode via Hadamard spectra reconstruction. The AIUC Optical multi-purpose CAMera (AIUCOCAM) is a low-resolution spectrograph camera of R ∼ 1,600 covering the spectral range of 0.45-0.85 μm. We employ a VPH grating as a disperser, which is removable to allow an imaging mode. This spectrograph is envisioned for use on a 1-2 m class telescope in Chile to take advantage of good site conditions. We present design decisions and challenges for a cost-effective robotized spectrograph. The resulting instrument is remarkably versatile, capable of addressing a wide range of scientific topics.
Image Intensifier Modules For Use With Commercially Available Solid State Cameras
NASA Astrophysics Data System (ADS)
Murphy, Howard; Tyler, Al; Lake, Donald W.
1989-04-01
A modular approach to design has contributed greatly to the success of the family of machine vision video equipment produced by EG&G Reticon during the past several years. Internal modularity allows high-performance area (matrix) and line scan cameras to be assembled with two or three electronic subassemblies with very low labor costs, and permits camera control and interface circuitry to be realized by assemblages of various modules suiting the needs of specific applications. Product modularity benefits equipment users in several ways. Modular matrix and line scan cameras are available in identical enclosures (Fig. 1), which allows enclosure components to be purchased in volume for economies of scale and allows field replacement or exchange of cameras within a customer-designed system to be easily accomplished. The cameras are optically aligned (boresighted) at final test; modularity permits optical adjustments to be made with the same precise test equipment for all camera varieties. The modular cameras contain two, or sometimes three, hybrid microelectronic packages (Fig. 2). These rugged and reliable "submodules" perform all of the electronic operations internal to the camera except for the job of image acquisition performed by the monolithic image sensor. Heat produced by electrical power dissipation in the electronic modules is conducted through low-resistance paths to the camera case by the metal plates, which results in a thermally efficient and environmentally tolerant camera with low manufacturing costs. A modular approach has also been followed in the design of the camera control, video processor, and computer interface accessory called the Formatter (Fig. 3). This unit can be attached directly onto either a line scan or matrix modular camera to form a self-contained unit, or connected via a cable to retain the advantages inherent to a small, lightweight, and rugged image sensing component. Available modules permit the bus-structured Formatter to be configured as required by a specific camera application. Modular line and matrix scan cameras incorporating sensors with fiber optic faceplates (Fig. 4) are also available. These units retain the advantages of interchangeability, simple construction, ruggedness, and optical precision offered by the more common lens input units. Fiber optic faceplate cameras are used for a wide variety of applications. A common usage involves mating of the Reticon-supplied camera to a customer-supplied intensifier tube for low light level and/or short exposure time situations.
NASA Astrophysics Data System (ADS)
Bechis, K.; Pitruzzello, A.
2014-09-01
This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
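The near-field figures quoted above follow from the standard Fraunhofer-distance formula d = 2D²/λ for an aperture of diameter D at wavelength λ. A quick check, assuming a representative visible wavelength of 550 nm and the ~3.67 m AEOS aperture, reproduces values close to those quoted:

```python
def fraunhofer_distance_m(aperture_m: float, wavelength_m: float) -> float:
    """Near-field boundary d = 2 * D^2 / lambda for an aperture of diameter D."""
    return 2.0 * aperture_m**2 / wavelength_m

lam = 550e-9  # assumed representative visible wavelength

# ~3,600 km for a 1-m telescope (text quotes "about 3,500 km")
print(fraunhofer_distance_m(1.0, lam) / 1e3)

# ~49,000 km for the ~3.67-m AEOS telescope (text quotes "over 46,000 km")
print(fraunhofer_distance_m(3.67, lam) / 1e3)
```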
The TESS camera: modeling and measurements with deep depletion devices
NASA Astrophysics Data System (ADS)
Woods, Deborah F.; Vanderspek, Roland; MacDonald, Robert; Morgan, Edward; Villasenor, Joel; Thayer, Carolyn; Burke, Barry; Chesbrough, Christian; Chrisp, Michael; Clark, Kristin; Furesz, Gabor; Gonzales, Alexandria; Nguyen, Tam; Prigozhin, Gregory; Primeau, Brian; Ricker, George; Sauerwein, Timothy; Suntharalingam, Vyshnavi
2016-07-01
The Transiting Exoplanet Survey Satellite, a NASA Explorer-class mission in development, will discover planets around nearby stars, most notably Earth-like planets with potential for follow up characterization. The all-sky survey requires a suite of four wide field-of-view cameras with sensitivity across a broad spectrum. Deep depletion CCDs with a silicon layer of 100 μm thickness serve as the camera detectors, providing enhanced performance in the red wavelengths for sensitivity to cooler stars. The performance of the camera is critical for the mission objectives, with both the optical system and the CCD detectors contributing to the realized image quality. Expectations for image quality are studied using a combination of optical ray tracing in Zemax and simulations in Matlab to account for the interaction of the incoming photons with the 100 μm silicon layer. The simulations include a probabilistic model to determine the depth of travel in the silicon before the photons are converted to photo-electrons, and a Monte Carlo approach to charge diffusion. The charge diffusion model varies with the remaining depth for the photo-electron to traverse and the strength of the intermediate electric field. The simulations are compared with laboratory measurements acquired by an engineering unit camera with the TESS optical design and deep depletion CCDs. In this paper we describe the performance simulations and the corresponding measurements taken with the engineering unit camera, and discuss where the models agree well in predicted trends and where there are differences compared to observations.
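The two-stage simulation described above can be illustrated with a toy Monte Carlo: an exponential (Beer-Lambert) draw for the photon conversion depth, followed by a lateral Gaussian charge-diffusion kick that grows with the remaining drift distance. The absorption length and diffusion constant below are placeholders, not the calibrated values of the TESS model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_psf(n_photons, abs_length_um, thickness_um=100.0, k_diff=0.1):
    """Toy version of the two-stage model described above.
    1) Conversion depth drawn from an exponential law (Beer-Lambert);
       photons that would convert deeper than the device are discarded.
    2) Lateral charge diffusion grows with the remaining drift distance;
       k_diff is an assumed proportionality constant, not the TESS value."""
    depth = rng.exponential(abs_length_um, n_photons)
    depth = depth[depth < thickness_um]          # absorbed within the silicon
    sigma = k_diff * (thickness_um - depth)      # remaining drift distance
    dx = rng.normal(0.0, sigma)                  # lateral displacement (um)
    dy = rng.normal(0.0, sigma)
    return dx, dy

dx, dy = simulate_psf(100_000, abs_length_um=30.0)  # red photons (illustrative)
print("RMS charge spread:", np.hypot(dx, dy).std(), "um")
```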
Illumination box and camera system
Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.; Klunder, Gregory L.
2002-01-01
A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.
Cryogenic solid Schmidt camera as a base for future wide-field IR systems
NASA Astrophysics Data System (ADS)
Yudin, Alexey N.
2011-11-01
This work studies the capability of a solid Schmidt camera to serve as a wide-field infrared lens for an aircraft system with whole-sphere coverage, working in the 8-14 um spectral range and coupled with a spherical focal array of megapixel class. Designs of a 16 mm f/0.2 lens with 60 and 90 degree sensor diagonals are presented, and their image quality is compared with a conventional solid design. An achromatic design with significantly improved performance, containing an enclosed soft correcting lens behind the protective front lens, is proposed. One of the main goals of the work is to estimate the benefits of curved detector arrays in 8-14 um spectral range wide-field systems. Coupling of the photodetector with the solid Schmidt camera by means of frustrated total internal reflection is considered, with a corresponding tolerance analysis. The whole lens, except the front element, is considered to be cryogenic, with the solid Schmidt unit cooled by hydrogen to improve bulk transmission.
SAAO's new robotic telescope and WiNCam (Wide-field Nasmyth Camera)
NASA Astrophysics Data System (ADS)
Worters, Hannah L.; O'Connor, James E.; Carter, David B.; Loubser, Egan; Fourie, Pieter A.; Sickafoose, Amanda; Swanevelder, Pieter
2016-08-01
The South African Astronomical Observatory (SAAO) is designing and manufacturing a wide-field camera for use on two of its telescopes. The initial concept was of a Prime focus camera for the 74" telescope, an equatorial design made by Grubb Parsons, where it would employ a 61mmx61mm detector to cover a 23 arcmin diameter field of view. However, while in the design phase, SAAO embarked on the process of acquiring a bespoke 1-metre robotic alt-az telescope with a 43 arcmin field of view, which needs a homegrown instrument suite. The Prime focus camera design was thus adapted for use on either telescope, increasing the detector size to 92mmx92mm. Since the camera will be mounted on the Nasmyth port of the new telescope, it was dubbed WiNCam (Wide-field Nasmyth Camera). This paper describes both WiNCam and the new telescope. Producing an instrument that can be swapped between two very different telescopes poses some unique challenges. At the Nasmyth port of the alt-az telescope there is ample circumferential space, while on the 74 inch the available envelope is constrained by the optical footprint of the secondary, if further obscuration is to be avoided. This forces the design into a cylindrical volume of 600mm diameter x 250mm height. The back focal distance is tightly constrained on the new telescope, shoehorning the shutter, filter unit, guider mechanism, a 10mm thick window and a tip/tilt mechanism for the detector into 100mm depth. The iris shutter and filter wheel planned for prime focus could no longer be accommodated. Instead, a compact shutter with a thickness of less than 20mm has been designed in-house, using a sliding curtain mechanism to cover an aperture of 125mmx125mm, while the filter wheel has been replaced with 2 peripheral filter cartridges (6 filters each) and a gripper to move a filter into the beam. We intend using through-vacuum wall PCB technology across the cryostat vacuum interface, instead of traditional hermetic connector-based wiring. This has advantages in terms of space saving and improved performance. Measures are being taken to minimise the risk of damage during an instrument change. The detector is cooled by a Stirling cooler, which can be disconnected from the cooler unit without risking damage. Each telescope has a dedicated cooler unit into which the coolant hoses of WiNCam will plug. To overcome an inherent drawback of Stirling coolers, an active vibration damper is incorporated. During an instrument change, the autoguider remains on the telescope, and the filter magazines, shutter and detector package are removed as a single unit. The new alt-az telescope, manufactured by APM-Telescopes, is a 1-metre f/8 Ritchey-Chrétien with optics by LOMO. The field flattening optics were designed by Darragh O'Donoghue to have high UV throughput and uniform encircled energy over the 100mm diameter field. WiNCam will be mounted on one Nasmyth port, with the second port available for SHOC (Sutherland High-speed Optical Camera) and guest instrumentation. The telescope will be located in Sutherland, where an existing dome is being extensively renovated to accommodate it. Commissioning is planned for the second half of 2016.
A wide-angle camera module for disposable endoscopy
NASA Astrophysics Data System (ADS)
Shim, Dongha; Yeon, Jesun; Yi, Jason; Park, Jongwon; Park, Soo Nam; Lee, Nanhee
2016-08-01
A wide-angle miniaturized camera module for disposable endoscopes is demonstrated in this paper. A lens module with a 150° angle of view (AOV) is designed and manufactured. All-plastic injection-molded lenses and a commercial CMOS image sensor are employed to reduce the manufacturing cost. The image sensor and an LED illumination unit are assembled with the lens module. The camera module does not include a camera processor, to further reduce its size and cost. The size of the camera module is 5.5 × 5.5 × 22.3 mm³. The diagonal field of view (FOV) of the camera module is measured to be 110°. A prototype of a disposable endoscope is implemented to perform pre-clinical animal testing, in which the esophagus of an adult beagle dog is observed. These results demonstrate the feasibility of a cost-effective and high-performance camera module for disposable endoscopy.
Optical Meteor Systems Used by the NASA Meteoroid Environment Office
NASA Technical Reports Server (NTRS)
Kingery, A. M.; Blaauw, R. C.; Cooke, W. J.; Moser, D. E.
2015-01-01
The NASA Meteoroid Environment Office (MEO) uses two main meteor camera networks to characterize the meteoroid environment: an all-sky system and a wide-field system to study cm- and mm-size meteors, respectively. The NASA All Sky Fireball Network consists of fifteen meteor video cameras in the United States, with plans to expand to eighteen cameras by the end of 2015. The camera design and the All-Sky Guided and Real-time Detection (ASGARD) meteor detection software [1, 2] were adopted from the University of Western Ontario's Southern Ontario Meteor Network (SOMN). After seven years of operation, the network has detected over 12,000 multi-station meteors, including meteors from at least 53 different meteor showers. The network is used for speed distribution determination, characterization of meteor showers and sporadic sources, and for informing the public on bright meteor events. The NASA Wide Field Meteor Network was established in December of 2012 with two cameras and expanded to eight cameras in December of 2014. The two-camera configuration detected 5470 meteors over two years of operation, and the eight-camera system detected 3423 meteors in its first five months (Dec 12, 2014 - May 12, 2015). We expect to see over 10,000 meteors per year with the expanded system. The cameras have a 20 degree field of view and an approximate limiting meteor magnitude of +5. The network's primary goal is determining the nightly shower and sporadic meteor fluxes. Both camera networks function almost fully autonomously, with little human interaction required for upkeep and analysis. The cameras send their data to a central server for storage and automatic analysis, and every morning the server automatically generates an e-mail and web page containing an analysis of the previous night's events. The current status of the networks will be described, along with preliminary results. In addition, future projects, including CCD photometry and a broadband meteor color camera system, will be discussed.
Free-form reflective optics for mid-infrared camera and spectrometer on board SPICA
NASA Astrophysics Data System (ADS)
Fujishiro, Naofumi; Kataza, Hirokazu; Wada, Takehiko; Ikeda, Yuji; Sakon, Itsuki; Oyabu, Shinki
2017-11-01
SPICA (Space Infrared Telescope for Cosmology and Astrophysics) is an astronomical mission optimized for mid- and far-infrared astronomy with a cryogenically cooled 3-m class telescope, envisioned for launch in the early 2020s. The Mid-infrared Camera and Spectrometer (MCS) is a focal plane instrument for SPICA with imaging and spectroscopic observing capabilities in the mid-infrared wavelength range of 5-38μm. MCS consists of two relay optical modules and the following four scientific optical modules: WFC (Wide Field Camera; 5'x 5' field of view, f/11.7 and f/4.2 cameras), LRS (Low Resolution Spectrometer; 2'.5 long slits, prism dispersers, f/5.0 and f/1.7 cameras, spectral resolving power R ∼ 50-100), MRS (Mid Resolution Spectrometer; echelles, integral field units by image slicer, f/3.3 and f/1.9 cameras, R ∼ 1100-3000) and HRS (High Resolution Spectrometer; immersed echelles, f/6.0 and f/3.6 cameras, R ∼ 20000-30000). Here, we present the optical design and expected optical performance of MCS. Most parts of the MCS optics adopt an off-axis reflective system, covering the wide wavelength range of 5-38μm without chromatic aberration and minimizing problems due to changes in the shapes and refractive indices of materials from room temperature to cryogenic temperature. In order to achieve the demanding specification requirements of wide field of view, small F-number and large spectral resolving power in a compact size, we employed the paraxial and aberration analysis of off-axial optical systems (Araki 2005 [1]), a design method using free-form surfaces for compact reflective optics such as head-mounted displays. As a result, we have successfully designed compact reflective optics for MCS with as-built performance at diffraction-limited image resolution.
3D vision upgrade kit for TALON robot
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad
2010-04-01
In this paper, we report on the development of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV Robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.
2004-05-19
KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Rick Wetherington checks out one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.
2004-05-19
KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen works on the recently acquired Contraves-Goerz Kineto Tracking Mount (KTM). Trailer-mounted with a center console/seat and electric drive tracking mount, the KTM includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff. There are 10 KTMs certified for use on the Eastern Range.
2004-05-19
KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen checks out one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.
McCurdy, Neil J.; Griswold, William G.; Lenert, Leslie A.
2005-01-01
The first moments at a disaster scene are chaotic. The command center initially operates with little knowledge of hazards, geography and casualties, building up knowledge of the event slowly as information trickles in over voice radio channels. RealityFlythrough is a tele-presence system that stitches together live video feeds in real time, using the principle of visual closure, to give command center personnel the illusion of being able to explore the scene interactively by moving smoothly between the video feeds. Using RealityFlythrough, medical, fire, law enforcement, hazardous materials, and engineering experts may be able to achieve situational awareness earlier and better manage scarce resources. The RealityFlythrough system is composed of camera units with off-the-shelf GPS and orientation systems and a server/viewing station that offers access to images collected by the camera units in real time by position/orientation. In initial field testing using an experimental mesh 802.11 wireless network, two camera unit operators were able to create an interactive image of a simulated disaster scene in about five minutes. PMID:16779092
NASA Astrophysics Data System (ADS)
Torabzadeh, Mohammad; Stockton, Patrick; Kennedy, Gordon T.; Saager, Rolf B.; Durkin, Anthony J.; Bartels, Randy A.; Tromberg, Bruce J.
2018-02-01
Hyperspectral Imaging (HSI) is a growing field in tissue optics due to its ability to collect continuous spectral features of a sample without a contact probe. Spatial Frequency Domain Imaging (SFDI) is a non-contact wide-field spectral imaging technique that is used to quantitatively characterize tissue structure and chromophore concentration. In this study, we designed a Hyperspectral SFDI (H-SFDI) instrument that integrates a supercontinuum laser source with a wavelength-tuning optical configuration and an sCMOS camera to extract spatial (Field of View: 2cm×2cm) and broadband spectral features (580nm-950nm). A preliminary experiment was also performed to integrate the hyperspectral projection unit with a compressive single-pixel camera and the Light Labeling (LiLa) technique.
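SFDI systems conventionally recover the modulated (AC) and planar (DC) amplitude images from three phase-shifted sinusoidal projections. The standard three-phase demodulation is sketched below; this is the textbook computation, not necessarily the exact pipeline of the H-SFDI instrument:

```python
import numpy as np

def sfdi_demodulate(i1, i2, i3):
    """Three-phase SFDI demodulation (projections shifted by 0, 120, 240 deg).
    Returns the AC (modulated) and DC (planar) amplitude images."""
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt((i1 - i2) ** 2 +
                                        (i2 - i3) ** 2 +
                                        (i3 - i1) ** 2)
    dc = (i1 + i2 + i3) / 3.0
    return ac, dc

# Synthetic check: a sinusoidal pattern with known 40% modulation depth.
x = np.linspace(0.0, 1.0, 512)
frames = [0.5 * (1.0 + 0.4 * np.cos(2 * np.pi * 5 * x + p))
          for p in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
ac, dc = sfdi_demodulate(*frames)
print(ac.mean() / dc.mean())   # ~0.4, recovering the modulation depth
```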
2004-05-19
KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen makes adjustments on one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with a center console/seat and electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.
2004-05-19
KENNEDY SPACE CENTER, FLA. -- Johnson Controls operators Rick Wetherington (left) and Kenny Allen work on one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with a center console/seat and electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.
2004-05-19
KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen stands in the center console area of one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric-drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.
2004-05-19
KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Rick Wetherington sits in the center console seat of one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.
2004-05-19
KENNEDY SPACE CENTER, FLA. -- Johnson Controls operators Rick Wetherington (left) and Kenny Allen work on two of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with a center console/seat and electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.
Advanced High-Definition Video Cameras
NASA Technical Reports Server (NTRS)
Glenn, William
2007-01-01
A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.
STS-5 Columbia, OV-102, middeck documentation
NASA Technical Reports Server (NTRS)
1982-01-01
Items stowed temporarily on forward middeck lockers include (left to right) field sequential (FS) crew cabin camera, procedural notebook, communications kit assembly (assy) headset (HDST) interface unit (HIU), personal hygiene kit, personal hygiene mirror assy, meal tray assemblies, towels, and Vestibular Study Experiment headset and antenna.
The WFM Instrument of the LOFT mission
NASA Astrophysics Data System (ADS)
Gálvez, J. L.; Hernanz, M.; Álvarez, L.; LOFT/WFM Team
2013-05-01
LOFT, the Large Observatory For X-ray Timing, was selected by ESA in 2011 as one of the four M3 (medium class) mission concepts of the Cosmic Vision programme that will compete for a launch opportunity at the start of the 2020s. LOFT includes two instruments: the Large Area Detector (LAD), a ˜10 m^2 collimated X-ray detector in the 2-50 keV range (up to 80 keV in extended mode), and the Wide Field Monitor (WFM), a coded-mask wide-field X-ray monitor based on silicon radiation detectors. We, the Institute of Space Sciences (CSIC-IEEC) in Barcelona, are deeply involved in the LOFT mission, sharing the leadership of the WFM instrument with DTU Space in Denmark. We are responsible for the mechanics of the WFM, including the structural and thermal design. The WFM baseline is a set of 4 units (each unit corresponds to 2 co-aligned cameras) arranged in an arch, covering a field of view at zero response of 180°× 90°, plus one more unit pointing in the anti-sun direction. The structure of each camera carries its own coded mask of tungsten, 150 μm thick, a collimator, and the detector plane (20 cm below the mask), providing fine (arcminute) angular resolution. The camera detector plane (182 cm^2) will operate at -20°C in order to achieve an energy resolution FWHM of less than 500 eV in the 2-50 keV energy range. The main purpose of the WFM is to catch good triggering sources to be pointed at with the LAD. Its large field of view will permit observation of about 50% of the sky at once in the same energy range as the LAD. The WFM is also designed to catch transient/bursting events down to fluxes of a few mCrab and will provide for them data with fine spectral and timing resolution (up to 10 μsec).
Concept of electro-optical sensor module for sniper detection system
NASA Astrophysics Data System (ADS)
Trzaskawka, Piotr; Dulski, Rafal; Kastek, Mariusz
2010-10-01
The paper presents an initial concept for an electro-optical sensor unit for sniper detection. This unit, comprising thermal and daylight cameras, can operate as a standalone device, but its primary application is in a multi-sensor sniper and shot detection system. As part of a larger system it should contribute to greater overall system efficiency and a lower false alarm rate thanks to data and sensor fusion techniques. Additionally, it is expected to provide some pre-shot detection capabilities. Acoustic (or radar) systems used for shot detection generally offer only "after-the-shot" information and cannot prevent an enemy attack, which in the case of a skilled sniper opponent usually means trouble. The passive imaging sensors presented in this paper, together with active systems detecting pointed optics, are capable of detecting specific shooter signatures or at least the presence of suspected objects in the vicinity. The proposed sensor unit uses a thermal camera as the primary sniper and shot detection tool. The basic camera parameters such as focal plane array size and type, focal length and aperture were chosen on the basis of the assumed tactical characteristics of the system (mainly detection range) and the current technology level. In order to provide a cost-effective solution, commercially available daylight camera modules and infrared focal plane arrays were tested, including fast cooled infrared array modules capable of a 1000 fps image acquisition rate. The daylight camera operates as a support, providing a corresponding visual image that is easier for a human operator to comprehend. The initial assumptions concerning sensor operation were verified during laboratory and field tests, and some example shot recording sequences are presented.
Winnowing the Field: Candidates, Caucuses, and Presidential Elections.
ERIC Educational Resources Information Center
Gore, Deborah, Ed.
1991-01-01
This issue features articles and activities that concern the history of the presidential election process in the United States, with a special focus on Iowa's role in that process. The following features are included: "Lights, Camera, Action!"; "Presidential Whoopla"; "From Tree Stumps to Living Rooms"; "Wild…
The guidance methodology of a new automatic guided laser theodolite system
NASA Astrophysics Data System (ADS)
Zhang, Zili; Zhu, Jigui; Zhou, Hu; Ye, Shenghua
2008-12-01
Spatial coordinate measurement systems such as theodolites, laser trackers and total stations have wide application in manufacturing and certification processes. The traditional operation of theodolites is manual and time-consuming, which does not meet the needs of online industrial measurement; laser trackers and total stations, meanwhile, need reflective targets and so cannot realize non-contact, automatic measurement. A new automatic guided laser theodolite system is presented to achieve automatic, non-contact measurement with high precision and efficiency. It comprises two sub-systems: the basic measurement system and the control and guidance system. The former is formed by two laser motorized theodolites that accomplish the fundamental measurement tasks, while the latter consists of a camera and vision system unit mounted on a mechanical displacement unit to provide azimuth information for the measured points. The mechanical displacement unit can rotate horizontally and vertically to direct the camera to the desired orientation, so that the camera can scan every measured point in the measuring field; the azimuth of the corresponding point is then calculated so that the laser motorized theodolites can move accordingly to aim at it. In this paper the whole system composition and measuring principle are analyzed, with emphasis on the guidance methodology for moving the laser points from the theodolites towards the measured points. The guidance process is implemented based on the coordinate transformation between the basic measurement system and the control and guidance system. From the field of view angle of the vision system unit and the world coordinates of the control and guidance system, obtained through this coordinate transformation, the azimuth of the measurement area at which the camera points can be attained. The momentary horizontal and vertical changes of the mechanical displacement movement are also calculated, to provide real-time azimuth information for the pointed measurement area by which the motorized theodolite moves accordingly. This methodology realizes the predetermined location of the laser points within the camera-pointed scope, accelerating the measuring process and implementing approximate guidance in place of manual operations. The simulation results show that the proposed method of automatic guidance is effective and feasible, providing good tracking performance for the predetermined location of the laser points.
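The core of the guidance step, computing the pan and tilt angles that point the camera (and hence the theodolite lasers) at a measured point once its coordinates are transformed into the control-and-guidance unit's frame, can be sketched in a few lines. The pose inputs, function names, and test point below are illustrative, not the paper's notation:

```python
import numpy as np

def point_to_pan_tilt(p_world, R_wc, t_wc):
    """Convert a measured point from world coordinates into the pan/tilt
    angles of the control-and-guidance unit. R_wc and t_wc describe the
    unit's pose in the world frame, obtained from the coordinate
    transformation between the two sub-systems."""
    p_unit = R_wc.T @ (np.asarray(p_world) - t_wc)      # world -> unit frame
    x, y, z = p_unit
    pan = np.degrees(np.arctan2(y, x))                  # horizontal angle
    tilt = np.degrees(np.arctan2(z, np.hypot(x, y)))    # vertical angle
    return pan, tilt

# Illustrative pose (identity) and target point 2 m ahead, 1 m left, 0.5 m up.
pan, tilt = point_to_pan_tilt([2.0, 1.0, 0.5], np.eye(3), np.zeros(3))
print(f"pan {pan:.2f} deg, tilt {tilt:.2f} deg")
```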
ERIC Educational Resources Information Center
Kuntz, Jeffrey J.; Snyder, John
2004-01-01
This article describes how one substitute teacher traveling the United States as a meet intern with USA Track and Field, a classroom teacher with an eager group of fifth graders, one stuffed Punxsy Phil groundhog, the Pennsylvania Academic Standards and a digital camera combined to form a collaborative classroom travel project entitled,…
Commander Brand shaves in front of forward middeck lockers
NASA Technical Reports Server (NTRS)
1982-01-01
Commander Brand, wearing shorts, shaves in front of forward middeck lockers using personal hygiene mirror assembly (assy). Open modular locker single tray assy, Field Sequential (FS) crew cabin camera, communications kit assy mini headset (HDST) and HDST interface unit (HIU), personal hygiene kit, and meal tray assemblies appear in view.
A detailed comparison of single-camera light-field PIV and tomographic PIV
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.
2018-03-01
This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, extensively examining the difference between the two techniques by varying key parameters such as the pixel to microlens ratio (PMR), the light-field camera to Tomo-camera pixel ratio (LTPR), particle seeding density and tomographic camera number. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.
Smartphone Based Platform for Colorimetric Sensing of Dyes
NASA Astrophysics Data System (ADS)
Dutta, Sibasish; Nath, Pabitra
We demonstrate a smartphone-based optical sensor for measuring the absorption bands of coloured dyes. By integrating simple laboratory optical components with the camera unit of the smartphone, we convert it into a visible spectrometer with a pixel resolution of 0.345 nm/pixel. Light from a broadband optical source is transmitted through a specific dye solution, and the transmitted light signal is captured by the camera of the smartphone. The sensor is inexpensive, portable and lightweight, making it well suited to on-field sensing applications.
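With the quoted dispersion of 0.345 nm/pixel, mapping a camera pixel column to wavelength is a one-line linear calibration. A sketch with an assumed reference wavelength and pixel; in practice these would be fixed using known spectral lines:

```python
def pixel_to_wavelength(pixel, lambda_ref_nm=400.0, ref_pixel=0,
                        dispersion_nm_per_px=0.345):
    """Linear calibration using the quoted 0.345 nm/pixel dispersion.
    The reference wavelength and pixel are illustrative placeholders;
    real calibration would use known lines (e.g., laser sources)."""
    return lambda_ref_nm + (pixel - ref_pixel) * dispersion_nm_per_px

# A dye's absorption minimum found at pixel 620 of the camera row:
print(pixel_to_wavelength(620))   # ~613.9 nm under the assumed calibration
```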
Attempt of Serendipitous Science During the Mojave Volatile Prospector Field Expedition
NASA Technical Reports Server (NTRS)
Roush, T. L.; Colaprete, A.; Heldmann, J.; Lim, D. S. S.; Cook, A.; Elphic, R.; Deans, M.; Fluckiger, L.; Fritzler, E.; Hunt, David
2015-01-01
On 23 October 2014 a partial solar eclipse occurred across parts of the southwest United States between approximately 21:09 and 23:40 (UT), with maximum obscuration, 36%, occurring at 22:29 (UT). During 21-26 October 2014 the Mojave Volatile Prospector (MVP) field expedition deployed and operated the NASA Ames Krex2 rover in the Mojave desert west of Baker, California (Fig. 1, bottom). The MVP field expedition's primary goal was to characterize the surface and sub-surface soil moisture properties within desert alluvial fans, with a secondary goal of providing mission operations simulations of the Resource Prospector (RP) mission to a lunar pole. The partial solar eclipse provided an opportunity during MVP operations for serendipitous science. Science instruments on Krex2 included a neutron spectrometer, a near-infrared spectrometer with an associated imaging camera, and an independent camera coupled with software to characterize the surface textures of the areas encountered. All of these devices are focused on the surface and as a result are downward looking. In addition to these science instruments, two hazard cameras are mounted on Krex2. The chief device used to monitor the partial solar eclipse was the engineering development unit of the Near-Infrared Volatile Spectrometer System (NIRVSS). This device uses two separate fiber-optic-fed Hadamard transform spectrometers; the short-wave and long-wave spectrometers measure the 1600-2400 and 2300-3400 nm wavelength regions with resolutions of 10 and 13 nm, respectively. Data are obtained approximately every 8 seconds. The NIRVSS stares in the direction opposite the front of Krex2.
Final Technical Report. Training in Building Audit Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brosemer, Kathleen
In 2011, the Tribe proposed and was awarded the Training in Building Audit Technologies grant from the DOE in the amount of $55,748 to contract for training programs for infrared cameras, blower door technology applications, and building systems. The coursework consisted of: Infrared Camera Training: Level I - Thermal Imaging for Energy Audits; Blower Door Analysis and Building-As-A-System Training, Building Performance Institute (BPI) Building Analyst; Building Envelope Training, Building Performance Institute (BPI) Envelope Professional; and Audit/JobFLEX Tablet Software. Competitive procurement of the training contractor resulted in lower costs, allowing the Tribe to request and receive DOE approval to additionally purchase energy audit equipment and contract for residential energy audits of 25 low-income Tribal Housing units. Sault Tribe personnel received field training to supplement the classroom instruction on proper use of the energy audit equipment. Field experience was provided through the second DOE energy audits grant, allowing Sault Tribe personnel to join the contractor, Building Science Academy, in conducting 25 residential energy audits of low-income Tribal Housing units.
NASA Astrophysics Data System (ADS)
Ruggeri, Marco; Hernandez, Victor; De Freitas, Carolina; Relhan, Nidhi; Silgado, Juan; Manns, Fabrice; Parel, Jean-Marie
2016-03-01
Hand-held wide-field contact color fundus photography is currently the standard method of acquiring diagnostic images of children during examination under anesthesia and in the neonatal intensive care unit. The recent development of portable non-contact hand-held OCT retinal imaging systems has proved that OCT is of tremendous help in complementing fundus photography in the management of pediatric patients. Currently, there is no commercial or research system that combines color wide-field digital fundus and OCT imaging in a contact fashion. Contact between the probe and the cornea has the advantages of reducing the motion experienced by the photographer during imaging and providing fundus and OCT images with a wider field of view that includes the periphery of the retina. In this study we present a proof of concept for a contact-type hand-held unit for simultaneous color fundus and OCT live view of the retina of pediatric patients. The front piece of the hand-held unit consists of a contact ophthalmoscopy lens integrating a circular light guide that was recovered from a digital fundus camera for pediatric imaging. The custom-made rear piece consists of the optics to: 1) fold the visible aerial image of the fundus generated by the ophthalmoscopy lens onto a miniaturized board-level digital color camera; 2) conjugate the eye pupil to the galvanometric scanning mirrors of an OCT delivery system. Wide-field color fundus and OCT images were simultaneously obtained in an eye model and sequentially obtained on the eye of a conscious 25-year-old human subject with a healthy retina.
VizieR Online Data Catalog: MOST photometry of Proxima (Kipping+, 2017)
NASA Astrophysics Data System (ADS)
Kipping, D. M.; Cameron, C.; Hartman, J. D.; Davenport, J. R. A.; Matthews, J. M.; Sasselov, D.; Rowe, J.; Siverd, R. J.; Chen, J.; Sandford, E.; Bakos, G. A.; Jordan, A.; Bayliss, D.; Henning, T.; Mancini, L.; Penev, K.; Csubry, Z.; Bhatti, W.; da Silva Bento, J.; Guenther, D. B.; Kuschnig, R.; Moffat, A. F. J.; Rucinski, S. M.; Weiss, W. W.
2017-06-01
The Microvariability and Oscillations of STars (MOST) telescope is a 53 kg satellite in low Earth orbit with a 15 cm aperture visible-band camera (350-700 nm). MOST observed Proxima Centauri in 2014 May (beginning on HJD(2000) 2456793.18) for about 12.5 days. MOST again observed Proxima Centauri in 2015 May (starting on HJD(2000) 2457148.54), this time for a total of 31 days. Independent of the MOST observations, Proxima Cen was also monitored by the HATSouth ground-based telescope network. The network consists of six wide-field photometric instruments located at three observatories in the Southern Hemisphere (Las Campanas Observatory [LCO] in Chile, the High Energy Stereoscopic System [HESS] site in Namibia, and Siding Spring Observatory [SSO] in Australia), with two instruments per site. Each instrument consists of four 18 cm diameter astrographs and associated 4K×4K backside-illuminated CCD cameras and Sloan r-band filters, placed on a common robotic mount. The four astrographs and cameras together cover an 8.2°×8.2° mosaic field of view at a pixel scale of 3.7''/pixel. Observations of a field containing Proxima Cen were collected as part of the general HATSouth transit survey, with a total of 11071 (this number does not count observations that were rejected as not useful for high-precision photometry, or those that produced large-amplitude outliers in the Proxima Cen light curve) composite 3×80 s exposures gathered between 2012 June 14 and 2014 September 20. These include 3430 observations made with the HS-2 unit at LCO, 4630 observations made with the HS-4 unit at the HESS site, and 3011 observations made with the HS-6 unit at the SSO site. Due to weather and other factors, the cadence was nonuniform. The median time difference between consecutive observations in the full time series is 368 s. (2 data files).
3-D Flow Visualization with a Light-field Camera
NASA Astrophysics Data System (ADS)
Thurow, B.
2012-12-01
Light-field cameras have received attention recently due to their ability to acquire photographs that can be computationally refocused after they have been acquired. In this work, we describe the development of a light-field camera system for 3D visualization of turbulent flows. The camera developed in our lab, also known as a plenoptic camera, uses an array of microlenses mounted next to an image sensor to resolve both the position and angle of light rays incident upon the camera. For flow visualization, the flow field is seeded with small particles that follow the fluid's motion and are imaged using the camera and a pulsed light source. The tomographic MART algorithm is then applied to the light-field data in order to reconstruct a 3D volume of the instantaneous particle field. 3D, three-component (3C) velocity vectors are then determined from a pair of 3D particle fields using conventional cross-correlation algorithms. As an illustration of the concept, 3D/3C velocity measurements of a turbulent boundary layer produced on the wall of a conventional wind tunnel are presented. Future experiments are planned to use the camera to study the influence of wall permeability on the 3D structure of the turbulent boundary layer. [Figure captions: Schematic illustrating the concept of a plenoptic camera, where each pixel represents both the position and angle of light rays entering the camera; this information can be used to computationally refocus an image after it has been acquired. Instantaneous 3D velocity field of a turbulent boundary layer determined using light-field data captured by a plenoptic camera.]
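The final step described above, recovering velocity from a pair of reconstructed particle volumes by cross-correlation, can be sketched as follows. This is a minimal illustration of one interrogation window with integer-voxel shifts, not the authors' actual processing code:

```python
# Minimal sketch (not the authors' code): estimate the displacement of one
# 3D interrogation window between two reconstructed particle volumes using
# FFT-based circular cross-correlation; a real PIV pass tiles the volume
# with such windows and adds sub-voxel peak fitting.
import numpy as np

def window_displacement(vol_a, vol_b):
    """Integer (dz, dy, dx) displacement of particles from vol_a to vol_b."""
    A = np.fft.fftn(vol_a - vol_a.mean())
    B = np.fft.fftn(vol_b - vol_b.mean())
    corr = np.real(np.fft.ifftn(np.conj(A) * B))   # peak at a->b displacement
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past N/2 correspond to negative lags (circular correlation)
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, vol_a.shape))
```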
NASA Astrophysics Data System (ADS)
Javh, Jaka; Slavič, Janko; Boltežar, Miha
2018-02-01
Instantaneous full-field displacements can be measured using cameras; indeed, with high-speed cameras, full-field spectral information up to a couple of kHz can be measured. The trouble is that high-speed cameras capable of recording high-resolution fields of view at high frame rates are very expensive (from tens to hundreds of thousands of euros per camera). This paper introduces a measurement set-up capable of measuring high-frequency vibrations using slow cameras such as DSLRs, mirrorless cameras and others. The high-frequency displacements are measured by harmonically blinking the lights at specified frequencies. This harmonic blinking modulates the intensity changes of the filmed scene, and the camera's image acquisition performs the integration over time, thereby producing full-field Fourier coefficients of the filmed structure's displacements.
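The lock-in principle behind this can be illustrated for a single pixel: modulating the illumination harmonically and integrating over the exposure extracts the Fourier coefficient of the intensity at the blinking frequency. The sketch below is my own illustration (with hypothetical numbers), not the authors' set-up:

```python
# Minimal sketch of the lock-in idea behind the blinking-light measurement:
# integrating intensity times a harmonic modulation over one exposure yields
# the Fourier coefficient at the modulation frequency.
import numpy as np

fs = 10_000.0                      # simulated sample rate within one exposure [Hz]
t = np.arange(0, 1.0, 1 / fs)      # one 1-second exposure
f_mod = 200.0                      # blinking (modulation) frequency [Hz]

# Hypothetical pixel intensity: static background plus a small component at
# f_mod caused by the structure's vibration (amplitude 0.05, phase 0.3 rad).
intensity = 1.0 + 0.05 * np.cos(2 * np.pi * f_mod * t + 0.3)

# Cosine- and sine-phased blinking give the real and imaginary parts.
re = 2 * np.mean(intensity * np.cos(2 * np.pi * f_mod * t))
im = 2 * np.mean(intensity * np.sin(2 * np.pi * f_mod * t))
amp, phase = np.hypot(re, im), np.arctan2(-im, re)
print(amp, phase)                  # ~0.05 and ~0.3: the recovered coefficient
```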
3D imaging and wavefront sensing with a plenoptic objective
NASA Astrophysics Data System (ADS)
Rodríguez-Ramos, J. M.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Montilla, I.; Trujillo-Sevilla, J.; Femenía, B.; Puga, M.; López, M.; Fernández-Valdivia, J. J.; Rosa, F.; Dominguez-Conde, C.; Sanluis, J. C.; Rodríguez-Ramos, L. F.
2011-06-01
Plenoptic cameras have been developed over the last years as a passive method for 3D scanning. Several superresolution algorithms have been proposed in order to mitigate the resolution loss associated with lightfield acquisition through a microlens array. A number of multiview stereo algorithms have also been applied in order to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we present our own implementations of the aforementioned aspects, but also two new developments: a portable plenoptic objective that transforms any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images due to the refractive index changes associated with turbulence. These changes require high-speed processing that justifies the use of GPUs and FPGAs. Artificial sodium Laser Guide Stars (Na-LGS, 90 km high) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new contribution relating the wave optics and computer vision fields, as many authors claim.
QuadCam - A Quadruple Polarimetric Camera for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Skuljan, J.
A specialised quadruple polarimetric camera for space situational awareness, QuadCam, has been built at the Defence Technology Agency (DTA), New Zealand, as part of a collaboration with the Defence Science and Technology Laboratory (Dstl), United Kingdom. The design was based on a similar system originally developed at Dstl, with some significant modifications for improved performance. The system is made up of four identical CCD cameras looking in the same direction, but each in a different plane of polarisation at 0, 45, 90 and 135 degrees with respect to the reference plane. A standard set of Stokes parameters can be derived from the four images in order to describe the state of polarisation of an object captured in the field of view. The modified design of the DTA QuadCam makes use of four small Raspberry Pi computers, so that each camera is controlled by its own computer in order to speed up the readout process and ensure that the four individual frames are taken simultaneously (to within 100-200 microseconds). In addition, new firmware was requested from the camera manufacturer so that an output signal is generated to indicate the state of the camera shutter. A specialised GPS unit (also developed at DTA) is then used to monitor the shutter signals from the four cameras and record the actual time of exposure to an accuracy of about 100 microseconds. This makes the system well suited to the observation of fast-moving objects in low Earth orbit (LEO). The QuadCam is currently mounted on a Paramount MEII robotic telescope mount at the newly built DTA space situational awareness observatory located on the Whangaparaoa Peninsula near Auckland, New Zealand. The system will be used for tracking satellites in low Earth orbit as well as in the geostationary belt. The performance of the camera has been evaluated and a series of test images has been collected in order to derive polarimetric signatures for selected satellites.
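The Stokes computation from the four co-aligned images follows the standard relations for a 0/45/90/135-degree polariser set. A minimal sketch (the standard textbook formulas, not DTA's actual pipeline):

```python
# Linear Stokes parameters from four images taken through polarisers at
# 0, 45, 90 and 135 degrees (standard 4-channel polarimetry relations).
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Return (S0, S1, S2) images plus degree and angle of linear polarisation."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)               # total intensity
    s1 = i0 - i90                                    # 0 vs 90 degree preference
    s2 = i45 - i135                                  # +45 vs -45 degree preference
    dolp = np.hypot(s1, s2) / np.maximum(s0, 1e-12)  # degree of linear polarisation
    aolp = 0.5 * np.arctan2(s2, s1)                  # angle of linear polarisation
    return s0, s1, s2, dolp, aolp
```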
NASA Astrophysics Data System (ADS)
McGuire, P. C.; Gross, C.; Wendt, L.; Bonnici, A.; Souza-Egipsy, V.; Ormö, J.; Díaz-Martínez, E.; Foing, B. H.; Bose, R.; Walter, S.; Oesker, M.; Ontrup, J.; Haschke, R.; Ritter, H.
2010-01-01
In previous work, a platform was developed for testing computer-vision algorithms for robotic planetary exploration. This platform consisted of a digital video camera connected to a wearable computer for real-time processing of images at geological and astrobiological field sites. The real-time processing included image segmentation and the generation of interest points based upon uncommonness in the segmentation maps. Also in previous work, this platform for testing computer-vision algorithms has been ported to a more ergonomic alternative platform, consisting of a phone camera connected via the Global System for Mobile Communications (GSM) network to a remote-server computer. The wearable-computer platform has been tested at geological and astrobiological field sites in Spain (Rivas Vaciamadrid and Riba de Santiuste), and the phone camera has been tested at a geological field site in Malta. In this work, we (i) apply a Hopfield neural-network algorithm for novelty detection based upon colour, (ii) integrate a field-capable digital microscope on the wearable computer platform, (iii) test this novelty detection with the digital microscope at Rivas Vaciamadrid, (iv) develop a Bluetooth communication mode for the phone-camera platform, in order to allow access to a mobile processing computer at the field sites, and (v) test the novelty detection on the Bluetooth-enabled phone camera connected to a netbook computer at the Mars Desert Research Station in Utah. This systems engineering and field testing have together allowed us to develop a real-time computer-vision system that is capable, for example, of identifying lichens as novel within a series of images acquired in semi-arid desert environments. We acquired sequences of images of geologic outcrops in Utah and Spain consisting of various rock types and colours to test this algorithm. The algorithm robustly recognized previously observed units by their colour, while requiring only a single image or a few images to learn colours as familiar, demonstrating its fast learning capability.
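As a rough illustration of item (i), colour-based novelty detection can be reduced to flagging colours never seen during a learning pass. The sketch below is a deliberately simplified stand-in (a quantised colour-occupancy table), not the Hopfield neural network the authors use:

```python
# Simplified stand-in for colour-based novelty detection: mark quantised RGB
# colours as "familiar" while learning, then flag pixels whose colour was
# never seen. The paper's actual detector is a Hopfield neural network.
import numpy as np

class ColourNovelty:
    def __init__(self, bins=16):
        self.bins = bins
        self.seen = np.zeros((bins, bins, bins), dtype=bool)  # RGB occupancy

    def _index(self, img):
        # img: HxWx3 uint8 array; quantise each channel into `bins` levels
        return tuple((img // (256 // self.bins)).reshape(-1, 3).T)

    def learn(self, img):
        self.seen[self._index(img)] = True    # mark these colours as familiar

    def novelty_mask(self, img):
        # True where a pixel's quantised colour was never seen during learning
        return (~self.seen[self._index(img)]).reshape(img.shape[:2])
```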
Habitat Demonstration Unit Medical Operations Workstation Upgrades
NASA Technical Reports Server (NTRS)
Trageser, Katherine H.
2011-01-01
This paper provides an overview of the design and fabrication associated with upgrades for the Medical Operations Workstation in the Habitat Demonstration Unit. The work spanned a ten week period. The upgrades will be used during the 2011 Desert Research and Technology Studies (Desert RATS) field campaign. Upgrades include a deployable privacy curtain system, a deployable tray table, an easily accessible biological waste container, reorganization and labeling of the medical supplies, and installation of a retractable camera. All of the items were completed within the ten week period.
A novel camera localization system for extending three-dimensional digital image correlation
NASA Astrophysics Data System (ADS)
Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher
2018-03-01
The monitoring of civil, mechanical, and aerospace structures is important, especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision system's extrinsic and intrinsic parameters. This means that the position of the cameras relative to each other (i.e. separation distance, camera angles, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between them. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitoring large structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability to determine the cameras' positions in space for performing accurate 3D-DIC calibration and measurements.
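The role of the IMU and radar described above can be illustrated by assembling a stereo extrinsic estimate from a relative orientation and a measured separation. All angles, the baseline value, and the baseline direction below are hypothetical; the authors' actual sensor fusion is not reproduced here:

```python
# Hedged sketch: forming a stereo extrinsic guess [R | t] from an
# IMU-reported relative orientation and a radar-measured camera separation,
# as inputs to a 3D-DIC calibration.
import numpy as np

def rotation_from_rpy(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw angles in radians (ZYX order)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rz @ ry @ rx

R = rotation_from_rpy(0.00, 0.02, np.deg2rad(12.0))  # hypothetical IMU angles
baseline = 1.85                                      # radar distance [m]
t = np.array([baseline, 0.0, 0.0])                   # assumed baseline axis
extrinsic = np.hstack([R, t[:, None]])               # 3x4 [R | t] matrix
```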
Performances Of The New Streak Camera TSN 506
NASA Astrophysics Data System (ADS)
Nodenot, P.; Imhoff, C.; Bouchu, M.; Cavailler, C.; Fleurot, N.; Launspach, J.
1985-02-01
The number of streak cameras used in research laboratories has continuously increased during the past years. The growth of this type of equipment is due to the development of various measurement techniques in the nanosecond and picosecond range. Among the many different applications, we would mention detonics chronometry, measurement of the speed of matter by means of Doppler-laser interferometry, and laser and plasma diagnostics associated with laser-matter interaction. The old range of cameras has been remodelled, in order to standardize and rationalize the production of ultrafast cinematography instruments, to produce a single camera known as the TSN 506. The TSN 506 is composed of an electronic control unit built around the image converter tube; it can be fitted with a nanosecond sweep circuit covering the whole range from 1 ms to 200 ns or with a picosecond circuit providing streak durations from 1 to 100 ns. We describe the main electronic and opto-electronic performance of the TSN 506 operating in these two temporal regimes.
Control system for several rotating mirror camera synchronization operation
NASA Astrophysics Data System (ADS)
Liu, Ningwen; Wu, Yunfeng; Tan, Xianxiang; Lai, Guoji
1997-05-01
This paper introduces a single-chip microcomputer control system for the synchronized operation of several rotating-mirror high-speed cameras. The system consists of four parts: the microcomputer control unit (including the synchronization, precise measurement and time delay parts), the shutter control unit, the motor driving unit and the high-voltage pulse generator unit. The control system has been used to control the synchronized operation of GSI cameras (driven by a motor) and FJZ-250 rotating mirror cameras (driven by a gas-driven turbine). We have obtained films of the same object from different directions at the same or different speeds.
Optical design of the SuMIRe/PFS spectrograph
NASA Astrophysics Data System (ADS)
Pascal, Sandrine; Vives, Sébastien; Barkhouser, Robert; Gunn, James E.
2014-07-01
The SuMIRe Prime Focus Spectrograph (PFS), developed for the 8-m class SUBARU telescope, will consist of four identical spectrographs, each receiving 600 fibers from a 2394-fiber robotic positioner at the telescope prime focus. Each spectrograph includes three spectral channels to cover the wavelength range 0.38-1.26 um with a resolving power ranging between 2000 and 4000. A medium-resolution mode is also implemented to reach a resolving power of 5000 at 0.8 um. Each spectrograph is made of four optical units: the entrance unit, which produces three corrected collimated beams, and three camera units (one per spectral channel: "blue", "red", and "NIR"). The beam is split by two large dichroics, and in each arm the light is dispersed by large VPH gratings (about 280x280 mm). The proposed optical design was optimized to achieve the requested image quality while simplifying the manufacturing of the whole optical system. The camera design consists of an innovative Schmidt camera observing a large field of view (10 degrees) with a very fast beam (F/1.09). To achieve such performance, the classical spherical mirror is replaced by a catadioptric mirror (i.e. a meniscus lens with a reflective surface on the rear side of the glass, like a Mangin mirror). This article focuses on the optical architecture of the PFS spectrograph and the performance achieved. We first describe the global optical design of the spectrograph. Then, we focus on the Mangin-Schmidt camera design. The analysis of the optical performance and the results obtained are presented in the last section.
Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai
2014-01-01
Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU) (MIPS - Microscopic Image Projection System) is costly and not available in resource-poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (box-type digital camera) with an Olympus CH20i microscope and a fluorescent microscope for the purpose of this study. Results: We got comparable results for capturing images of light microscopy, but the results were not as satisfactory for fluorescent microscopy. Conclusion: A box-type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350
Telephoto lens view of Silver Spur in the Hadley Delta region from Apollo 15
NASA Technical Reports Server (NTRS)
1971-01-01
A telephoto lens view of the prominent feature called Silver Spur in the Hadley Delta region, photographed during the Apollo 15 lunar surface extravehicular activity at the Hadley-Apennine landing site. The distance from the camera to the spur is about 10 miles. The field of view across the bottom is about one mile. Structural formations in the mountain are clearly visible. There are two major units. The upper unit is characterized by massive subunits, each one of which is approximately 200 feet deep. The lower major unit is characterized by thinner bedding and cross bedding.
2009-05-08
CAPE CANAVERAL, Fla. – On Launch Pad 39A at NASA's Kennedy Space Center in Florida, space shuttle Atlantis' payload bay is filled with hardware for the STS-125 mission to service NASA's Hubble Space Telescope. From the bottom are the Flight Support System with the Soft Capture mechanism and Multi-Use Lightweight Equipment Carrier with the Science Instrument Command and Data Handling Unit, or SIC&DH; the Orbital Replacement Unit Carrier with the Cosmic Origins Spectrograph, or COS, and an IMAX 3D camera; and the Super Lightweight Interchangeable Carrier with the Wide Field Camera 3. Atlantis' crew will service NASA's Hubble Space Telescope for the fifth and final time. The flight will include five spacewalks during which astronauts will refurbish and upgrade the telescope with state-of-the-art science instruments. As a result, Hubble's capabilities will be expanded and its operational lifespan extended through at least 2014. Photo credit: NASA/Kim Shiflett
2009-05-08
CAPE CANAVERAL, Fla. – On Launch Pad 39A at NASA's Kennedy Space Center in Florida, space shuttle Atlantis' payload bay is filled with hardware for the STS-125 mission to service NASA's Hubble Space Telescope. At the bottom are the Flight Support System with the Soft Capture mechanism and Multi-Use Lightweight Equipment Carrier with the Science Instrument Command and Data Handling Unit, or SIC&DH. At center is the Orbital Replacement Unit Carrier with the Cosmic Origins Spectrograph, or COS, and an IMAX 3D camera. At top is the Super Lightweight Interchangeable Carrier with the Wide Field Camera 3. Atlantis' crew will service NASA's Hubble Space Telescope for the fifth and final time. The flight will include five spacewalks during which astronauts will refurbish and upgrade the telescope with state-of-the-art science instruments. As a result, Hubble's capabilities will be expanded and its operational lifespan extended through at least 2014. Photo credit: NASA/Kim Shiflett
2009-05-08
CAPE CANAVERAL, Fla. – On Launch Pad 39A at NASA's Kennedy Space Center in Florida, space shuttle Atlantis' payload bay is filled with hardware for the STS-125 mission to service NASA's Hubble Space Telescope. From the bottom are the Flight Support System with the Soft Capture mechanism and Multi-Use Lightweight Equipment Carrier with the Science Instrument Command and Data Handling Unit, or SIC&DH. At center is the Orbital Replacement Unit Carrier with the Cosmic Origins Spectrograph, or COS, and an IMAX 3D camera. At top is the Super Lightweight Interchangeable Carrier with the Wide Field Camera 3. Atlantis' crew will service NASA's Hubble Space Telescope for the fifth and final time. The flight will include five spacewalks during which astronauts will refurbish and upgrade the telescope with state-of-the-art science instruments. As a result, Hubble's capabilities will be expanded and its operational lifespan extended through at least 2014. Photo credit: NASA/Kim Shiflett
Pi of the Sky full system and the new telescope
NASA Astrophysics Data System (ADS)
Mankiewicz, L.; Batsch, T.; Castro-Tirado, A.; Czyrkowski, H.; Cwiek, A.; Cwiok, M.; Dabrowski, R.; Jelínek, M.; Kasprowicz, G.; Majcher, A.; Majczyna, A.; Malek, K.; Nawrocki, K.; Obara, L.; Opiela, R.; Piotrowski, L. W.; Siudek, M.; Sokolowski, M.; Wawrzaszek, R.; Wrochna, G.; Zaremba, M.; Żarnecki, A. F.
2014-12-01
The Pi of the Sky is a system of wide-field-of-view robotic telescopes, which search for short-timescale astrophysical phenomena, especially for prompt optical GRB emission. The system was designed for autonomous operation, monitoring a large fraction of the sky to a depth of 12-13 mag and with a time resolution of the order of 1-10 seconds. The system design and observation strategy were successfully tested with a prototype detector operational at Las Campanas Observatory, Chile, from 2004 to 2009 and moved to the San Pedro de Atacama Observatory in March 2011. In October 2010 the first unit of the final Pi of the Sky detector system, with 4 CCD cameras, was successfully installed at the INTA El Arenosillo Test Centre in Spain. In July 2013 three more units (12 CCD cameras) were commissioned and installed, together with the first one, on a new platform at INTA, extending sky coverage to about 6000 square degrees.
Automated Meteor Fluxes with a Wide-Field Meteor Camera Network
NASA Technical Reports Server (NTRS)
Blaauw, R. C.; Campbell-Brown, M. D.; Cooke, W.; Weryk, R. J.; Gill, J.; Musci, R.
2013-01-01
Within NASA, the Meteoroid Environment Office (MEO) is charged with monitoring the meteoroid environment in near-Earth space for the protection of satellites and spacecraft. The MEO has recently established a two-station system to calculate automated meteor fluxes in the millimeter size range. The cameras each consist of a 17 mm focal length Schneider lens on a Watec 902H2 Ultimate CCD video camera, producing a 21.7 x 16.3 degree field of view. This configuration has a red-sensitive limiting meteor magnitude of about +5. The stations are located in the southeastern USA, 31.8 kilometers apart, and are aimed at a location 90 km above a point 50 km equidistant from each station, which optimizes the common volume. Both single-station and double-station fluxes are found, each having benefits; more meteors will be detected in a single camera than will be seen in both cameras, producing a better determined flux, but double-station detections allow for non-ambiguous shower associations and permit speed/orbit determinations. Video from the cameras is fed into Linux computers running the ASGARD (All Sky and Guided Automatic Real-time Detection) software, created by Rob Weryk of the University of Western Ontario Meteor Physics Group. ASGARD performs the meteor detection/photometry, and invokes the MILIG and MORB codes to determine the trajectory, speed, and orbit of the meteor. A subroutine in ASGARD allows for approximate shower identification in single-station meteors. The ASGARD output is used in routines to calculate the flux in units of #/sq km/hour. The flux algorithm employed here differs from others currently in use in that it does not assume a single height for all meteors observed in the common camera volume. In the MEO system, the volume is broken up into a set of height intervals, with the collecting areas determined by the radiant of the active shower or sporadic source. The flux per height interval is summed to obtain the total meteor flux. As ASGARD also computes the meteor mass from the photometry, a mass flux can also be calculated. Weather conditions in the southeastern United States are seldom ideal, which introduces the difficulty of a variable sky background. First a weather algorithm indicates whether sky conditions are clear enough to calculate fluxes, at which point a limiting magnitude algorithm is employed. The limiting magnitude algorithm performs a fit of stellar magnitudes vs. camera intensities. The stellar limiting magnitude is derived from this and easily converted to a limiting meteor magnitude for the active shower or sporadic source.
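The height-binned flux sum described above can be sketched in a few lines. This is my own illustration of the stated approach (with hypothetical numbers), not the MEO pipeline:

```python
# Minimal sketch of the described flux algorithm: rather than assuming one
# meteor height, the common volume is split into height bins, each with its
# own radiant-dependent collecting area, and the per-bin fluxes are summed.
def total_flux(counts, areas_km2, clear_hours):
    """counts[i]: meteors detected in height bin i; areas_km2[i]: collecting
    area of bin i in km^2; returns flux in meteors / km^2 / hour."""
    return sum(n / (a * clear_hours) for n, a in zip(counts, areas_km2))

# Hypothetical example: three height bins around 90 km over 5.5 clear hours.
print(total_flux(counts=[4, 9, 2],
                 areas_km2=[210.0, 260.0, 190.0],
                 clear_hours=5.5))
```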
System Synchronizes Recordings from Separated Video Cameras
NASA Technical Reports Server (NTRS)
Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.
2009-01-01
A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
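The quoted repeat period of slightly more than 136 years is consistent with a time code carrying a 32-bit seconds field. That layout is an assumption (the text does not specify the Geo-TimeCode format), but the arithmetic is a quick check:

```python
# If the time code stored whole seconds in a 32-bit field (an assumption,
# since the Geo-TimeCode layout is not given here), it would wrap after:
seconds = 2 ** 32
years = seconds / (365.25 * 24 * 3600)   # Julian years
print(round(years, 1))                   # 136.1 -> "slightly more than 136"
```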
Study of Permanent Magnet Focusing for Astronomical Camera Tubes
NASA Technical Reports Server (NTRS)
Long, D. C.; Lowrance, J. L.
1975-01-01
A design is developed for a permanent magnet assembly (PMA) useful as the magnetic focusing unit for the 35 and 70 mm (diagonal) format SEC tubes. Detailed PMA designs for both tubes are given, and all data on their magnetic configuration, size, weight, and the structure of magnetic shields adequate to screen the camera tube from the earth's magnetic field are presented. A digital computer is used for the PMA design simulations, and the expected operational performance of the PMA is ascertained through the calculation of a series of photoelectron trajectories. A large volume where the magnetic field uniformity is better than 0.5% appears obtainable, and the point spread function (PSF) and modulation transfer function (MTF) indicate nearly ideal performance. The MTF at 20 cycles per mm exceeds 90%. The weight and volume appear tractable for the large space telescope and for ground-based applications.
Lessons Learned from the Wide Field Camera 3 TV1 Test Campaign and Correlation Effort
NASA Technical Reports Server (NTRS)
Peabody, Hume; Stavley, Richard; Bast, William
2007-01-01
In January 2004, shortly after the Columbia accident, future servicing missions to the Hubble Space Telescope (HST) were cancelled. In response to this, further work on the Wide Field Camera 3 instrument was ceased. Given the maturity level of the design, a characterization thermal test (TV1) was completed in case the mission was re-instated or an alternate mission found on which to fly the instrument. This thermal test yielded some valuable lessons learned with respect to testing configurations and modeling/correlation practices, including: 1. Ensure that the thermal design can be tested 2. Ensure that the model has sufficient detail for accurate predictions 3. Ensure that the power associated with all active control devices is predicted 4. Avoid unit changes for existing models. This paper documents the difficulties presented when these recommendations were not followed.
Two-Camera Acquisition and Tracking of a Flying Target
NASA Technical Reports Server (NTRS)
Biswas, Abhijit; Assad, Christopher; Kovalik, Joseph M.; Pain, Bedabrata; Wrigley, Chris J.; Twiss, Peter
2008-01-01
A method and apparatus have been developed to solve the problem of automated acquisition and tracking, from a location on the ground, of a luminous moving target in the sky. The method involves the use of two electronic cameras: (1) a stationary camera having a wide field of view, positioned and oriented to image the entire sky; and (2) a camera that has a much narrower field of view (a few degrees wide) and is mounted on a two-axis gimbal. The wide-field-of-view stationary camera is used to initially identify the target against the background sky. So that the approximate position of the target can be determined, pixel locations on the image-detector plane in the stationary camera are calibrated with respect to azimuth and elevation. The approximate target position is used to initially aim the gimballed narrow-field-of-view camera in the approximate direction of the target. Next, the narrow-field-of-view camera locks onto the target image, and thereafter the gimbals are actuated as needed to maintain lock and thereby track the target with precision greater than that attainable with the stationary camera.
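The hand-off step, converting a pixel position in the calibrated all-sky camera into approximate pointing angles for the gimbal, can be sketched as follows. The simple radial fisheye model here is an assumption; the text only says pixel locations are calibrated against azimuth and elevation:

```python
# Hedged sketch (not NASA's implementation): map a target's pixel position
# in the all-sky camera to approximate azimuth/elevation for aiming the
# gimballed narrow-field camera. A real system would use a measured
# per-pixel calibration table rather than this idealized radial model.
import numpy as np

def pixel_to_azel(x, y, cx, cy, pixels_per_degree):
    """(cx, cy): image centre (zenith); radial distance maps to zenith angle."""
    dx, dy = x - cx, y - cy
    azimuth = np.degrees(np.arctan2(dx, -dy)) % 360.0   # assumed convention
    zenith = np.hypot(dx, dy) / pixels_per_degree
    return azimuth, 90.0 - zenith       # elevation = 90 deg - zenith angle

print(pixel_to_azel(900, 400, 640, 480, 7.1))  # hypothetical numbers
```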
Analysis of calibration accuracy of cameras with different target sizes for large field of view
NASA Astrophysics Data System (ADS)
Zhang, Jin; Chai, Zhiwen; Long, Changyu; Deng, Huaxia; Ma, Mengchao; Zhong, Xiang; Yu, Huan
2018-03-01
Visual measurement plays an increasingly important role in the fields of aerospace, shipbuilding and machinery manufacturing. Camera calibration for a large field of view is a critical part of visual measurement. A large-scale target is difficult to produce and its precision cannot be guaranteed, while a small target can be produced with high precision but yields only locally optimal solutions. Therefore, the most suitable ratio of target size to camera field of view must be studied to ensure the calibration precision required for a wide field of view. In this paper, the cameras are calibrated by a series of checkerboard and circular calibration targets of different dimensions. The ratios of target size to camera field of view are 9%, 18%, 27%, 36%, 45%, 54%, 63%, 72%, 81% and 90%. The target is placed at different positions in the camera field to obtain the camera parameters for each position. Then, the distribution curves of the mean reprojection error of the reconstructed feature points are analyzed for the different ratios. The experimental data demonstrate that as the ratio of target size to camera field of view increases, the calibration precision improves accordingly, and the mean reprojection error changes only slightly once the ratio exceeds 45%.
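The reprojection-error metric used to compare the calibrations can be sketched with OpenCV. This illustrates the metric only, not the authors' code; the function name and the input format are assumptions:

```python
# Illustrative sketch: calibrate from detected target points with OpenCV,
# then average the distance between detected and reprojected points over
# all views, i.e. the mean reprojection error used to compare calibrations.
import cv2
import numpy as np

def mean_reprojection_error(object_pts, image_pts, image_size):
    """object_pts / image_pts: lists of per-view Nx3 / Nx2 float32 arrays."""
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_pts, image_pts, image_size, None, None)
    total, count = 0.0, 0
    for obj, img, rvec, tvec in zip(object_pts, image_pts, rvecs, tvecs):
        proj, _ = cv2.projectPoints(obj, rvec, tvec, K, dist)
        diffs = proj.reshape(-1, 2) - img.reshape(-1, 2)
        total += np.sum(np.linalg.norm(diffs, axis=1))
        count += len(obj)
    return total / count
```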
Comparison of parameters of modern cooled and uncooled thermal cameras
NASA Astrophysics Data System (ADS)
Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał
2017-10-01
During the design of a system employing thermal cameras one always faces the problem of choosing the camera types best suited for the task. In many cases the choice is far from optimal, and there are several reasons for that. System designers often favor the tried and tested solutions they are used to. They do not follow the latest developments in the field of infrared technology, and sometimes their choices are based on prejudice rather than facts. The paper presents the results of measurements of basic parameters of MWIR and LWIR thermal cameras, carried out in a specialized testing laboratory. The measured parameters are decisive in terms of the image quality generated by thermal cameras. All measurements were conducted according to current procedures and standards. However, the camera settings were not optimized for specific test conditions or parameter measurements. Instead, the real settings used in normal camera operations were applied, to obtain realistic camera performance figures. For example, there were significant differences between measured values of noise parameters and catalogue data provided by manufacturers, due to the application of edge detection filters to increase detection and recognition ranges. The purpose of this paper is to help in choosing the optimal thermal camera for a particular application, answering the question whether to opt for a cheaper microbolometer device or a slightly better (in terms of specifications) yet more expensive cooled unit. Measurements and analysis were performed by qualified personnel with several dozen years of experience in both designing and testing thermal camera systems with both cooled and uncooled focal plane arrays. Cameras of similar array sizes and optics were compared, and for each tested group the best performing devices were selected.
Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping
2017-04-03
Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. However, simulations of plenoptic camera models can be used prior to the experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays, based on the established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and of a flame is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized model improves the image resolution, imaging screen utilization, and shooting range of the depth of field.
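The hexagonal packing mentioned above can be illustrated by generating microlens centre coordinates on a hexagonal lattice. This is my own geometric sketch (offset rows with pitch·√3/2 spacing), not the authors' optimized three-lens-type design:

```python
# Geometric sketch of a hexagonally packed microlens array: every other row
# is shifted by half a pitch, and rows are spaced by pitch * sqrt(3) / 2.
import numpy as np

def hex_centres(n_rows, n_cols, pitch):
    centres = []
    for r in range(n_rows):
        x0 = 0.5 * pitch if r % 2 else 0.0   # alternate-row offset
        y = r * pitch * np.sqrt(3) / 2       # hexagonal row spacing
        centres += [(x0 + c * pitch, y) for c in range(n_cols)]
    return np.array(centres)

print(hex_centres(3, 4, pitch=0.1))          # hypothetical 0.1 mm pitch
```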
Graphic Arts: Book Two. Process Camera, Stripping, and Platemaking.
ERIC Educational Resources Information Center
Farajollahi, Karim; And Others
The second of a three-volume set of instructional materials for a course in graphic arts, this manual consists of 10 instructional units dealing with the process camera, stripping, and platemaking. Covered in the individual units are the process camera and darkroom photography, line photography, half-tone photography, other darkroom techniques,…
Evangelista, Dennis J.; Ray, Dylan D.; Hedrick, Tyson L.
2016-01-01
Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts. PMID:27444791
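The core 3D step behind such multi-camera tracking, triangulating a matched point from two calibrated views, can be sketched with OpenCV. The projection matrices and pixel coordinates below are hypothetical; the paper's own calibration and tracking tools are not reproduced here:

```python
# Hedged sketch: triangulate one matched point seen by two calibrated
# cameras. P = K [R | t] would normally come from a wand or checkerboard
# calibration; all numbers here are hypothetical.
import cv2
import numpy as np

K = np.array([[800., 0., 320.],      # hypothetical shared intrinsics
              [0., 800., 240.],
              [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                 # reference camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.3], [0.], [0.]])])   # 30 cm baseline

pts1 = np.array([[320.], [240.]])    # pixel position of the animal in camera 1
pts2 = np.array([[300.], [240.]])    # matched position in camera 2

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4x1 result
print((X_h[:3] / X_h[3]).ravel())                 # -> [0, 0, 12] metres
```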
Can light-field photography ease focusing on the scalp and oral cavity?
Taheri, Arash; Feldman, Steven R
2013-08-01
Capturing a well-focused image using an autofocus camera can be difficult in the oral cavity and on a hairy scalp. Light-field digital cameras capture data regarding the color, intensity, and direction of rays of light. Having information regarding the direction of rays of light, computer software can be used to focus on different subjects in the field after the image data have been captured. A light-field camera was used to capture images of the scalp and oral cavity. The related computer software was used to focus on the scalp or on different parts of the oral cavity. The final pictures were compared with pictures taken with conventional compact digital cameras. The camera worked well for the oral cavity. It also captured pictures of the scalp easily; however, we had to repeatedly click between the hairs at different points to select the scalp for focusing. A major drawback of the system was that the resolution of the resulting pictures was lower than that of conventional digital cameras. Light-field digital cameras are fast and easy to use. They can capture more information over the full depth of field compared with conventional cameras. However, the resolution of the pictures is relatively low. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
SeaVipers- Computer Vision and Inertial Position/Reference Sensor System (CVIPRSS)
2015-08-01
uses an Inertial Measurement Unit (IMU) to detect changes in roll, pitch, and yaw (x-, y-, and z-axis movement). We use a 9DOF Razor IMU from SparkFun... inertial measurement unit (IMU) and cameras that are hardware synchronized to provide close coupling. Several fast food companies, Internet giants like... light cameras [32]. 4.1.4 Inertial Measurement Unit: To assist the PTU in video stabilization for the camera and aiming the rangefinder, SeaVipers...
History of the formerly top secret KH-9 Hexagon spy satellite
NASA Astrophysics Data System (ADS)
Pressel, Phil
2014-12-01
This paper is about the development, design, fabrication and use of the KH-9 Hexagon spy-in-the-sky satellite camera system, which was finally declassified by the National Reconnaissance Office on September 17, 2011, twenty-five years after the program ended. It was the last film-based reconnaissance camera and was known by experts in the field as "the most complicated system ever put up in orbit." It provided important intelligence for the United States government; it was the reason that President Nixon was able to sign the SALT treaty, and when President Reagan said "Trust but Verify" it provided the means of verification. Each satellite weighed 30,000 pounds and carried two cameras, thereby permitting photographs of the entire landmass of the earth to be taken in stereo. Each camera carried up to 30 miles of film, for a total of 60 miles of film. Ultra-complex mechanisms controlled the structurally "wimpy" film, which traveled at speeds up to 204 inches per second at the focal plane and was perfectly synchronized to the optical image.
Maurice, S.; Wiens, R.C.; Saccoccio, M.; Barraclough, B.; Gasnault, O.; Forni, O.; Mangold, N.; Baratoux, D.; Bender, S.; Berger, G.; Bernardin, J.; Berthé, M.; Bridges, N.; Blaney, D.; Bouyé, M.; Caïs, P.; Clark, B.; Clegg, S.; Cousin, A.; Cremers, D.; Cros, A.; DeFlores, L.; Derycke, C.; Dingler, B.; Dromart, G.; Dubois, B.; Dupieux, M.; Durand, E.; d'Uston, L.; Fabre, C.; Faure, B.; Gaboriaud, A.; Gharsa, T.; Herkenhoff, K.; Kan, E.; Kirkland, L.; Kouach, D.; Lacour, J.-L.; Langevin, Y.; Lasue, J.; Le Mouélic, S.; Lescure, M.; Lewin, E.; Limonadi, D.; Manhès, G.; Mauchien, P.; McKay, C.; Meslin, P.-Y.; Michel, Y.; Miller, E.; Newsom, Horton E.; Orttner, G.; Paillet, A.; Parès, L.; Parot, Y.; Pérez, R.; Pinet, P.; Poitrasson, F.; Quertier, B.; Sallé, B.; Sotin, Christophe; Sautter, V.; Séran, H.; Simmonds, J.J.; Sirven, J.-B.; Stiglich, R.; Striebig, N.; Thocaven, J.-J.; Toplis, M.J.; Vaniman, D.
2012-01-01
ChemCam is a remote sensing instrument suite on board the "Curiosity" rover (NASA) that uses Laser-Induced Breakdown Spectroscopy (LIBS) to provide the elemental composition of soils and rocks at the surface of Mars from a distance of 1.3 to 7 m, and a telescopic imager to return high-resolution context and micro-images at distances greater than 1.16 m. We describe five analytical capabilities: rock classification, quantitative composition, depth profiling, context imaging, and passive spectroscopy. They serve as a toolbox to address most of the science questions at Gale crater. ChemCam consists of a Mast-Unit (laser, telescope, camera, and electronics) and a Body-Unit (spectrometers, digital processing unit, and optical demultiplexer), which are connected by an optical fiber and an electrical interface. We then report on the development, integration, and testing of the Mast-Unit, and summarize some key characteristics of ChemCam. This confirmed that nominal or better than nominal performances were achieved for critical parameters, in particular power density (>1 GW/cm²). The analysis spot diameter varies from 350 μm at 2 m to 550 μm at 7 m distance. For remote imaging, the camera field of view is 20 mrad for 1024×1024 pixels. Field tests demonstrated that the resolution (~90 μrad) made it possible to identify laser shots on a wide variety of images. This is sufficient for visualizing laser shot pits and textures of rocks and soils. An auto-exposure capability optimizes the dynamic range of the images. Dedicated hardware and software focus the telescope, with precision that is appropriate for the LIBS and imaging depths-of-field. The light emitted by the plasma is collected and sent to the Body-Unit via a 6 m optical fiber. The companion to this paper (Wiens et al., this issue) reports on the development of the Body-Unit, on the analysis of the emitted light, and on the good match between instrument performance and science specifications.
Compact Autonomous Hemispheric Vision System
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.
2012-01-01
Solar System Exploration camera implementations to date have involved either single cameras with a wide field of view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a FOV of 92°, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
Light field rendering with omni-directional camera
NASA Astrophysics Data System (ADS)
Todoroki, Hiroshi; Saito, Hideo
2003-06-01
This paper presents an approach to capturing the visual appearance of a real environment, such as the interior of a room. We propose a method for generating arbitrary-viewpoint images by building a light field with an omni-directional camera, which can capture wide surroundings. The omni-directional camera used in this technique is a special camera with a hyperbolic mirror in its upper part, so that we can capture the luminosity of the environment over 360 degrees of circumference in one image. We apply the light field method, one technique of Image-Based Rendering (IBR), to generate the arbitrary-viewpoint images. The light field is a kind of database that records the luminosity information in the object space. We employ the omni-directional camera for constructing the light field, so that we can collect images from many view directions in the light field. Thus our method allows the user to explore a wide scene, achieving a realistic representation of the virtual environment. To demonstrate the proposed method, we captured an image sequence of our lab's interior with an omni-directional camera, and successfully generated arbitrary-viewpoint images for a virtual tour of the environment.
MEGARA: the new multi-object and integral field spectrograph for GTC
NASA Astrophysics Data System (ADS)
Carrasco, E.; Páez, G.; Izazaga-Pérez, R.; Gil de Paz, A.; Gallego, J.; Iglesias-Páramo, J.
2017-07-01
MEGARA is an optical integral-field unit and multi-object spectrograph for the 10.4 m Gran Telescopio Canarias. Both observational modes will provide identical spectral resolutions of R_fwhm ~ 6,000, 12,000 and 18,700. The spectrograph is a collimator-camera system. The unique characteristics of MEGARA in terms of throughput and versatility make this instrument the most efficient tool to date to analyze astrophysical objects at intermediate spectral resolutions. The instrument is currently at the telescope for on-sky commissioning. Here we describe the as-built main characteristics of the instrument.
3D vision upgrade kit for the TALON robot system
NASA Astrophysics Data System (ADS)
Bodenhamer, Andrew; Pettijohn, Bradley; Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott
2010-02-01
In September 2009 the Fort Leonard Wood Field Element of the US Army Research Laboratory - Human Research and Engineering Directorate, in conjunction with Polaris Sensor Technologies and Concurrent Technologies Corporation, evaluated the objective performance benefits of Polaris' 3D vision upgrade kit for the TALON small unmanned ground vehicle (SUGV). This upgrade kit is a field-upgradable set of two stereo-cameras and a flat panel display, using only standard hardware, data and electrical connections existing on the TALON robot. Using both the 3D vision system and a standard 2D camera and display, ten active-duty Army Soldiers completed seven scenarios designed to be representative of missions performed by military SUGV operators. Mission time savings (6.5% to 32%) were found for six of the seven scenarios when using the 3D vision system. Operators were not only able to complete tasks quicker but, for six of seven scenarios, made fewer mistakes in their task execution. Subjective Soldier feedback was overwhelmingly in support of pursuing 3D vision systems, such as the one evaluated, for fielding to combat units.
Space telescope phase B definition study. Volume 2A: Science instruments, f24 field camera
NASA Technical Reports Server (NTRS)
Grosso, R. P.; Mccarthy, D. J.
1976-01-01
The analysis and design of the F/24 field camera for the space telescope are discussed. The camera was designed for application to the radial bay of the optical telescope assembly and has an on-axis field of view of 3 arc-minutes by 3 arc-minutes.
Hand portable thin-layer chromatography system
Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.
2000-01-01
A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.
Do you think you have what it takes to set up a long-term video monitoring unit?
Smith, Sheila L
2006-03-01
The single most important factor when setting up a long-term video monitoring unit is research. Research all vendors by traveling to other sites and calling other facilities. Considerations with equipment include the server, acquisition units, review units, cameras, software, and monitors as well as other factors including Health Insurance Portability and Accountability Act (HIPAA) compliance. Research customer support including both field and telephone support. Involve your Clinical Engineering Department in your investigations. Be sure to obtain warranty information. Researching placement of the equipment is essential. Communication with numerous groups is vital. Administration, engineers, clinical engineering, physicians, infection control, environmental services, house supervisors, security, and all involved parties should be involved in the planning.
MuSICa at GRIS: a prototype image slicer for EST at GREGOR
NASA Astrophysics Data System (ADS)
Calcines, A.; Collados, M.; López, R. L.
2013-05-01
This communication presents a prototype image slicer for the 4-m European Solar Telescope (EST), designed for the spectrograph of the 1.5-m GREGOR solar telescope (GRIS). The design of this integral field unit has been called MuSICa (Multi-Slit Image slicer based on collimator-Camera). It is a telecentric system developed specifically for the integral-field, high-resolution spectrograph of EST and presents multi-slit capability, reorganizing a bidimensional field of view of 80 arcsec² into 8 slits, each of them 200 arcsec long × 0.05 arcsec wide. It minimizes the number of optical components needed to fulfil this multi-slit capability to three arrays of mirrors: slicer, collimator and camera mirror arrays (the first flat and the other two spherical). The symmetry of the layout makes it possible to overlap the pupil images associated with each part of the sliced entrance field of view. A mask with only one circular aperture is placed at the pupil position. This symmetric characteristic offers some advantages: it facilitates the manufacturing process and the alignment, and it reduces the costs. In addition, it is compatible with two modes of operation, spectroscopic and spectro-polarimetric, offering great versatility. The optical quality of the system is diffraction-limited. The prototype will improve the performance of GRIS at GREGOR and is part of the feasibility study of the integral field unit for the spectrographs of EST. Although MuSICa has been designed as a solar image slicer, its concept can also be applied to night-time astronomical instruments (Collados et al. 2010, Proc. SPIE, Vol. 7733, 77330H; Collados et al. 2012, AN, 333, 901; Calcines et al. 2010, Proc. SPIE, Vol. 7735, 77351X).
Telephoto lens view of Silver Spur in the Hadley Delta region from Apollo 15
1971-07-31
AS15-84-11250 (31 July-2 Aug. 1971) --- A telephoto lens view of the prominent feature called Silver Spur in the Hadley Delta region, photographed during the Apollo 15 lunar surface extravehicular activity (EVA) at the Hadley-Apennine landing site. The distance from the camera to the spur is about 10 miles. The field of view across the bottom is about one mile. Structural formations in the mountain are clearly visible. There are two major units. The upper unit is characterized by massive subunits, each one of which is approximately 200 feet deep. The lower major unit is characterized by thinner bedding and cross bedding.
The Wide Field Imager instrument for Athena
NASA Astrophysics Data System (ADS)
Meidinger, Norbert; Barbera, Marco; Emberger, Valentin; Fürmetz, Maria; Manhart, Markus; Müller-Seidlitz, Johannes; Nandra, Kirpal; Plattner, Markus; Rau, Arne; Treberspurg, Wolfgang
2017-08-01
ESA's next large X-ray mission, ATHENA, is designed to address the Cosmic Vision science theme 'The Hot and Energetic Universe'. It will provide answers to two key astrophysical questions: how does ordinary matter assemble into the large-scale structures we see today, and how do black holes grow and shape the Universe? The ATHENA spacecraft will be equipped with two focal plane cameras, a Wide Field Imager (WFI) and an X-ray Integral Field Unit (X-IFU). The WFI instrument is optimized for state-of-the-art resolution spectroscopy over a large field of view of 40 amin x 40 amin and high count rates up to and beyond 1 Crab source intensity. The cryogenic X-IFU camera is designed for high-spectral-resolution imaging. Both cameras alternately share a mirror system based on silicon pore optics with a focal length of 12 m and a large effective area of about 2 m² at an energy of 1 keV. Although the mission is still in phase A, i.e. studying feasibility and developing the necessary technology, the definition and development of the instrumentation have already made significant progress. The WFI focal plane camera described here covers the energy band from 0.2 keV to 15 keV with 450 μm thick, fully depleted, back-illuminated silicon active pixel sensors of DEPFET type. The spatial resolution will be provided by one million pixels, each with a size of 130 μm x 130 μm. The time resolution requirement is 5 ms for the WFI large detector array and 80 μs for the WFI fast detector. The large effective area of the mirror system will be complemented by a high quantum efficiency above 90% for medium and higher energies. The status of the various WFI subsystems to achieve this performance will be described and recent changes will be explained here.
High-performance dual-speed CCD camera system for scientific imaging
NASA Astrophysics Data System (ADS)
Simpson, Raymond W.
1996-03-01
Traditionally, scientific camera systems were partitioned into a "camera head" containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 × 10⁶ or 5 × 10⁶ pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10⁵ pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber optic link.
Relating transverse ray error and light fields in plenoptic camera images
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim; Tyo, J. Scott
2013-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well-corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
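As a hedged illustration of the 4D indexing described above (a minimal sketch assuming an idealized sensor whose lenslet sub-images tile the pixel grid exactly; real decoders must first calibrate lenslet centers, rotation, and vignetting), the raw plenoptic image can be reshaped into a light field L[s, t, u, v]:

```python
import numpy as np

# Decode a plenoptic raw image into a 4D light field L[s, t, u, v]:
# (s, t) index the lenslet (spatial) grid, (u, v) index the pixels under
# each lenslet (angular samples of the exit pupil).
def decode_light_field(raw, n_s, n_t, p):
    """raw: 2D array of shape (n_s * p, n_t * p); p = pixels per lenslet."""
    lf = raw.reshape(n_s, p, n_t, p)   # split rows/cols into lenslet blocks
    return lf.transpose(0, 2, 1, 3)    # reorder to (s, t, u, v)

raw = np.random.rand(30 * 8, 40 * 8)   # synthetic: 30 x 40 lenslets, 8 x 8 angular
L = decode_light_field(raw, 30, 40, 8)
refocused = L.mean(axis=(2, 3))        # naive refocus: average the angular samples
print(L.shape, refocused.shape)        # (30, 40, 8, 8) (30, 40)
```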
Menéndez, Cammie Chaumont; Amandus, Harlan; Damadi, Parisa; Wu, Nan; Konda, Srinivas; Hendricks, Scott
2014-05-01
Driving a taxicab remains one of the most dangerous occupations in the United States, with homicide rates among the highest of any occupation. Although safety equipment designed to reduce robberies exists, it is not clear what effect it has on reducing taxicab driver homicides. Taxicab driver homicide crime reports for 1996 through 2010 were collected from 20 of the largest cities (populations >200,000) in the United States: 7 cities with cameras installed in cabs, 6 cities with partitions installed, and 7 cities with neither cameras nor partitions. Poisson regression modeling using generalized estimating equations provided city taxicab driver homicide rates while accounting for serial correlation and clustering of data within cities. Two separate models were constructed to compare (1) cities with cameras installed in taxicabs versus cities with neither cameras nor partitions and (2) cities with partitions installed in taxicabs versus cities with neither cameras nor partitions. Cities with cameras installed in cabs experienced a significant reduction in homicides after cameras were installed (adjRR = 0.11, CL 0.06-0.24) and compared with cities with neither cameras nor partitions (adjRR = 0.32, CL 0.15-0.67). Cities with partitions installed in taxicabs experienced a reduction in homicides (adjRR = 0.78, CL 0.41-1.47) compared with cities with neither cameras nor partitions, but it was not statistically significant. The findings suggest cameras installed in taxicabs are highly effective in reducing homicides among taxicab drivers. Although not statistically significant, the findings suggest partitions installed in taxicabs may be effective.
Improving accuracy of Plenoptic PIV using two light field cameras
NASA Astrophysics Data System (ADS)
Thurow, Brian; Fahringer, Timothy
2017-11-01
Plenoptic particle image velocimetry (PIV) has recently emerged as a viable technique for acquiring three-dimensional, three-component velocity field data using a single plenoptic, or light field, camera. The simplified experimental arrangement is advantageous in situations where optical access is limited and/or it is not possible to set up the four or more cameras typically required in a tomographic PIV experiment. A significant disadvantage of a single-camera plenoptic PIV experiment, however, is that the accuracy of the velocity measurement along the optical axis of the camera is significantly worse than in the two lateral directions. In this work, we explore the accuracy of plenoptic PIV when two plenoptic cameras are arranged in a stereo imaging configuration. It is found that the addition of a second camera improves the accuracy in all three directions and nearly eliminates any differences between them. This improvement is illustrated with synthetic data and with real experiments conducted on a vortex ring using one and two plenoptic cameras.
Applications of digital image acquisition in anthropometry
NASA Technical Reports Server (NTRS)
Woolford, B.; Lewis, J. L.
1981-01-01
A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.
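The geometric core of such a system can be sketched as a ray-intersection problem (an illustration only; the instrument itself works from comparator scan voltages, and the camera origins and ray directions below are invented values):

```python
import numpy as np

# With idealized, axis-aligned pinhole cameras, the 'Y' camera and either
# 'X' camera each constrain a viewing ray toward the bright spot; the 3D
# point is recovered as the least-squares point nearest both rays.
def closest_point_to_rays(origins, dirs):
    """Least-squares point nearest a set of rays (origin + t * dir)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

origins = [np.array([-1.0, 0, 0]), np.array([0, -1.0, 0])]      # X and Y cameras
dirs = [np.array([1.0, 0.2, 0.1]), np.array([0.2, 1.0, 0.1])]   # rays toward spot
print(closest_point_to_rays(origins, dirs))
```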
Investigation of the flow structure in thin polymer films using 3D µPTV enhanced by GPU
NASA Astrophysics Data System (ADS)
Cavadini, Philipp; Weinhold, Hannes; Tönsmann, Max; Chilingaryan, Suren; Kopmann, Andreas; Lewkowicz, Alexander; Miao, Chuan; Scharfer, Philip; Schabel, Wilhelm
2018-04-01
To understand the effects of inhomogeneous drying on the quality of polymer coatings, an experimental setup has been developed to resolve the flow field occurring throughout the drying film. Deconvolution microscopy is used to analyze the flow field in 3D and time. Since the extent of the spatial component along the line of sight is limited compared to the lateral components, a multi-focal approach is used. Here, the beam of light is distributed equally over up to five cameras using cubic beam splitters. Adding a meniscus lens between each pair of camera and beam splitter, and setting different distances between each camera and its meniscus lens, creates multi-focality and allows one to increase the depth of the observed volume. Resolving the spatial component in the line-of-sight direction is based on analyzing the point spread function (PSF). The analysis of the PSF is computationally expensive and introduces high complexity compared to traditional particle image velocimetry approaches. A new algorithm tailored to the parallel computing architecture of recent graphics processing units has been developed. The algorithm is able to process typical images in less than a second and has further potential to realize online analysis in the future. As a proof of principle, the flow fields occurring in thin polymer solutions drying at ambient conditions, and at boundary conditions that force inhomogeneous drying, are presented.
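The depth-from-PSF step can be caricatured as inverting a calibration curve of blur-spot radius versus distance from the focal plane (a toy sketch with invented numbers; the sign ambiguity about the focal plane is part of what the multi-focal five-camera arrangement resolves, and the real analysis fits the full PSF on the GPU):

```python
import numpy as np

# Out-of-focus particles image to blur spots whose radius grows with
# distance from the focal plane; a monotonic calibration branch radius(z),
# measured with particles at known depths, can be inverted to estimate z.
z_cal = np.linspace(0.0, 50e-6, 11)   # one-sided calibration depths [m]
r_cal = 2e-6 + 0.08 * z_cal           # assumed blur radius vs depth [m]

def depth_from_radius(r):
    """Invert the monotonic one-sided calibration branch."""
    return np.interp(r, r_cal, z_cal)

print(depth_from_radius(4e-6))        # -> 2.5e-05 m for these toy values
```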
Constrained optimization for position calibration of an NMR field camera.
Chang, Paul; Nassirpour, Sahar; Eschelbach, Martin; Scheffler, Klaus; Henning, Anke
2018-07-01
Knowledge of the positions of field probes in an NMR field camera is necessary for monitoring the B0 field. The typical method of estimating these positions is by switching gradients of known strength and calculating the positions from the phases of the FIDs. We investigated improving the accuracy of the probe position estimates and analyzed the effect of inaccurate estimates on field monitoring. The field probe positions were estimated by 1) assuming ideal gradient fields, 2) using measured gradient fields (including nonlinearities), and 3) using measured gradient fields with relative position constraints. The fields measured with the NMR field camera were compared to fields acquired using a dual-echo gradient-recalled echo B0 mapping sequence. Comparisons were done for shim fields from second- to fourth-order shim terms. The position estimation was most accurate when relative position constraints were used in conjunction with measured (nonlinear) gradient fields. The effect of more accurate position estimates was seen when compared to fields measured using a B0 mapping sequence (up to 10%-15% more accurate for some shim fields). The models acquired from the field camera are sensitive to noise due to the low number of spatial sample points. Position estimation of field probes in an NMR camera can be improved using relative position constraints and nonlinear gradient fields. Magn Reson Med 80:380-390, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
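For context, the baseline (ideal-gradient) position estimate that the paper improves on reduces to a small linear system; a minimal sketch with toy values, assuming the extra FID phase accrued during a switched gradient is phi = gamma * (G . r) * tau:

```python
import numpy as np

# Stacking three (or more) known gradient directions G gives a linear
# system for the probe position r. gamma is the proton gyromagnetic ratio;
# gradient strengths, duration, and the true position are invented values.
gamma = 2 * np.pi * 42.577e6          # rad / s / T
tau = 1e-3                            # gradient-on duration [s]
G = np.array([[10e-3, 0, 0],          # applied gradient vectors [T/m]
              [0, 10e-3, 0],
              [0, 0, 10e-3]])
r_true = np.array([0.02, -0.01, 0.05])
phi = gamma * tau * G @ r_true        # simulated measured phases [rad]

r_est, *_ = np.linalg.lstsq(gamma * tau * G, phi, rcond=None)
print(r_est)                          # recovers r_true in the ideal case
```

The paper's point is that real gradients are nonlinear, so using measured gradient fields plus relative position constraints between probes outperforms this idealized inversion.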
Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras
2017-10-01
ARL-TR-8185, US Army Research Laboratory, October 2017, by Caitlin P Conn and Geoffrey H Goldman. Technical report presenting field test data, collected between June 2016 and October 2017, for detecting vibrations of a building using high-speed video cameras.
NASA Astrophysics Data System (ADS)
Wojciechowski, Adam M.; Karadas, Mürsel; Huck, Alexander; Osterkamp, Christian; Jankuhn, Steffen; Meijer, Jan; Jelezko, Fedor; Andersen, Ulrik L.
2018-03-01
Sensitive, real-time optical magnetometry with nitrogen-vacancy centers in diamond relies on accurate imaging of small (≪10^-2) fractional fluorescence changes across the diamond sample. We discuss the limitations on magnetic field sensitivity resulting from the limited number of photoelectrons that a camera can record in a given time. Several types of camera sensors are analyzed, and the smallest measurable magnetic field change is estimated for each type. We show that most common sensors are of limited use in such applications, while certain highly specific cameras allow nanotesla-level sensitivity to be achieved in 1 s of combined exposure. Finally, we demonstrate results obtained with a lock-in camera that pave the way for real-time, wide-field magnetometry at the nanotesla level and with micrometer resolution.
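The photoelectron-budget argument can be sketched with a back-of-the-envelope shot-noise estimate (all numbers below are illustrative assumptions, not the paper's sensor parameters):

```python
import numpy as np

# With N photoelectrons collected, shot noise limits the detectable
# fractional fluorescence change to ~1/sqrt(N); dividing by the slope of
# the ODMR feature (contrast C over linewidth dnu) converts that into a
# minimum detectable magnetic field change.
full_well = 2e4            # e- per pixel per frame (assumed sensor)
pixels = 1e6
frames = 1e3               # frames accumulated in ~1 s of combined exposure
N = full_well * pixels * frames

C = 0.02                   # assumed ODMR contrast
dnu = 1e6                  # assumed ODMR linewidth [Hz]
gamma_nv = 28e9            # NV gyromagnetic ratio [Hz/T]

dB = dnu / (gamma_nv * C) / np.sqrt(N)
print(f"minimum detectable field ~ {dB * 1e9:.2f} nT")   # ~0.40 nT here
```

This makes the abstract's point concrete: the achievable sensitivity scales with the square root of the total photoelectron count, which is why deep-well or lock-in sensors matter more than pixel count alone.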
ERIC Educational Resources Information Center
Squibb, Matt
2009-01-01
This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)
Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio; Rispoli, Attilio
2010-01-01
This paper presents an innovative method for estimating the attitude of airborne electro-optical cameras with respect to the onboard autonomous navigation unit. The procedure is based on the use of attitude measurements taken under static conditions by an inertial unit, together with carrier-phase differential Global Positioning System, to obtain accurate camera position estimates in the aircraft body reference frame, while image analysis allows line-of-sight unit vectors in the camera-based reference frame to be computed. The method has been applied to the alignment of the visible and infrared cameras installed onboard the experimental aircraft of the Italian Aerospace Research Center and adopted for in-flight obstacle detection and collision avoidance. Results show an angular uncertainty on the order of 0.1° (rms). PMID:22315559
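The underlying alignment step is an instance of Wahba's problem: find the rotation that best maps line-of-sight unit vectors between the two frames. A hedged sketch of the standard SVD (Kabsch) solution, not necessarily the authors' exact estimator:

```python
import numpy as np

# Given unit vectors v_cam in the camera frame and the same directions
# v_body in the body frame, find R minimizing sum ||v_body - R v_cam||^2.
def solve_wahba(v_cam, v_body):
    B = v_body.T @ v_cam                   # 3x3 attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))     # enforce a proper rotation
    return U @ np.diag([1, 1, d]) @ Vt

rng = np.random.default_rng(0)
v_cam = rng.normal(size=(10, 3))
v_cam /= np.linalg.norm(v_cam, axis=1, keepdims=True)
a = np.deg2rad(5.0)                        # synthetic 5-degree misalignment
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
v_body = v_cam @ R_true.T
print(np.allclose(solve_wahba(v_cam, v_body), R_true))   # True
```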
Situational Awareness from a Low-Cost Camera System
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.; Ward, David; Lesage, John
2010-01-01
A method gathers scene information from a low-cost camera system. Existing surveillance systems using enough cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. In a newly developed system, each camera is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security cameras. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of an event is reported to the host computer in Cartesian coordinates computed from data correlated across multiple cameras. In this way, events in the field of view present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, because a single cable is shared for power and data, installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
VizieR Online Data Catalog: Isaac Newton Telescope Wide Field Survey (CASU 2002)
NASA Astrophysics Data System (ADS)
Cambridge Astronomical Survey Unit
2002-04-01
The INT Wide Field Survey (WFS) uses the Wide Field Camera (~0.3 square degrees) on the 2.5-m Isaac Newton Telescope (INT). The project was initiated in August 1998 and is expected to have a duration of up to five years. Multicolour data will be obtained over 200+ square degrees to a typical depth of ~25 mag (u' through z'). The data are publicly accessible via the Cambridge Astronomical Survey Unit to the UK and NL communities from day one, with access for the rest of the world after one year. This observation log lists all observations older than the one-year proprietary period. (1 data file).
3D surface pressure measurement with single light-field camera and pressure-sensitive paint
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Xu, Shengming; Zhao, Zhou; Niu, Xiaofu; Quinn, Mark Kenneth
2018-05-01
A novel technique that simultaneously measures three-dimensional model geometry as well as surface pressure distribution with a single camera is demonstrated in this study. The technique takes advantage of light-field photography, which can capture three-dimensional information with a single light-field camera, and combines it with the intensity-based pressure-sensitive paint method. The proposed single-camera light-field three-dimensional pressure measurement technique (LF-3DPSP) utilises a hardware setup similar to the traditional two-dimensional pressure measurement technique, with the exception that the wind-on, wind-off and model geometry images are captured via an in-house-constructed light-field camera. The proposed LF-3DPSP technique was validated with a Mach 5 flared-cone model test. Results show that the technique is capable of measuring three-dimensional geometry with high accuracy for models with relatively large curvature, and the pressure results compare well with Schlieren tests, analytical calculations, and numerical simulations.
Innovation in robotic surgery: the Indian scenario.
Deshpande, Suresh V
2015-01-01
Robotics is the science: in scientific terms, a "robot" is an electromechanical arm device with a computer interface, a combination of electrical, mechanical, and computer engineering. It is a mechanical arm that performs tasks in industry, space exploration, and science. One such idea was to make an automated arm, a robot, in laparoscopy to control the telescope-camera unit electromechanically and then with a computer interface using voice control. It took us five long years from 2004 to bring it to the level of obtaining a patent. That was the birth of the Swarup Robotic Arm (SWARM), the first and only Indian contribution in the field of robotics in laparoscopy: a fully voice-controlled camera-holding robotic arm developed without any support from industry or research institutes.
Autonomous microsystems for ground observation (AMIGO)
NASA Astrophysics Data System (ADS)
Laou, Philips
2005-05-01
This paper reports the development of a prototype autonomous surveillance microsystem, AMIGO, that can be used for remote surveillance. Each AMIGO unit is equipped with various sensors and electronics, including a passive infrared motion sensor, an acoustic sensor, an uncooled IR camera, an electronic compass, a global positioning system (GPS) receiver, and a spread-spectrum wireless transceiver. The AMIGO units were configured in a multipoint (AMIGO units) to point (base station) communication mode. In addition, field trials were conducted with AMIGO in various scenarios, including personnel and vehicle intrusion detection (motion or sound) and target imaging; determination of target GPS position by triangulation; real-time GPS position tracking; entrance event counting; indoor surveillance; and aerial surveillance from a radio-controlled model plane. The architecture and test results of AMIGO are presented.
Single lens 3D-camera with extended depth-of-field
NASA Astrophysics Data System (ADS)
Perwaß, Christian; Wietzke, Lennart
2012-03-01
Placing a micro lens array in front of an image sensor transforms a normal camera into a single-lens 3D camera, which also allows the user to change the focus and the point of view after a picture has been taken. While the concept of such plenoptic cameras has been known since 1908, only recently have the increased computing power of low-cost hardware and advances in micro lens array production made the application of plenoptic cameras feasible. This text presents a detailed analysis of plenoptic cameras and introduces a new type of plenoptic camera with an extended depth of field and a maximal effective resolution of up to a quarter of the sensor resolution.
Wide-field fluorescent microscopy on a cell-phone.
Zhu, Hongying; Yaglidere, Oguzhan; Su, Ting-Wei; Tseng, Derek; Ozcan, Aydogan
2011-01-01
We demonstrate wide-field fluorescent imaging on a cell-phone, using compact and cost-effective optical components that are mechanically attached to the existing camera unit of the cell-phone. Battery-powered light-emitting diodes (LEDs) are used to side-pump the sample of interest using butt-coupling. The pump light is guided within the sample cuvette to excite the specimen uniformly. The fluorescent emission from the sample is then imaged with an additional lens that is put in front of the existing lens of the cell-phone camera. Because the excitation occurs through guided waves that propagate perpendicular to the detection path, an inexpensive plastic color filter is sufficient to create the dark-field background needed for fluorescent imaging. The imaging performance of this light-weight platform (~28 grams) is characterized with red and green fluorescent microbeads, achieving an imaging field of view of ~81 mm² and a spatial resolution of ~10 μm, which is enhanced through digital processing of the captured cell-phone images using compressive-sampling-based sparse signal recovery. We demonstrate the performance of this cell-phone fluorescent microscope by imaging labeled white blood cells separated from whole blood samples as well as water-borne pathogenic protozoan parasites such as Giardia lamblia cysts.
Static omnidirectional stereoscopic display system
NASA Astrophysics Data System (ADS)
Barton, George G.; Feldman, Sidney; Beckstead, Jeffrey A.
1999-11-01
A unique three-camera stereoscopic omnidirectional viewing system based on the periscopic panoramic camera described in the 11/98 SPIE proceedings (AM13) is presented. The three panoramic cameras are combined equilaterally so that each leg of the triangle approximates the human inter-ocular spacing, allowing each panoramic camera to view 240° of the panoramic scene, the most counterclockwise 120° being the left-eye field and the other 120° segment being the right-eye field. Field definition may be by green/red filtration or by time discrimination of the video signal. In the first instance, two-color spectacles are used to view the display; in the second, LCD goggles are used to differentiate the R/L fields. Radially scanned vidicons or re-mapped CCDs may be used. The display consists of three vertically stacked 120° segments of the panoramic field of view, with two fields per frame: Field A is the left-eye display and Field B the right-eye display.
External Mask Based Depth and Light Field Camera
2013-12-08
Report on an external-mask-based depth and light field camera. Only fragments of the text survive; they cover prior work on sampling the plenoptic function (surveyed by Wetzstein et al.), applications of high-spatial-resolution depth and light fields, and references including Pelican Imaging (http://www.pelicanimaging.com/) and Adelson and Wang's single-lens stereo with a plenoptic camera.
NASA Astrophysics Data System (ADS)
Ueno, Yuichiro; Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi; Fujishima, Yasutake; Yoshida, Akira; Umegaki, Kikuo
2018-06-01
We developed a pinhole-type gamma camera, using a compact detector module of a pixelated CdTe semiconductor, which has suitable sensitivity and quantitative accuracy for low dose rate fields. In order to improve the sensitivity of the pinhole-type semiconductor gamma camera, we adopted three methods: a signal processing method to set the discriminating level lower, a high-sensitivity pinhole collimator, and a smoothing image filter that improves the efficiency of source identification. We tested the basic performance of the developed gamma camera and carefully examined the effects of the three methods. From the sensitivity test, we found that the effective sensitivity was about 21 times higher than that of the gamma camera for high dose rate fields which we had previously developed. We confirmed that the gamma camera had sufficient sensitivity and high quantitative accuracy; for example, a weak hot spot (0.9 μSv/h) around a tree root could be detected within 45 min in a low dose rate field test, and errors of measured dose rates with point sources were less than 7% in a dose rate accuracy test.
The Subject Headings of the Morris Swett Library, USAFAS. Revised.
1980-05-15
Advanced Spacesuit Informatics Software Design for Power, Avionics and Software Version 2.0
NASA Technical Reports Server (NTRS)
Wright, Theodore W.
2016-01-01
A description of the software design for the 2016 edition of the Informatics computer assembly of NASA's Advanced Extravehicular Mobility Unit (AEMU), also called the Advanced Spacesuit. The Informatics system is an optional part of the spacesuit assembly. It adds a graphical interface for displaying suit status, timelines, procedures, and warning information. It also provides an interface to the suit-mounted camera for recording still images, video, and audio field notes.
ORAC-DR: A generic data reduction pipeline infrastructure
NASA Astrophysics Data System (ADS)
Jenness, Tim; Economou, Frossie
2015-03-01
ORAC-DR is a general purpose data reduction pipeline system designed to be instrument and observatory agnostic. The pipeline works with instruments as varied as infrared integral field units, imaging arrays and spectrographs, and sub-millimeter heterodyne arrays and continuum cameras. This paper describes the architecture of the pipeline system and the implementation of the core infrastructure. We finish by discussing the lessons learned since the initial deployment of the pipeline system in the late 1990s.
Minamisawa, T; Hirokaga, K
1996-06-01
The open-field activity of first-generation (F1) hybrid male C57BL/6 x C3H mice irradiated with gamma rays on the 14th day of gestation was studied at the following ages: 6-7 months, 12-13 months and 19-20 months. Doses were 0.1 Gy or 0.2 Gy. Open-field activity was recorded with a camera. The camera output signal was recorded every second through an A/D converter to a personal computer. The field was divided into 25 units, each 8 cm square. All recordings were continuous for 60 min. The time that the 0.2-Gy group, recorded at 6-7 months, spent in the four corner squares was high in comparison with the control group of the same age. The walking distance of the 0.1-Gy group recorded at 12-13 months was longer than that of the age-matched control group. No effect of radiation was found on any of the behaviors observed and recorded at 19-20 months. The results demonstrate that exposure to low levels of gamma rays on the 14th day of gestation results in behavioral changes which occur at 6-7 and 12-13 months but not at 19-20 months.
Quasi-microscope concept for planetary missions.
Huck, F O; Arvidson, R E; Burcher, E E; Giat, O; Wall, S D
1977-09-01
Viking lander cameras have returned stereo and multispectral views of the Martian surface with a resolution that approaches 2 mm/lp in the near field. A two-orders-of-magnitude increase in resolution could be obtained for collected surface samples by augmenting these cameras with auxiliary optics that would neither impose special camera design requirements nor limit the cameras' field of view of the terrain. Quasi-microscope images would provide valuable data on the physical and chemical characteristics of planetary regoliths.
Electronic Still Camera view of Aft end of Wide Field/Planetary Camera in HST
1993-12-06
S61-E-015 (6 Dec 1993) --- A close-up view of the aft part of the new Wide Field/Planetary Camera (WFPC-II) installed on the Hubble Space Telescope (HST). WFPC-II was photographed with the Electronic Still Camera (ESC) from inside Endeavour's cabin as astronauts F. Story Musgrave and Jeffrey A. Hoffman moved it from its stowage position onto the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Far ultraviolet wide field imaging and photometry - Spartan-202 Mark II Far Ultraviolet Camera
NASA Technical Reports Server (NTRS)
Carruthers, George R.; Heckathorn, Harry M.; Opal, Chet B.; Witt, Adolf N.; Henize, Karl G.
1988-01-01
The U.S. Naval Research Laboratory's Mark II Far Ultraviolet Camera, which is expected to be a primary scientific instrument aboard the Spartan-202 Space Shuttle mission, is described. This camera is intended to obtain FUV wide-field imagery of stars and extended celestial objects, including diffuse nebulae and nearby galaxies. The observations will support the HST by providing FUV photometry of calibration objects. The Mark II camera is an electrographic Schmidt camera with an aperture of 15 cm, a focal length of 30.5 cm, and sensitivity in the 1230-1600 A wavelength range.
2016-03-07
Peering deep into the early Universe, this picturesque parallel field observation from the NASA/ESA Hubble Space Telescope reveals thousands of colourful galaxies swimming in the inky blackness of space. A few foreground stars from our own galaxy, the Milky Way, are also visible. In October 2013 Hubble's Wide Field Camera 3 (WFC3) and Advanced Camera for Surveys (ACS) began observing this portion of sky as part of the Frontier Fields programme. This spectacular skyscape was captured during the study of the giant galaxy cluster Abell 2744, otherwise known as Pandora's Box. While one of Hubble's cameras concentrated on Abell 2744, the other camera viewed this adjacent patch of sky near to the cluster. Containing countless galaxies of various ages, shapes and sizes, this parallel field observation is nearly as deep as the Hubble Ultra-Deep Field. In addition to showcasing the stunning beauty of the deep Universe in incredible detail, this parallel field, when compared to other deep fields, will help astronomers understand how similar the Universe looks in different directions.
A Robust Mechanical Sensing System for Unmanned Sea Surface Vehicles
NASA Technical Reports Server (NTRS)
Kulczycki, Eric A.; Magnone, Lee J.; Huntsberger, Terrance; Aghazarian, Hrand; Padgett, Curtis W.; Trotz, David C.; Garrett, Michael S.
2009-01-01
The need for autonomous navigation and intelligent control of unmanned sea surface vehicles requires a mechanically robust sensing architecture that is watertight, durable, and insensitive to vibration and shock loading. The sensing system developed here comprises four black-and-white cameras and a single color camera. The cameras are rigidly mounted to a camera bar that can be reconfigured for mounting on multiple vehicles, and they act as both navigational cameras and application cameras. The cameras are housed in watertight casings to protect them and their electronics from moisture and wave splashes. Two of the black-and-white cameras are positioned to provide lateral vision. They are angled away from the front of the vehicle at horizontal angles to provide ideal fields of view for mapping and autonomous navigation. The other two black-and-white cameras are positioned at an angle into the color camera's field of view to support vehicle applications. These two cameras provide an overlap, as well as a backup to the front camera. The color camera is positioned directly in the middle of the bar, aimed straight ahead. This system is applicable to any sea-going vehicle, both on Earth and in space.
Event-Driven Random-Access-Windowing CCD Imaging System
NASA Technical Reports Server (NTRS)
Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William
2004-01-01
A charge-coupled-device (CCD) based high-speed imaging system, called a random-access, real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in "High-Frame-Rate CCD Camera Having Subwindow Capability" (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/transistor-logic (TTL)-level signals from a field-programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard Peripheral Component Interconnect (PCI) bus.
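Per-ROI readout bookkeeping of the kind described can be sketched as follows (field names and the clamping policy are assumptions for illustration, not the flight design):

```python
from dataclasses import dataclass

# A readout window that can be retargeted every frame to follow a tracked
# target, staying within the sensor bounds.
@dataclass
class ROI:
    x: int   # window origin, pixels
    y: int
    w: int   # window size, pixels
    h: int

SENSOR_W, SENSOR_H = 1024, 1024

def retarget(roi: ROI, cx: float, cy: float) -> ROI:
    """Re-center an ROI on a detected target centroid, clamped to the sensor."""
    x = min(max(int(cx - roi.w / 2), 0), SENSOR_W - roi.w)
    y = min(max(int(cy - roi.h / 2), 0), SENSOR_H - roi.h)
    return ROI(x, y, roi.w, roi.h)

roi = ROI(0, 0, 64, 64)
print(retarget(roi, 500.3, 900.8))   # window follows the target frame-to-frame
```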
On HMI's Mod-L Sequence: Test and Evaluation
NASA Astrophysics Data System (ADS)
Liu, Yang; Baldner, Charles; Bogart, R. S.; Bush, R.; Couvidat, S.; Duvall, Thomas L.; Hoeksema, Jon Todd; Norton, Aimee Ann; Scherrer, Philip H.; Schou, Jesper
2016-05-01
The HMI Mod-L sequence can produce full Stokes parameters at a cadence of 90 seconds by combining filtergrams from both cameras, the front camera and the side camera. Within the 90-second cadence, the front camera takes two sets of Left and Right Circular Polarization (LCP and RCP) filtergrams at 6 wavelengths; the side camera takes one set of Linear Polarizations (I±Q and I±U) at 6 wavelengths. By combining the two cameras, one can obtain the full Stokes parameters [I,Q,U,V] at 6 wavelengths in 90 seconds. In the nominal Mod-C sequence that HMI currently uses, the front camera takes LCP and RCP at a cadence of 45 seconds, while the side camera observes the full Stokes vector at a cadence of 135 seconds. Mod-L should be better than Mod-C for providing vector magnetic field data because (1) Mod-L increases the cadence of the full-Stokes observation, which leads to higher temporal resolution of the vector magnetic field measurement; and (2) it decreases noise in the vector magnetic field data because it uses more filtergrams to produce [I,Q,U,V]. There are two potential issues in Mod-L that need to be addressed: (1) scaling the intensity of the two cameras' filtergrams; and (2) whether the current polarization calibration model, which is built for each camera separately, works for the combined data from both cameras. This presentation addresses these questions and discusses them further.
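As a hedged illustration of how the two cameras' measurements combine into Stokes parameters (standard demodulation identities; HMI's actual pipeline applies calibrated demodulation matrices per camera):

```latex
% Front camera (circular pair) and side camera (linear pairs):
\[
  I = \tfrac{1}{2}\big[(I+V) + (I-V)\big], \qquad
  V = \tfrac{1}{2}\big[(I+V) - (I-V)\big],
\]
\[
  Q = \tfrac{1}{2}\big[(I+Q) - (I-Q)\big], \qquad
  U = \tfrac{1}{2}\big[(I+U) - (I-U)\big].
\]
```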
Viking lander camera radiometry calibration report, volume 2
NASA Technical Reports Server (NTRS)
Wolf, M. R.; Atwood, D. L.; Morrill, M. E.
1977-01-01
The requirements, performance validation, and interfaces for the RADCAM program, which converts Viking lander camera image data to radiometric units, were established. A proposed algorithm is described, and an appendix summarizing the planned reduction of camera test data is included.
Smart-Phone Based Magnetic Levitation for Measuring Densities.
Knowlton, Stephanie; Yu, Chu Hsiang; Jain, Nupur; Ghiran, Ionita Calin; Tasoglu, Savas
2015-01-01
Magnetic levitation, which uses a magnetic field to suspend objects in a fluid, is a powerful and versatile technology. We develop a compact magnetic levitation platform compatible with a smart-phone to separate micro-objects and estimate the density of the sample based on its levitation height. A 3D printed attachment is mechanically installed over the existing camera unit of a smart-phone. Micro-objects, which may be either spherical or irregular in shape, are suspended in a paramagnetic medium and loaded in a microcapillary tube which is then inserted between two permanent magnets. The micro-objects are levitated and confined in the microcapillary at an equilibrium height dependent on their volumetric mass densities (causing a buoyancy force toward the edge of the microcapillary) and magnetic susceptibilities (causing a magnetic force toward the center of the microcapillary) relative to the suspending medium. The smart-phone camera captures magnified images of the levitating micro-objects through an additional lens positioned between the sample and the camera lens cover. A custom-developed Android application then analyzes these images to determine the levitation height and estimate the density. Using this platform, we were able to separate microspheres with varying densities and calibrate their levitation heights to known densities to develop a technique for precise and accurate density estimation. We have also characterized the magnetic field, the optical imaging capabilities, and the thermal state over time of this platform.
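The calibration approach described above amounts to fitting a line between levitation height and known density, then inverting it for unknown samples; a minimal sketch with invented calibration numbers:

```python
import numpy as np

# In this magnet geometry, levitation height varies approximately linearly
# with density, so beads of known density give a calibration line that maps
# a measured height to a density estimate.
h_cal = np.array([0.9, 1.5, 2.1, 2.8])        # levitation heights [mm]
rho_cal = np.array([1.09, 1.06, 1.03, 1.00])  # known bead densities [g/cm^3]

slope, intercept = np.polyfit(h_cal, rho_cal, 1)

def density_from_height(h_mm):
    """Estimate sample density from its measured levitation height."""
    return slope * h_mm + intercept

print(f"{density_from_height(1.8):.3f} g/cm^3")
```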
Overview of the Multi-Spectral Imager on the NEAR spacecraft
NASA Astrophysics Data System (ADS)
Hawkins, S. E., III
1996-07-01
The Multi-Spectral Imager on the Near Earth Asteroid Rendezvous (NEAR) spacecraft is a 1 Hz frame rate CCD camera sensitive in the visible and near infrared bands (~400-1100 nm). MSI is the primary instrument on the spacecraft to determine the morphology and composition of the surface of asteroid 433 Eros. In addition, the camera will be used to assist in navigation to the asteroid. The instrument uses refractive optics and has an eight-position spectral filter wheel to select different wavelength bands. The MSI optical focal length of 168 mm gives a 2.9° × 2.25° field of view. The CCD is passively cooled and the 537 × 244 pixel array output is digitized to 12 bits. Electronic shuttering increases the effective dynamic range of the instrument by more than a factor of 100. A one-time deployable cover protects the instrument during ground testing operations and launch. A reduced-aperture viewport permits full field of view imaging while the cover is in place. A Data Processing Unit (DPU) provides the digital interface between the spacecraft and the Camera Head and uses an RTX2010 processor. The DPU provides an eight-frame image buffer, lossy and lossless data compression routines, and automatic exposure control. An overview of the instrument is presented and design parameters and trade-offs are discussed.
NASA Astrophysics Data System (ADS)
Bellini, A.; Bedin, L. R.
2010-07-01
High-precision astrometry requires an accurate geometric-distortion solution. In this work, we present an average correction for the blue camera of the Large Binocular Telescope which enables a relative astrometric precision of ~15 mas for the B-Bessel and V-Bessel broad-band filters. The result of this effort is used in two companion papers: the first to measure the absolute proper motion of the open cluster M 67 with respect to the background galaxies; the second to decontaminate the color-magnitude diagram of M 67 from field objects, enabling the study of the end of its white dwarf cooling sequence. Many other applications might find this distortion correction useful. Based on data acquired using the Large Binocular Telescope (LBT) at Mt. Graham, Arizona, during commissioning of the Large Binocular Blue Camera. The LBT is an international collaboration among institutions in the United States, Italy and Germany. LBT Corporation partners are: The University of Arizona on behalf of the Arizona university system; Istituto Nazionale di Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany, representing the Max Planck Society, the Astrophysical Institute Potsdam, and Heidelberg University; The Ohio State University; and The Research Corporation, on behalf of The University of Notre Dame, University of Minnesota and University of Virginia.
Jolliff, B.; Knoll, A.; Morris, R.V.; Moersch, J.; McSween, H.; Gilmore, M.; Arvidson, R.; Greeley, R.; Herkenhoff, K.; Squyres, S.
2002-01-01
Blind field tests of the Field Integration Design and Operations (FIDO) prototype Mars rover were carried out 7-16 May 2000. A Core Operations Team (COT), sequestered at the Jet Propulsion Laboratory without knowledge of test site location, prepared command sequences and interpreted data acquired by the rover. Instrument sensors included a stereo panoramic camera, navigational and hazard-avoidance cameras, a color microscopic imager, an infrared point spectrometer, and a rock coring drill. The COT designed command sequences, which were relayed by satellite uplink to the rover, and evaluated instrument data. Using aerial photos and Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) data, and information from the rover sensors, the COT inferred the geology of the landing site during the 18 sol mission, including lithologic diversity, stratigraphic relationships, environments of deposition, and weathering characteristics. Prominent lithologic units were interpreted to be dolomite-bearing rocks, kaolinite-bearing altered felsic volcanic materials, and basalt. The color panoramic camera revealed sedimentary layering and rock textures, and geologic relationships seen in rock exposures. The infrared point spectrometer permitted identification of prominent carbonate and kaolinite spectral features and permitted correlations to outcrops that could not be reached by the rover. The color microscopic imager revealed fine-scale rock textures, soil components, and results of coring experiments. Test results show that close-up interrogation of rocks is essential to investigations of geologic environments and that observations must include scales ranging from individual boulders and outcrops (microscopic, macroscopic) to orbital remote sensing, with sufficient intermediate steps (descent images) to connect in situ and remote observations.
NASA Astrophysics Data System (ADS)
Lussem, U.; Hollberg, J.; Menne, J.; Schellberg, J.; Bareth, G.
2017-08-01
Monitoring the spectral response of intensively managed grassland throughout the growing season allows fertilizer inputs to be optimized by monitoring plant growth. For example, site-specific fertilizer application as part of precision agriculture (PA) management requires information within a short time. However, this normally requires field-based measurements with hyper- or multispectral sensors, which may not be feasible in day-to-day farming practice. Exploiting the information in RGB images from consumer-grade cameras mounted on unmanned aerial vehicles (UAVs) can offer cost-efficient, near-real-time analysis of grasslands with high temporal and spatial resolution. The potential of RGB imagery-based vegetation indices (VIs) from consumer-grade cameras mounted on UAVs has been explored recently in several studies. However, for multitemporal analyses it is desirable to calibrate the digital numbers (DNs) of RGB images to physical units. In this study, we explored the comparability of the RGBVI from a consumer-grade camera mounted on a low-cost UAV to well-established vegetation indices from hyperspectral field measurements for applications in grassland. The study was conducted in 2014 on the Rengen Grassland Experiment (RGE) in Germany. Image DN values were calibrated into reflectance using the Empirical Line Method (Smith & Milton 1999). Depending on sampling date and VI, the correlation between the UAV-based RGBVI and VIs such as the NDVI yielded R² values ranging from no correlation up to 0.9. These results indicate that calibrated RGB-based VIs have the potential to support or substitute hyperspectral field measurements to facilitate management decisions on grasslands.
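A minimal sketch of the computation chain, assuming the RGBVI definition from the UAV grassland literature (Bendig et al. 2015) and placeholder gains/offsets standing in for an Empirical Line Method fit to reference panels:

```python
import numpy as np

def empirical_line(dn, gain, offset):
    """Per-band DN -> reflectance, as fitted to ground calibration targets."""
    return gain * dn + offset

def rgbvi(r, g, b):
    """RGBVI = (G^2 - R*B) / (G^2 + R*B), on calibrated reflectances."""
    return (g**2 - r * b) / (g**2 + r * b)

# Toy digital numbers for one pixel; gains/offsets are placeholder values.
dn_r, dn_g, dn_b = np.array([90.0]), np.array([120.0]), np.array([60.0])
r = empirical_line(dn_r, 0.0031, 0.02)
g = empirical_line(dn_g, 0.0029, 0.01)
b = empirical_line(dn_b, 0.0033, 0.03)
print(rgbvi(r, g, b))
```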
Phase and amplitude wave front sensing and reconstruction with a modified plenoptic camera
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Nelson, William; Davis, Christopher C.
2014-10-01
A plenoptic camera is a camera that can retrieve the direction and intensity distribution of light rays collected by the camera, enabling multiple reconstruction functions such as refocusing at a different depth and 3D microscopy. Its principle is to add a micro-lens array to a traditional high-resolution camera to form a semi-camera array that preserves redundant intensity distributions of the light field and facilitates back-tracing of rays through geometric knowledge of its optical components. Though designed to process incoherent images, the plenoptic camera shows high potential for coherent illumination cases, such as sensing both the amplitude and phase information of a distorted laser beam. Based on our earlier introduction of a prototype modified plenoptic camera, we have developed the complete algorithm to reconstruct the wavefront of the incident light field. In this paper the algorithm and experimental results are demonstrated, and an improved version of this modified plenoptic camera is discussed. As a result, our modified plenoptic camera can serve as an advanced wavefront sensor compared with traditional Shack-Hartmann sensors in handling complicated cases such as coherent illumination in strong turbulence, where interference and discontinuity of wavefronts are common. Especially for wave propagation through atmospheric turbulence, this camera should provide a much more precise description of the light field, which would guide adaptive optics systems in making intelligent analyses and corrections.
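The first step such a sensor shares with a Shack-Hartmann device, local wavefront slope from sub-image centroid shift, can be sketched as follows (toy numbers; full amplitude-plus-phase reconstruction, as discussed above, goes well beyond this):

```python
import numpy as np

# The centroid of each lenslet's sub-image shifts by f_lenslet * slope,
# so measured shifts give the local wavefront gradient dW/dx, dW/dy
# (dimensionless, m of wavefront per m of aperture).
f_lenslet = 2e-3                           # lenslet focal length [m]
pixel = 5e-6                               # pixel pitch [m]
ref = np.array([[16.0, 16.0]])             # reference centroids [pixels]
meas = np.array([[17.2, 15.4]])            # measured centroids [pixels]

slopes = (meas - ref) * pixel / f_lenslet  # per-lenslet wavefront gradients
print(slopes)                              # e.g. [[ 0.003 -0.0015]]
```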
Mechatronic design of a fully integrated camera for mini-invasive surgery.
Zazzarini, C C; Patete, P; Baroni, G; Cerveri, P
2013-06-01
This paper describes the design features of an innovative fully integrated camera candidate for mini-invasive abdominal surgery with single-port or transluminal access. The apparatus includes a CMOS imaging sensor, a light-emitting diode (LED)-based unit for scene illumination, a photodiode for luminance detection, an optical system designed according to the mechanical compensation paradigm, an actuation unit enabling autofocus and optical zoom, and control logic based on a microcontroller. The bulk of the apparatus is characterized by a tubular shape with a diameter of 10 mm and a length of 35 mm. The optical system, composed of four lens groups, of which two are mobile, has a total length of 13.46 mm and an effective focal length ranging from 1.61 to 4.44 mm with a zoom factor of 2.75x, with a corresponding angular field of view ranging from 16° to 40°. The mechatronics unit, devoted to moving the zoom and focus lens groups, is implemented with miniature piezoelectric motors. The control logic implements a closed-loop mechanism between the LEDs and the photodiode to attain automatic light control. Bottlenecks of the design and some potential issues of the realization are discussed. A potential clinical scenario is introduced.
Scalable software architecture for on-line multi-camera video processing
NASA Astrophysics Data System (ADS)
Camplani, Massimo; Salgado, Luis
2011-03-01
In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. In this paper, as a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions such as number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
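A toy sketch of the PU/Central Unit split (the concepts are the paper's; the code below is an assumed minimal realization, not the authors' implementation):

```python
import multiprocessing as mp

# Each Processing Unit (PU) handles acquisition + processing for one camera
# and reports results to the supervising Central Unit via a shared queue.
def processing_unit(cam_id, out_q):
    for frame in range(3):                       # stand-in acquisition loop
        detections = f"cam{cam_id}-frame{frame}-objects"
        out_q.put((cam_id, frame, detections))   # result to the Central Unit

if __name__ == "__main__":
    q = mp.Queue()
    pus = [mp.Process(target=processing_unit, args=(i, q)) for i in range(4)]
    for p in pus:
        p.start()
    for _ in range(12):                          # Central Unit: 4 PUs x 3 frames
        print(q.get())
    for p in pus:
        p.join()
```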
Chen, Brian R; Poon, Emily; Alam, Murad
2018-01-01
Lighting is an important component of consistent, high-quality dermatologic photography, and different types of lighting solutions are available. The objective was to evaluate currently available lighting equipment and methods suitable for procedural dermatology. Overhead lighting, built-in camera flashes, external flash units, studio strobes, and light-emitting diode (LED) light panels were evaluated with regard to their utility for dermatologic surgeons. A set of ideal lighting characteristics was used to examine the capabilities and limitations of each type of lighting solution. Recommendations regarding lighting solutions and optimal usage configurations were made in the context of the clinical environment and the purpose of the image. Overhead lighting may be a convenient option for general documentation. An on-camera lighting solution using a built-in camera flash or a camera-mounted external flash unit provides portability and consistent lighting with minimal training. An off-camera lighting solution with studio strobes, external flash units, or LED light panels provides versatility and even lighting with minimal shadows and glare. The selection of an optimal lighting solution is contingent on practical considerations and the purpose of the image.
Focusing and depth of field in photography: application in dermatology practice.
Taheri, Arash; Yentzer, Brad A; Feldman, Steven R
2013-11-01
Conventional photography obtains a sharp image of objects within a given 'depth of field'; objects not within the depth of field are out of focus. In recent years, digital photography revolutionized the way pictures are taken, edited, and stored. However, digital photography does not result in a deeper depth of field or better focusing. In this article, we briefly review the concept of depth of field and focus in photography as well as new technologies in this area. A deep depth of field is used to have more objects in focus; a shallow depth of field can emphasize a subject by blurring the foreground and background objects. The depth of field can be manipulated by adjusting the aperture size of the camera, with smaller apertures increasing the depth of field at the cost of lower levels of light capture. Light-field cameras are a new generation of digital cameras that offer several new features, including the ability to change the focus on any object in the image after taking the photograph. Understanding depth of field and camera technology helps dermatologists to capture their subjects in focus more efficiently. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
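For reference, the standard thin-lens approximation behind these trade-offs (general photographic optics, not specific to this article):

```latex
% For a subject at distance $u \gg f$ (and depth of field small relative
% to $u$), with f-number $N$, focal length $f$, and acceptable circle of
% confusion $c$, the total depth of field is approximately
\[
  \mathrm{DOF} \;\approx\; \frac{2\,u^{2} N c}{f^{2}},
\]
% which shows directly why stopping down (larger $N$) deepens focus, at
% the cost of light capture, and why longer focal lengths at the same
% framing yield a shallower depth of field.
```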
Development of a slicer integral field unit for the existing optical spectrograph FOCAS: progress
NASA Astrophysics Data System (ADS)
Ozaki, Shinobu; Tanaka, Yoko; Hattori, Takashi; Mitsui, Kenji; Fukushima, Mitsuhiro; Okada, Norio; Obuchi, Yoshiyuki; Tsuzuki, Toshihiro; Miyazaki, Satoshi; Yamashita, Takuya
2014-07-01
We are developing an integral field unit (IFU) with an image slicer for the existing optical spectrograph, the Faint Object Camera And Spectrograph (FOCAS), on the Subaru Telescope. The slice width is 0.43 arcsec, the slice number is 23, and the field of view is 13.5 x 9.89 arcsec². A sky spectrum separated by about 5.7 arcmin from the object field can be obtained simultaneously, which allows precise background subtraction. Slice mirrors, pupil mirrors and slit mirrors are all glass, and their mirror surfaces are fabricated by polishing. Our IFU is about 200 mm x 300 mm x 80 mm in size and 1 kg in weight. It is installed into the mask storage in FOCAS along with one or two mask plates, and inserted into the optical path using the existing mask exchange mechanism. This concept allows flexible operation, such as Targets of Opportunity observations. The high reflectivity of multilayer dielectric coatings offers high throughput (>80%) of the IFU. In this paper, we report the final optical layout, its performance, and the results of prototyping work.
Minamisawa, T; Hirokaga, K
1995-11-01
The open-field activity of first-generation (F1) hybrid male C57BL/6 x C3H mice irradiated with gamma rays on day 14 of gestation was studied at the following ages: 6-7 months (young), 12-13 months (adult) and 19-20 months (old). Doses were 0.5 Gy or 1.0 Gy. Open-field activity was recorded with a camera. The camera output signal was recorded every second through an A/D converter to a personal computer. The field was divided into 25 units, each 8 cm square. All recordings were continuous for 60 min. The walking speed of the 1.0-Gy group recorded at 19-20 months was higher than that of the comparably aged control group. The time that the irradiated group, recorded at 19-20 months, spent in the corner fields was high in comparison with the control group of the same age. Conversely, the time spent by the irradiated group in the middle fields when recorded at 19-20 months was shorter than in the comparably aged control group. No effect of radiation was shown for any of the behaviors observed and recorded at 6-7 and 12-13 months. The results demonstrate that such exposure to gamma rays on day 14 of gestation results in behavioral changes which occur at 19-20 months but not at 6-7 or 12-13 months.
NIRCam: Development and Testing of the JWST Near-Infrared Camera
NASA Technical Reports Server (NTRS)
Greene, Thomas; Beichman, Charles; Gully-Santiago, Michael; Jaffe, Daniel; Kelly, Douglas; Krist, John; Rieke, Marcia; Smith, Eric H.
2011-01-01
The Near Infrared Camera (NIRCam) is one of the four science instruments of the James Webb Space Telescope (JWST). Its high-sensitivity, high-spatial-resolution images over the 0.6-5 micron wavelength region will be essential for making significant findings in many science areas as well as for aligning the JWST primary mirror segments and telescope. The NIRCam engineering test unit was recently assembled and has undergone successful cryogenic testing. The NIRCam collimator and camera optics and their mountings are also progressing, with a brass-board system demonstrating relatively low wavefront error across a wide field of view. The flight model's long-wavelength Si grisms have been fabricated, and its coronagraph masks are now being made. Both the short-wavelength (0.6-2.3 microns) and long-wavelength (2.4-5.0 microns) flight detectors show good performance and are undergoing final assembly and testing. The flight model subsystems should all be completed later this year through early 2011, and NIRCam will be cryogenically tested in the first half of 2011 before delivery to the JWST Integrated Science Instrument Module (ISIM).
Mitsubishi thermal imager using the 512 x 512 PtSi focal plane arrays
NASA Astrophysics Data System (ADS)
Fujino, Shotaro; Miyoshi, Tetsuo; Yokoh, Masataka; Kitahara, Teruyoshi
1990-01-01
The MITSUBISHI THERMAL IMAGER model IR-5120A is a high-resolution, high-sensitivity infrared television imaging system. It was exhibited at SPIE's 1988 Technical Symposium on Optics, Electro-Optics, and Sensors, held in April 1988 in Orlando, and attracted the interest of many attendees for its high performance. The detector is a Platinum Silicide Charge Sweep Device (CSD) array containing more than 260,000 individual pixels, manufactured by Mitsubishi Electric Co. The IR-5120A consists of a Camera Head, containing the CSD, a Stirling-cycle cooler and support electronics, and a Camera Control Unit containing the pixel fixed-pattern-noise corrector, video controller, cooler driver and support power supplies. The Stirling-cycle cooler built into the Camera Head keeps the CSD at a temperature of approximately 80 K and features light weight, a long life of more than 2000 hours and low acoustical noise. This paper describes an improved Thermal Imager, with lighter weight, more compact size and higher performance, and discusses its design philosophy, characteristics and field imagery.
Jacob, Julie; Paques, Michel; Krivosic, Valérie; Dupas, Bénédicte; Erginay, Ali; Tadayoni, Ramin; Gaudric, Alain
2017-01-01
To analyze cone mosaic metrics on adaptive optics (AO) images as a function of retinal eccentricity in two different age groups using a commercial flood-illumination AO device. Fifty-three eyes of 28 healthy subjects divided into two age groups were imaged using an AO flood-illumination camera (rtx1; Imagine Eyes, Orsay, France). A 16° × 4° field was obtained horizontally. Cone-packing metrics were determined in five neighboring 50 µm × 50 µm regions. Both retinal (cones/mm² and µm) and visual (cones/degree² and arcmin) units were computed. Results for cone mosaic metrics at 2°, 2.5°, 3°, 4°, and 5° eccentricity were compatible with previous AO scanning laser ophthalmoscopy and histology data. No significant difference was observed between the two age groups. The rtx1 camera enabled reproducible measurements of cone-packing metrics across the extrafoveal retina. These findings may contribute to the development of normative data and act as a reference for future research. [Ophthalmic Surg Lasers Imaging Retina. 2017;48:45-50.]
2009-05-08
CAPE CANAVERAL, Fla. – On Launch Pad 39A at NASA's Kennedy Space Center in Florida, space shuttle Atlantis' payload bay is filled with hardware for the STS-125 mission to service NASA's Hubble Space Telescope. At the bottom is the Flight Support System with the Soft Capture mechanism. At center is the Orbital Replacement Unit Carrier with the Cosmic Origins Spectrograph, or COS, and an IMAX 3D camera. At top is the Super Lightweight Interchangeable Carrier with the Wide Field Camera 3. Atlantis' crew will service NASA's Hubble Space Telescope for the fifth and final time. The flight will include five spacewalks during which astronauts will refurbish and upgrade the telescope with state-of-the-art science instruments. As a result, Hubble's capabilities will be expanded and its operational lifespan extended through at least 2014. Photo credit: NASA/Kim Shiflett
Multi-camera digital image correlation method with distributed fields of view
NASA Astrophysics Data System (ADS)
Malowany, Krzysztof; Malesa, Marcin; Kowaluk, Tomasz; Kujawinska, Malgorzata
2017-11-01
A multi-camera digital image correlation (DIC) method and system for measurements of large engineering objects with distributed, non-overlapping areas of interest are described. The data obtained with the individual 3D DIC systems are stitched by an algorithm which utilizes the positions of fiducial markers determined simultaneously by the Stereo-DIC units and a laser tracker. The proposed calibration method enables reliable determination of the transformations between the local (3D DIC) and global coordinate systems. The applicability of the method was proven during in-situ measurements of a hall made of arch-shaped (18 m span) self-supporting metal plates. The proposed method is highly recommended for 3D measurements of the shape and displacements of large and complex engineering objects observed from multiple directions, and it provides data of suitable accuracy for further advanced structural integrity analysis of such objects.
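The stitching step reduces to estimating, for each Stereo-DIC unit, the rigid transformation that maps its local frame onto the laser-tracker (global) frame from the shared fiducial markers. A minimal sketch of that least-squares fit, using the Kabsch/SVD method and assuming at least three non-collinear markers (an illustrative reconstruction, not the authors' exact implementation):

```python
import numpy as np

def fit_rigid_transform(local_pts, global_pts):
    """Least-squares rigid transform (R, t) with global ~ R @ local + t.

    local_pts, global_pts: (N, 3) coordinates of the same fiducial
    markers in a Stereo-DIC unit's frame and the laser-tracker frame.
    """
    cl, cg = local_pts.mean(axis=0), global_pts.mean(axis=0)
    H = (local_pts - cl).T @ (global_pts - cg)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cg - R @ cl
    return R, t

# A unit's full point cloud is then mapped into the global frame with:
# cloud_global = (R @ cloud_local.T).T + t
```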
The Texas Thermal Interface: A real-time computer interface for an Inframetrics infrared camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Storek, D.J.; Gentle, K.W.
1996-03-01
The Texas Thermal Interface (TTI) offers an advantageous alternative to the conventional video path for computer analysis of infrared images from Inframetrics cameras. The TTI provides real-time computer data acquisition of 48 consecutive fields (version described here) with 8-bit pixels. The alternative requires time-consuming individual frame grabs from video tape with frequent loss of resolution in the D/A/D conversion. Within seconds after the event, the TTI temperature files may be viewed and processed to infer heat fluxes or other quantities as needed. The system cost is far less than that of commercial units which offer less capability. The system was developed for and is being used to measure heat fluxes to the plasma-facing components in a tokamak.
Video systems for real-time oil-spill detection
NASA Technical Reports Server (NTRS)
Millard, J. P.; Arvesen, J. C.; Lewis, P. L.; Woolever, G. F.
1973-01-01
Three airborne television systems are being developed to evaluate techniques for oil-spill surveillance. These include a conventional TV camera, two cameras operating in a subtractive mode, and a field-sequential camera. False-color enhancement and wavelength and polarization filtering are also employed. The first of a series of flight tests indicates that an appropriately filtered conventional TV camera is a relatively inexpensive method of improving contrast between oil and water. False-color enhancement improves the contrast, but the problem caused by sun glint now limits the application to overcast days. Future effort will be aimed toward a one-camera system. Solving the sun-glint problem and developing the field-sequential camera into an operable system offers potential for color 'flagging' oil on water.
Lensless imaging for wide field of view
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Yagi, Yasushi
2015-02-01
It is desirable to engineer a small camera with a wide field of view (FOV) because of current developments in the field of wearable cameras and computing products, such as action cameras and Google Glass. However, typical approaches for achieving a wide FOV, such as attaching a fisheye lens or convex mirrors, involve a trade-off between optics size and FOV. We propose camera optics that achieve a wide FOV while remaining small and lightweight. The proposed optics are a completely lensless, catoptric design containing four mirrors: two for wide viewing and two for focusing the image on the camera sensor. Because only mirrors are used, the optics are simple, easily miniaturized, and not susceptible to chromatic aberration. We have implemented prototype optics of our lensless concept, attached them to commercial charge-coupled device/complementary metal oxide semiconductor cameras, and conducted experiments to evaluate the feasibility of the proposed optics.
Volumetric particle image velocimetry with a single plenoptic camera
NASA Astrophysics Data System (ADS)
Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.
2015-11-01
A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high-resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots, and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single-camera plenoptic PIV to produce a 3D/3C vector field, where it was found that displacements could be measured to approximately 0.2 voxel accuracy in the lateral directions and 1 voxel in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera with a 289 × 193 microlens array, and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low-Reynolds-number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single-camera plenoptic PIV is shown to be a viable 3D/3C velocimetry technique.
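For orientation, the core of MART is a multiplicative per-ray update that rescales the voxels each sensor ray touches by the ratio of measured to projected intensity. A toy sketch of that update with a dense weighting matrix (real plenoptic implementations use sparse, camera-specific weights; names and defaults here are assumptions):

```python
import numpy as np

def mart(W, I, n_iter=5, mu=1.0):
    """Toy MART reconstruction.

    W : (n_rays, n_voxels) array; W[i, j] is the weight of voxel j in ray i.
    I : (n_rays,) recorded pixel intensities.
    Returns the reconstructed voxel intensity field E.
    """
    E = np.ones(W.shape[1])              # positive uniform initial guess
    for _ in range(n_iter):
        for i, intensity in enumerate(I):
            proj = W[i] @ E              # current projection along ray i
            if proj <= 0.0:
                continue
            # voxels on the ray are scaled by (measured / projected)
            E *= (intensity / proj) ** (mu * W[i])
    return E
```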
High Speed Digital Camera Technology Review
NASA Technical Reports Server (NTRS)
Clements, Sandra D.
2009-01-01
A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riot, V J; Olivier, S; Bauman, B
2012-05-24
The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. The optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.
Systems and methods for maintaining multiple objects within a camera field-of-view
Gans, Nicholas R.; Dixon, Warren
2016-03-15
In one embodiment, a system and method for maintaining objects within a camera field of view include identifying constraints to be enforced, each constraint relating to an attribute of the viewed objects, identifying a priority rank for the constraints such that more important constraints have a higher priority than less important constraints, and determining the set of solutions that satisfy the constraints relative to the order of their priority rank such that solutions that satisfy lower-ranking constraints are only considered viable if they also satisfy any higher-ranking constraints, each solution providing an indication as to how to control the camera to maintain the objects within the camera field of view.
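One plausible reading of this priority scheme is a lexicographic filter in which each constraint may only prune candidates from the set that already satisfies all higher-ranked constraints. The sketch below illustrates that idea; it is an interpretation, not the patent's exact procedure, and the skip-when-infeasible behaviour is an added assumption:

```python
from typing import Callable, Sequence

def filter_by_priority(solutions: Sequence, constraints: Sequence[Callable]) -> list:
    """Prune candidate camera-control solutions by priority-ranked constraints.

    `constraints` is ordered from highest to lowest priority; each is a
    predicate on a candidate solution. A lower-ranked constraint is applied
    only to candidates surviving all higher-ranked ones, and is skipped if
    enforcing it would leave no viable solution.
    """
    viable = list(solutions)
    for constraint in constraints:
        narrowed = [s for s in viable if constraint(s)]
        if narrowed:
            viable = narrowed
    return viable

# Example: prefer keeping all objects in view over keeping them centered.
# viable = filter_by_priority(candidate_motions,
#                             [all_objects_in_fov, objects_near_center])
```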
Feasibility and accuracy assessment of light field (plenoptic) PIV flow-measurement technique
NASA Astrophysics Data System (ADS)
Shekhar, Chandra; Ogawa, Syo; Kawaguchi, Tatsuya
A light field camera can enable measurement of all three velocity components of a flow field inside a three-dimensional volume when implemented in a PIV measurement. Because only one camera is used, the measurement procedure is greatly simplified, and measurement of flows with limited visual access becomes possible. Owing to these advantages, light field cameras and their use in PIV measurements are being actively studied. The overall procedure for obtaining an instantaneous flow field consists of imaging a seeded flow at two closely separated time instants, reconstructing the two volumetric particle distributions using algorithms such as MART, and then obtaining the flow velocity through cross-correlation. In this study, we examined the effects of various light field camera configuration parameters on the in-plane and depth resolutions, obtained near-optimal parameters for a given case, and then used them to simulate a PIV measurement scenario in order to assess the reconstruction accuracy.
Embedded ubiquitous services on hospital information systems.
Kuroda, Tomohiro; Sasaki, Hiroshi; Suenaga, Takatoshi; Masuda, Yasushi; Yasumuro, Yoshihiro; Hori, Kenta; Ohboshi, Naoki; Takemura, Tadamasa; Chihara, Kunihiro; Yoshihara, Hiroyuki
2012-11-01
Hospital Information Systems (HIS) have turned a hospital into a gigantic computer with huge computational power, huge storage and a wired/wireless local area network. On the other hand, a modern medical device, such as an echograph, is a computer system with several functional units connected by an internal network named a bus. Therefore, we can embed such a medical device into the HIS by simply replacing the bus with the local area network. This paper describes the design and development of two embedded systems: a ubiquitous echograph system and a networked digital camera. Evaluations of the developed systems clearly show that the proposed approach, embedding existing clinical systems into the HIS, drastically changes productivity in the clinical field. Once a clinical system becomes a pluggable unit of the gigantic computer system, the HIS, combinations of multiple embedded systems with application software designed with deep consideration of clinical processes may lead to the emergence of disruptive innovation in the clinical field.
VizieR Online Data Catalog: UWISH2 extended H2 emission line sources (Froebrich+, 2015)
NASA Astrophysics Data System (ADS)
Froebrich, D.; Makin, S. V.; Davis, C. J.; Gledhill, T. M.; Kim, Y.; Koo, B.-C.; Rowles, J.; Eisloffel, J.; Nicholas, J.; Lee, J. J.; Williamson, J.; Buckner, A. S. M.
2016-07-01
All data were acquired using the Wide Field Camera (WFCAM) on the United Kingdom Infrared Telescope (UKIRT), Mauna Kea, Hawaii. WFCAM houses four Rockwell Hawaii-II (HgCdTe 2048x2048-pixel) arrays spaced at 94 per cent of the detector width in the focal plane. The pixel scale measures 0.4 arcsec, although microstepping is used to generate reduced mosaics with a 0.2 arcsec pixel scale and thereby fully sample the expected seeing. (3 data files).
Observations of Infantry Courses: Implications for Land Warrior (LW) Training.
2000-01-01
PAQ-13 TWS and M68 CCO are not as prevalent in the force, as they are being fielded currently. Some units have prototype bore sight devices. The video...advanced infantry marksmanship (AIM) training using the AN/PAQ-4C aiming light and the AN/PVS-7B/D or AN/PVS-14 night vision goggles (NVGs), but...the AN/PAQ-4C infrared aiming light (IAL), the AN/PEQ-2A target pointer illuminator/aiming light (TPIAL), the video camera (now called the
An integrated port camera and display system for laparoscopy.
Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E
2010-05-01
In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.
The USNO-UKIRT K-band Hemisphere Survey
NASA Astrophysics Data System (ADS)
Dahm, Scott; Bruursema, Justice; Munn, Jeffrey A.; Vrba, Fred J.; Dorland, Bryan; Dye, Simon; Kerr, Tom; Varricatt, Watson; Irwin, Mike; Lawrence, Andy; McLaren, Robert; Hodapp, Klaus; Hasinger, Guenther
2018-01-01
We present initial results from the United States Naval Observatory (USNO) and UKIRT K-band Hemisphere Survey (U2HS), currently underway using the Wide Field Camera (WFCAM) installed on UKIRT on Maunakea. U2HS is a collaborative effort undertaken by USNO, the Institute for Astronomy, University of Hawaii, the Cambridge Astronomy Survey Unit (CASU) and the Wide Field Astronomy Unit (WFAU) in Edinburgh. The principal objective of the U2HS is to provide continuous northern hemisphere K-band coverage over a declination range of δ = 0° to +60° by combining over 12,700 deg2 of new imaging with the existing UKIRT Infrared Deep Sky Survey (UKIDSS) Large Area Survey (LAS), Galactic Plane Survey (GPS) and Galactic Cluster Survey (GCS). U2HS will achieve a 5-σ point source sensitivity of K~18.4 mag (Vega), over three magnitudes deeper than the Two Micron All Sky Survey (2MASS). In this contribution we discuss survey design, execution, data acquisition and processing, photometric calibration and quality control. The data obtained by the U2HS will be made publicly available through the WFCAM Science Archive (WSA) maintained by the WFAU.
Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Gul, M. Shahzeb Khan; Gunturk, Bahadir K.
2018-05-01
Light field imaging extends the traditional photography by capturing both spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capture light field. A major drawback of MLA based light field cameras is low spatial resolution, which is due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning based light field enhancement approach. Both spatial and angular resolution of captured light field is enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
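As a rough illustration of the spatial-enhancement stage, the toy PyTorch network below bicubically upsamples one sub-aperture view and learns a residual correction, in the spirit of single-image super-resolution CNNs. The layer counts and channel sizes are assumptions; the paper's actual spatial and angular networks differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LFSpatialSR(nn.Module):
    """Toy residual CNN for spatially super-resolving one sub-aperture view."""

    def __init__(self, channels: int = 64, depth: int = 4):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.body = nn.Sequential(*(
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
            for _ in range(depth)))
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, x: torch.Tensor, scale: int = 2) -> torch.Tensor:
        # upsample first, then add the learned high-frequency residual
        x = F.interpolate(x, scale_factor=scale, mode="bicubic",
                          align_corners=False)
        return x + self.tail(self.body(F.relu(self.head(x))))

# view = torch.rand(1, 1, 64, 64)   # one low-resolution sub-aperture image
# sr = LFSpatialSR()(view)          # -> shape (1, 1, 128, 128)
```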
Wide Field and Planetary Camera for Space Telescope
NASA Technical Reports Server (NTRS)
Lockhart, R. F.
1982-01-01
The Space Telescope's Wide Field and Planetary Camera instrument, presently under construction, will be used to map the observable universe and to study the outer planets. It will be able to see 1000 times farther than any previously employed instrument. The Wide Field system will be located in a radial bay, receiving its signals via a pick-off mirror centered on the optical axis of the telescope assembly. The external thermal radiator employed by the instrument for cooling will be part of the exterior surface of the Space Telescope. In addition to having a larger (1200-12,000 Å) wavelength range than any of the other Space Telescope instruments, its data rate, at 1 Mb/sec, exceeds that of the other instruments. Attention is given to the operating modes and projected performance levels of the Wide Field Camera and Planetary Camera.
Small format digital photogrammetry for applications in the earth sciences
NASA Astrophysics Data System (ADS)
Rieke-Zapp, Dirk
2010-05-01
Photogrammetry is often considered one of the most precise and versatile surveying techniques. The same camera and analysis software can be used for measurements from sub-millimetre to kilometre scale. Such a measurement device is well suited for application by earth scientists working in the field, where a small toolset and a straightforward setup best fit the needs of the operator. While a digital camera is typically already part of the field equipment of an earth scientist, the main focus of the field work is often not surveying; a lack of photogrammetric training at the same time requires an easy-to-learn, straightforward surveying technique. A photogrammetric method was therefore developed, aimed primarily at earth scientists, for taking accurate measurements in the field while minimizing the extra bulk and weight of the required equipment. The work included several challenges. A) Definition of an upright coordinate system without heavy and bulky tools like a total station or GNSS sensor. B) Optimization of image acquisition and the geometric stability of the image block. C) Identification of a small camera suitable for precise measurements in the field. D) Optimization of the workflow from image acquisition to preparation of images for stereo measurements. E) Introduction of students and non-photogrammetrists to the workflow. Wooden spheres were used as target points in the field; they were more rugged than the ping-pong balls used in a previous setup and available in different sizes. Distances between three spheres were introduced as scale information in a photogrammetric adjustment. The distances were measured with a laser distance meter accurate to 1 mm (1 sigma). The vertical angle between the spheres was measured with the same laser distance meter; the precision of this measurement was 0.3° (1 sigma), which is sufficient, i.e. better than inclination measurements with a geological compass. The upright coordinate system is important for measuring the dip angle of geologic features in outcrop. The planimetric coordinate system would be arbitrary, but may easily be oriented to compass north by introducing a compass direction measurement. The wooden spheres and a Leica Disto D3 laser distance meter added less than 0.150 kg to the field equipment, considering that a suitable digital camera was already part of it. Identification of a small digital camera suitable for precise measurements was a major part of this work. A group of cameras was calibrated several times, over different periods of time, on a testfield. Further evaluation involved an accuracy assessment in the field, comparing distances between signalized points calculated from a photogrammetric setup with coordinates derived from a total station survey. The smallest camera in the test required calibration on the job, as its interior orientation changed significantly between testfield calibration and use in the field. We attribute this to the fact that the lens was retracted when the camera was switched off. Fairly stable camera geometry in a compact-size camera with a lens retracting system was accomplished for the Sigma DP1 and DP2 cameras. While the pixel count of these cameras was less than that of the Ricoh, the pixel pitch of the Sigma cameras was much larger. Hence, the same mechanical movement would have less per-pixel effect for the Sigma cameras than for the Ricoh camera.
A large pixel pitch may therefore compensate for some camera instability, explaining why cameras with larger sensors and larger pixel pitch typically yield better accuracy in object space. Both Sigma cameras weigh approximately 0.250 kg and may even be suitable for use with ultralight aerial vehicles (UAVs), which have payload restrictions of 0.200 to 0.300 kg. A set of other available cameras was also tested on a calibration field and on location, showing once again that it is difficult to infer geometric stability from camera specifications. Image acquisition with geometrically stable cameras was fairly straightforward, covering the area of interest with stereo pairs for analysis. We limited our tests to setups with three to five images to minimize the amount of post-processing. The laser dot of the laser distance meter was not visible to the naked eye at distances beyond 5-7 m, which also limited the maximum stereo area that can be covered with this technique. Extrapolating the setup to fairly large areas showed no significant decrease in the accuracy accomplished in object space. Working with a Sigma SD14 SLR camera on a 6 x 18 x 20 m³ volume, the maximum length measurement error ranged between 20 and 30 mm, depending on image setup and analysis. For smaller outcrops, even the compact cameras yielded maximum length measurement errors in the mm range, which was considered sufficient for measurements in the earth sciences. In many cases the resolution per pixel, rather than accuracy, was the limiting factor of image analysis. A field manual was developed to guide novice users and students in this technique. The technique does not sacrifice precision for ease of use; successful users of the presented method therefore easily grow into more advanced photogrammetric methods for high-precision applications. Originally, camera calibration was not part of the methodology for novice operators. The recent introduction of Camera Calibrator, a low-cost, well-automated camera calibration software package, allowed beginners to calibrate their cameras within a couple of minutes. The complete set of calibration parameters can be applied in ERDAS LPS software, easing the workflow. Image orientation was performed in LPS 9.2 software, which was also used for further image analysis.
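As an illustration of how the three sphere targets can define the upright coordinate system, the hypothetical helper below converts the laser-measured inter-sphere distances and vertical angles into gravity-aligned local control coordinates (a geometric sketch under idealized measurements, not the published adjustment):

```python
import numpy as np

def sphere_control_points(d_ab, d_ac, d_bc, alpha_ab, alpha_ac):
    """Local upright coordinates of target spheres A, B, C.

    d_*     : measured distances between sphere centres (metres)
    alpha_* : vertical angles from A to B and from A to C (radians)
    A is placed at the origin, z is vertical, and B fixes the x direction.
    """
    A = np.zeros(3)
    B = np.array([d_ab * np.cos(alpha_ab), 0.0, d_ab * np.sin(alpha_ab)])
    z_c = d_ac * np.sin(alpha_ac)
    r_a = d_ac * np.cos(alpha_ac)                # horizontal range A -> C
    r_b = np.sqrt(d_bc**2 - (z_c - B[2])**2)     # horizontal range B -> C
    x_c = (r_a**2 - r_b**2 + B[0]**2) / (2.0 * B[0])
    y_c = np.sqrt(max(r_a**2 - x_c**2, 0.0))     # choose the +y solution
    return A, B, np.array([x_c, y_c, z_c])
```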
Characterization of Vegetation using the UC Davis Remote Sensing Testbed
NASA Astrophysics Data System (ADS)
Falk, M.; Hart, Q. J.; Bowen, K. S.; Ustin, S. L.
2006-12-01
Remote sensing provides information about the dynamics of the terrestrial biosphere with continuous spatial and temporal coverage on many different scales. We present the design and construction of a suite of instrument modules and network infrastructure with size, weight and power constraints suitable for small-scale vehicles, anticipating vigorous growth in unmanned aerial vehicles (UAVs) and other mobile platforms. Our approach provides rapid deployment and low-cost acquisition of aerial imagery for applications requiring high spatial resolution and frequent revisits. The testbed supports a wide range of applications, encourages remote sensing solutions in new disciplines and demonstrates the complete range of engineering knowledge required for the successful deployment of remote sensing instruments. The initial testbed is deployed on a Sig Kadet Senior remote-controlled plane. It includes an onboard computer with wireless radio, GPS, an inertial measurement unit, a 3-axis electronic compass and digital cameras. The onboard camera is either an RGB digital camera or a modified digital camera with red and NIR channels. The cameras were calibrated using selective light sources, an integrating sphere and a spectrometer, allowing the computation of vegetation indices such as the NDVI. Field tests to date have investigated technical challenges in wireless communication bandwidth limits, automated image geolocation, and user interfaces, as well as imaging applications such as environmental landscape mapping focusing on Sudden Oak Death and invasive species detection, studies on the impact of bird colonies on tree canopies, and precision agriculture.
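With the red and NIR channels co-registered and radiometrically calibrated, a vegetation index such as the NDVI reduces to a per-pixel normalized difference. A minimal sketch:

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - red) / (NIR + red), computed where the sum is positive.

    red, nir: co-registered reflectance images from the modified camera's
    red and near-infrared channels (calibrated values, not raw counts).
    """
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    total = nir + red
    out = np.zeros_like(total)
    np.divide(nir - red, total, out=out, where=total > 0)
    return out   # values in [-1, 1]; dense green vegetation tends toward +1
```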
Application of infrared camera to bituminous concrete pavements: measuring vehicle
NASA Astrophysics Data System (ADS)
Janků, Michal; Stryk, Josef
2017-09-01
Infrared thermography (IR) has been used for decades in certain fields; however, the technological level of measuring devices has not been sufficient for some applications. In recent years, good-quality thermal cameras with high resolution and very high thermal sensitivity have appeared on the market. This development in measuring technology has opened infrared thermography to new fields and a larger number of users. This article describes research in progress at the Transport Research Centre focused on the use of infrared thermography for diagnostics of bituminous road pavements. A measuring vehicle equipped with a thermal camera, a digital camera and a GPS sensor was designed for the diagnostics of pavements. New, highly sensitive thermal cameras make it possible to measure very small temperature differences from a moving vehicle. This study shows the potential of high-speed inspection using IR thermography without lane closures.
Adaptive DOF for plenoptic cameras
NASA Astrophysics Data System (ADS)
Oberdörster, Alexander; Lensch, Hendrik P. A.
2013-03-01
Plenoptic cameras promise to provide arbitrary re-focusing through a scene after capture. In practice, however, the refocusing range is limited by the depth of field (DOF) of the plenoptic camera. For the focused plenoptic camera, this range is given by the range of object distances for which the microimages are in focus. We propose a technique for recording light fields with an adaptive depth of focus: between multiple exposures (or multiple recordings of the light field) the distance between the microlens array (MLA) and the image sensor is adjusted. The depth and quality of focus are chosen by changing the number of exposures and the spacing of the MLA movements. In contrast to traditional cameras, extending the DOF does not necessarily lead to an all-in-focus image; instead, the refocus range is extended. There is full creative control over the focus depth: images with shallow or selective focus can be generated.
Unstructured Facility Navigation by Applying the NIST 4D/RCS Architecture
2006-07-01
control, and the planner); wireless data and emergency stop radios; GPS receiver; inertial navigation unit; dual stereo cameras; infrared sensors...current Actuators Wheel motors, camera controls Scale & filter signals status commands commands commands GPS Antenna Dual stereo cameras...used in the sensory processing module include the two pairs of stereo color cameras, the physical bumper and infrared bumper sensors, the motor
High-Resolution Large Field-of-View FUV Compact Camera
NASA Technical Reports Server (NTRS)
Spann, James F.
2006-01-01
The need for a high resolution camera with a large field of view and capable of imaging dim emissions in the far-ultraviolet is driven by the widely varying intensities of FUV emissions and the spatial/temporal scales of phenomena of interest in the Earth's ionosphere. In this paper, the concept of a camera is presented that is designed to achieve these goals in a lightweight package with sufficient visible light rejection to be useful for dayside and nightside emissions. The camera employs the concept of self-filtering to achieve good spectral resolution tuned to specific wavelengths. The large field of view is sufficient to image the Earth's disk at geosynchronous altitudes with a spatial resolution of >20 km. The optics and filters are emphasized.
Imaging experiment: The Viking Lander
Mutch, T.A.; Binder, A.B.; Huck, F.O.; Levinthal, E.C.; Morris, E.C.; Sagan, C.; Young, A.T.
1972-01-01
The Viking Lander Imaging System will consist of two identical facsimile cameras. Each camera has a high-resolution mode with an instantaneous field of view of 0.04°, and survey and color modes with instantaneous fields of view of 0.12°. Cameras are positioned one meter apart to provide stereoscopic coverage of the near field. The Imaging Experiment will provide important information about the morphology, composition, and origin of the Martian surface and atmospheric features. In addition, lander pictures will provide supporting information for other experiments in biology, organic chemistry, meteorology, and physical properties.
High-spatial-resolution K-band Imaging of Select K2 Campaign Fields
NASA Astrophysics Data System (ADS)
Colón, Knicole D.; Howell, Steve B.; Ciardi, David R.; Barclay, Thomas
2017-12-01
NASA's K2 mission began observing fields along the ecliptic plane in 2014. Each observing campaign lasts approximately 80 days, during which high-precision optical photometry of select astrophysical targets is collected by the Kepler spacecraft. Due to the 4 arcsec pixel scale of the Kepler photometer, significant blending between the observed targets can occur (especially in dense fields close to the Galactic plane). We undertook a program to use the Wide Field Camera (WFCAM) on the 3.8 m United Kingdom InfraRed Telescope (UKIRT) to collect high-spatial-resolution near-infrared images of targets in select K2 campaign fields, which we report here. These 0.4 arcsec resolution K-band images offer the opportunity to perform a variety of science, including vetting exoplanet candidates by identifying nearby stars blended with the target star and estimating the size, color, and type of galaxies observed by K2.
A method for the real-time construction of a full parallax light field
NASA Astrophysics Data System (ADS)
Tanaka, Kenji; Aoki, Soko
2006-02-01
We designed and implemented a light field acquisition and reproduction system for dynamic objects called LiveDimension, which serves as a 3D live video system for multiple viewers. The acquisition unit consists of circularly arranged NTSC cameras surrounding an object. The display consists of circularly arranged projectors and a rotating screen. The projectors are constantly projecting images captured by the corresponding cameras onto the screen. The screen rotates around an in-plane vertical axis at a sufficient speed so that it faces each of the projectors in sequence. Since the Lambertian surfaces of the screens are covered by light-collimating plastic films with vertical louver patterns that are used for the selection of appropriate light rays, viewers can only observe images from a projector located in the same direction as the viewer. Thus, the dynamic view of an object is dependent on the viewer's head position. We evaluated the system by projecting both objects and human figures and confirmed that the entire system can reproduce light fields with a horizontal parallax to display video sequences of 430x770 pixels at a frame rate of 45 fps. Applications of this system include product design reviews, sales promotion, art exhibits, fashion shows, and sports training with form checking.
Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.
Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H
2015-02-01
Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeon's point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.
Zhu, Banghe; Rasmussen, John C.; Litorja, Maritoni
2017-01-01
To date, no emerging preclinical or clinical near-infrared fluorescence (NIRF) imaging devices for non-invasive and/or surgical guidance have their performances validated on working standards with SI units of radiance that enable comparison or quantitative quality assurance. In this work, we developed and deployed a methodology to calibrate a stable, solid phantom for emission radiance with units of mW·sr⁻¹·cm⁻² for use in characterizing the measurement sensitivity of ICCD and IsCMOS detection, signal-to-noise ratio, and contrast. In addition, at calibrated radiances, we assess the transverse and lateral resolution of the ICCD and IsCMOS camera systems. The methodology allowed determination of the superior SNR of the ICCD over the IsCMOS camera system and the superior resolution of the IsCMOS over the ICCD camera system. Contrast depended upon the camera settings (binning and integration time) and the gain of the intensifier. Finally, because the architectures of CMOS and CCD camera systems result in vastly different performance, we comment on the utility of these systems for small animal imaging as well as clinical applications for non-invasive and surgical guidance. PMID:26552078
The integrated design and archive of space-borne signal processing and compression coding
NASA Astrophysics Data System (ADS)
He, Qiang-min; Su, Hao-hang; Wu, Wen-bo
2017-10-01
With users' increasing demand for the extraction of remote sensing image information, there is an urgent need to significantly enhance the whole system's imaging quality and capability through an integrated design that achieves a compact structure, light weight and higher attitude maneuverability. At present, the remote sensing camera's video signal processing unit and its image compression and coding unit are distributed across different devices, and the volume, weight and power consumption of these two units are relatively large, which cannot meet the requirements of a highly mobile remote sensing camera. Based on the technical requirements of such a camera, this paper designs a space-borne integrated signal processing and compression circuit drawing on several technologies: high-speed, high-density mixed analog-digital PCB design, embedded DSP technology, and image compression based on special-purpose chips. This circuit lays a solid foundation for further research on highly mobile remote sensing cameras.
Accuracy evaluation of optical distortion calibration by digital image correlation
NASA Astrophysics Data System (ADS)
Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan
2017-11-01
Due to its convenience of operation, camera calibration based on a plane template is widely used in image measurement, computer vision and other fields. Selecting a suitable distortion model, however, remains an open problem, so there is a pressing need for experimental evaluation of the accuracy of camera distortion calibrations. This paper presents an experimental method for evaluating camera distortion calibration accuracy that is easy to implement, has high precision, and is suitable for a variety of commonly used lenses. First, we use the digital image correlation method to calculate the in-plane rigid-body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we use a calibration board to calibrate the camera and obtain calibration parameters, which are used to correct the calculation points of the image before and after deformation. The displacement fields before and after correction are compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses with four commonly used distortion models.
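The correction of the calculation points typically amounts to inverting a forward lens distortion model. A minimal sketch assuming a standard Brown radial-tangential model with hypothetical parameter names (the paper compares several distortion models, which are not reproduced here):

```python
import numpy as np

def undistort_points(pts, K, dist):
    """Undistort pixel points by fixed-point inversion of a Brown model.

    pts  : (N, 2) distorted pixel coordinates
    K    : 3x3 camera intrinsic matrix
    dist : (k1, k2, p1, p2) radial and tangential coefficients
    """
    k1, k2, p1, p2 = dist
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x = (pts[:, 0] - cx) / fx            # normalized distorted coordinates
    y = (pts[:, 1] - cy) / fy
    x_u, y_u = x.copy(), y.copy()
    for _ in range(5):                   # a few fixed-point iterations suffice
        r2 = x_u**2 + y_u**2
        radial = 1.0 + k1 * r2 + k2 * r2**2
        dx = 2.0 * p1 * x_u * y_u + p2 * (r2 + 2.0 * x_u**2)
        dy = p1 * (r2 + 2.0 * y_u**2) + 2.0 * p2 * x_u * y_u
        x_u = (x - dx) / radial
        y_u = (y - dy) / radial
    return np.column_stack([x_u * fx + cx, y_u * fy + cy])
```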
Engineering design criteria for an image intensifier/image converter camera
NASA Technical Reports Server (NTRS)
Sharpsteen, J. T.; Lund, D. L.; Stoap, L. J.; Solheim, C. D.
1976-01-01
The design, display, and evaluation of an image intensifier/image converter camera that can be utilized for various space shuttle experiment requirements are described. An image intensifier tube was used in combination with two brassboards serving as the power supply and evaluated for night photography in the field. Pictures were obtained showing field details that would have been indistinguishable to the naked eye or to an ordinary camera.
Plenoptic Image Motion Deblurring.
Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo
2018-04-01
We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since the motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state-of-the-art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake, as demonstrated on both synthetic and real light field data.
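The abstract does not spell out the energy; a generic blind-deconvolution objective of the kind referred to, here written with an assumed sparse-gradient prior on the sharp texture and a quadratic penalty on the kernel, would read

$$\min_{x,\,k}\;\|y - k \ast x\|_2^2 \;+\; \lambda_x \|\nabla x\|_1 \;+\; \lambda_k \|k\|_2^2,$$

where y is the observed blurry light field, x the latent sharp scene texture, k the motion blur kernel, and k * x denotes the forward plenoptic imaging model applied to the blurred texture; λ_x and λ_k weight the two priors.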
a Spatio-Spectral Camera for High Resolution Hyperspectral Imaging
NASA Astrophysics Data System (ADS)
Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.
2017-08-01
Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.
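The essence of spatio-spectral acquisition is that each sensor row views the scene through one filter line, so successive, registered frames progressively fill the spectral dimension. The deliberately naive sketch below bins whole-pixel along-track shifts into a hyperspectral cube; real processing involves sub-pixel registration and orthorectification, and all names here are hypothetical:

```python
import numpy as np

def assemble_cube(frames, band_of_row, shifts, n_bands):
    """Naive spatio-spectral cube assembly under simplified geometry.

    frames      : list of (H, W) images from the stepwise-filter sensor
    band_of_row : band_of_row[r] = spectral band seen by sensor row r
    shifts      : per-frame along-track shift in whole pixels (registration)
    Returns a (n_bands, track_length, W) cube of averaged samples.
    """
    H, W = frames[0].shape
    length = int(max(shifts)) + H
    cube = np.zeros((n_bands, length, W))
    hits = np.zeros((n_bands, length, 1))
    for frame, s in zip(frames, shifts):
        for row in range(H):
            line = int(s) + row                  # where this row lands
            cube[band_of_row[row], line] += frame[row]
            hits[band_of_row[row], line] += 1.0
    return cube / np.maximum(hits, 1.0)          # average overlapping samples
```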
VizieR Online Data Catalog: Differential photometry of the EB* HATS551-027 (Zhou+, 2015)
NASA Astrophysics Data System (ADS)
Zhou, G.; Bayliss, D.; Hartman, J. D.; Rabus, M.; Bakos, G. A.; Jordan, A.; Brahm, R.; Penev, K.; Csubry, Z.; Mancini, L.; Espinoza, N.; de Val-Borro, M.; Bhatti, W.; Ciceri, S.; Henning, T.; Schmidt, B.; Murphy, S. J.; Butler, R. P.; Arriagada, P.; Shectman, S.; Crane, J.; Thompson, I.; Suc, V.; Noyes, R. W.
2017-11-01
The eclipses of HATS551-027 were first identified by observations from the HATSouth survey (Bakos et al. 2013PASP..125..154B). HATSouth is a global network of identical, fully robotic telescopes, providing continuous monitoring of selected 128 deg2 fields of the southern sky. A total of 16622 observations of HATS551-027 were obtained from HATSouth units HS-1, HS-2 in Chile, HS-3, HS-4 in Namibia, and HS-6 in Australia from 2009 September to 2010 September. Two secondary eclipses of HATS551-027 were observed by the Merope camera on 2-m Faulkes Telescope South (FTS), at Siding Spring Observatory, on 2012 December 12 and 2013 March 20. A near-complete primary eclipse of HATS551-027 was observed by the SITe#3 camera on the Swope 1 m telescope at Las Campanas Observatory, Chile, on 2013 February 26. (1 data file).
VizieR Online Data Catalog: Candidate X-ray OB stars in MYStIX regions (Povich+, 2017)
NASA Astrophysics Data System (ADS)
Povich, M. S.; Busk, H. A.; Feigelson, E. D.; Townsley, L. K.; Kuhn, M. A.
2017-10-01
X-ray point source catalogs for the 18 Massive Young Star-forming Complex Study in Infrared and X-Rays (MYStIX) regions studied here were produced by Kuhn+ (2010, J/ApJ/725/2485 and 2013, J/ApJS/209/27) and Townsley+ (2014, J/ApJS/213/1) from archival Chandra Advanced CCD Imaging Spectrometer (ACIS) observations. MYStIX JHKs NIR photometry was obtained from images taken with the United Kingdom Infrared Telescope (UKIRT) Wide-field Camera or from the Two-Micron All-Sky Survey (2MASS). See section 2 for further details. Spitzer MIR photometry at 3.6, 4.5, 5.8, and 8.0um was provided either by the Galactic Legacy Mid-Plane Survey Extraordinaire (GLIMPSE; Benjamin+ 2003PASP..115..953B) or by Kuhn+ (2013, J/ApJS/209/29). (4 data files).
NASA Astrophysics Data System (ADS)
Zhao, Jiaye; Wen, Huihui; Liu, Zhanwei; Rong, Jili; Xie, Huimin
2018-05-01
Three-dimensional (3D) deformation measurements are a key issue in experimental mechanics. In this paper, a displacement field correlation (DFC) method to measure centrosymmetric 3D dynamic deformation using a single camera is proposed for the first time. When 3D deformation information is collected by a camera at a tilted angle, the measured displacement fields are coupling fields of both the in-plane and out-of-plane displacements. The features of the coupling field are analysed in detail, and a decoupling algorithm based on DFC is proposed. The 3D deformation to be measured can be inverted and reconstructed using only one coupling field. The accuracy of this method was validated by a high-speed impact experiment that simulated an underwater explosion. The experimental results show that the approach proposed in this paper can be used in 3D deformation measurements with higher sensitivity and accuracy, and is especially suitable for high-speed centrosymmetric deformation. In addition, this method avoids the non-synchronisation problem associated with using a pair of high-speed cameras, as is common in 3D dynamic measurements.
Wide-field Fluorescent Microscopy and Fluorescent Imaging Flow Cytometry on a Cell-phone
Zhu, Hongying; Ozcan, Aydogan
2013-01-01
Fluorescent microscopy and flow cytometry are widely used tools in biomedical research and clinical diagnosis. However, these devices are in general relatively bulky and costly, making them less effective in resource-limited settings. To potentially address these limitations, we have recently demonstrated the integration of wide-field fluorescent microscopy and imaging flow cytometry tools on cell-phones using compact, light-weight, and cost-effective opto-fluidic attachments. In our flow cytometry design, fluorescently labeled cells are flushed through a microfluidic channel that is positioned above the existing cell-phone camera unit. Battery-powered light-emitting diodes (LEDs) are butt-coupled to the side of this microfluidic chip, which effectively acts as a multi-mode slab waveguide, where the excitation light is guided to uniformly excite the fluorescent targets. The cell-phone camera records a time-lapse movie of the fluorescent cells flowing through the microfluidic channel, and the digital frames of this movie are processed to count the number of labeled cells within the target solution of interest. Using a similar opto-fluidic design, we can also image these fluorescently labeled cells in static mode, e.g. by sandwiching the fluorescent particles between two glass slides and capturing their fluorescent images with the cell-phone camera, which can achieve a spatial resolution of ~10 μm over a very large field of view of ~81 mm². This cell-phone based fluorescent imaging flow cytometry and microscopy platform might be especially useful in resource-limited settings, e.g. for counting CD4+ T cells toward monitoring of HIV+ patients or for detecting water-borne parasites in drinking water. PMID:23603893
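A minimal sketch of the frame-processing step, standing in for whatever segmentation the authors' software actually applies: threshold one fluorescence frame and count connected bright regions above a small-area noise floor (the threshold and minimum area are assumed values):

```python
import numpy as np
from scipy import ndimage

def count_cells(frame: np.ndarray, threshold: float, min_area: int = 4) -> int:
    """Count fluorescently labeled cells in one grayscale video frame."""
    mask = frame > threshold                       # isolate bright cells
    labels, n = ndimage.label(mask)                # connected components
    if n == 0:
        return 0
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum(np.asarray(areas) >= min_area))  # drop 1-pixel noise
```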
Airport Remote Tower Sensor Systems
NASA Technical Reports Server (NTRS)
Maluf, David A.; Gawdiak, Yuri; Leidichj, Christopher; Papasin, Richard; Tran, Peter B.; Bass, Kevin
2006-01-01
Networks of video cameras, meteorological sensors, and ancillary electronic equipment are under development in collaboration among NASA Ames Research Center, the Federal Aviation Administration (FAA), and the National Oceanic and Atmospheric Administration (NOAA). These networks are to be established at and near airports to provide real-time information on local weather conditions that affect aircraft approaches and landings. The prototype network is an airport-approach-zone camera system (AAZCS), which has been deployed at San Francisco International Airport (SFO) and San Carlos Airport (SQL). The AAZCS includes remotely controlled color video cameras located on top of the SFO and SQL air-traffic control towers. The cameras are controlled by the NOAA Center Weather Service Unit located at the Oakland Air Route Traffic Control Center and are accessible via a secure Web site. The AAZCS cameras can be zoomed and can be panned and tilted to cover a field of view 220° wide. The NOAA observer can see the sky condition as it is changing, thereby making possible a real-time evaluation of the conditions along the approach zones of SFO and SQL. The next-generation network, denoted a remote tower sensor system (RTSS), will soon be deployed at the Half Moon Bay Airport, and a version of it will eventually be deployed at Los Angeles International Airport. In addition to remote control of video cameras via secure Web links, the RTSS offers real-time weather observations, remote sensing, portability, and a capability for deployment at remote and uninhabited sites. The RTSS can be used at airports that lack control towers, as well as at major airport hubs, to provide synthetic augmentation of vision for both local and remote operations under what would otherwise be conditions of low or even zero visibility.
Localization and Mapping Using a Non-Central Catadioptric Camera System
NASA Astrophysics Data System (ADS)
Khurana, M.; Armenakis, C.
2018-05-01
This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find use in the navigation and mapping of robotic platforms owing to their wide field of view. Having a wider field of view, or rather a potential 360° field of view, allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low-cost system consisting of a mirror and a camera; any perspective camera can be used. A platform was constructed to combine the mirror and camera into a catadioptric system. A calibration method was developed to obtain the relative position and orientation between the two components so that they can be treated as one monolithic system. The mathematical model for localizing the system was derived using conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achievable localization and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.
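Under a central-camera approximation (a simplification of the non-central model treated in the paper), the epipolar step can be sketched with standard tools: recover the relative pose between two platform stations from matched image points and triangulate object coordinates. Function and variable names are illustrative:

```python
import cv2
import numpy as np

def relative_pose_and_map(pts1, pts2, K):
    """Relative pose between two stations plus triangulated object points.

    pts1, pts2 : (N, 2) float arrays of matched image points
    K          : 3x3 intrinsic matrix of the (approximated) central camera
    """
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)   # station 2 w.r.t. 1
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # homogeneous 4xN
    return R, t, (X_h[:3] / X_h[3]).T                    # Euclidean (N, 3)
```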
NASA Astrophysics Data System (ADS)
Rossi, Marco; Pierron, Fabrice; Forquin, Pascal
2014-02-01
Ultra-high speed (UHS) cameras allow us to acquire images typically up to about 1 million frames per second at a full spatial resolution of the order of 1 Mpixel. Different technologies are available nowadays to achieve this performance; an interesting one is the so-called in situ storage image sensor architecture, where the image storage is incorporated into the sensor chip. Such an architecture is all solid state and does not contain movable devices, as occur, for instance, in rotating-mirror UHS cameras. One of the disadvantages of this system is the low fill factor (around 76% in the vertical direction and 14% in the horizontal direction), since most of the space in the sensor is occupied by memory. This peculiarity introduces a series of systematic errors when the camera is used to perform full-field strain measurements. The aim of this paper is to develop an experimental procedure to thoroughly characterize the performance of such cameras in full-field deformation measurement and to identify the operating conditions which minimize the measurement errors. A series of tests was performed on a Shimadzu HPV-1 UHS camera, first using uniform scenes and then grids under rigid movements. The grid method was used as the full-field optical measurement technique. From these tests it was possible to characterize the camera behaviour and use this information to improve actual measurements.
The Surgeon's View: Comparison of Two Digital Video Recording Systems in Veterinary Surgery.
Giusto, Gessica; Caramello, Vittorio; Comino, Francesco; Gandini, Marco
2015-01-01
Video recording and photography during surgical procedures are useful in veterinary medicine for several reasons, including legal, educational, and archival purposes. Many systems are available, such as hand cameras, light-mounted cameras, and head cameras. We chose a reasonably priced head camera that is among the smallest video cameras available. To best describe its possible uses and advantages, we recorded video and images of eight different surgical cases and procedures, both in hospital and field settings. All procedures were recorded both with a head-mounted camera and a commercial hand-held photo camera. Then sixteen volunteers (eight senior clinicians and eight final-year students) completed an evaluation questionnaire. Both cameras produced high-quality photographs and videos, but observers rated the head camera significantly better regarding point of view and their understanding of the surgical operation. The head camera was considered significantly more useful in teaching surgical procedures. Interestingly, senior clinicians tended to assign generally lower scores compared to students. The head camera we tested is an effective, easy-to-use tool for recording surgeries and various veterinary procedures in all situations, with no need for assistance from a dedicated operator. It can be a valuable aid for veterinarians working in all fields of the profession and a useful tool for veterinary surgical education.
Analysis of southwest propagating TIDs in the western United States
NASA Astrophysics Data System (ADS)
Kendall, E. A.; Bhatt, A.
2016-12-01
The MANGO network of 630 nm all-sky imagers in the continental United States has observed a number of westward propagating traveling ionospheric disturbances (TIDs). These TIDs include southwestward waves typically associated with Perkins electrodynamic instability, and also northwestward waves of unknown cause. A peak in the wave activity was observed during the summer of 2016 in the western US. Many of the observed structures evolve during their passage through the camera field of view. The southwestward propagating TIDs observed over California are often tilted westward or slightly northward, which may be a function of magnetic field declination. We will present analysis of MANGO network data along with GPS TEC data. This analysis will include shapes and sizes of the observed structures along with their velocities. We will present results from geomagnetic, seasonal and local time variations associated with observed TIDs. Wherever possible, we will include data from the broader MANGO network that is now taking data over the continental United States and compare with data from Boston University imagers in Massachusetts and Texas.
Photogrammetry System and Method for Determining Relative Motion Between Two Bodies
NASA Technical Reports Server (NTRS)
Miller, Samuel A. (Inventor); Severance, Kurt (Inventor)
2014-01-01
A photogrammetry system and method provide for determining the relative position between two objects. The system utilizes one or more imaging devices, such as high speed cameras, that are mounted on a first body, and three or more photogrammetry targets of a known location on a second body. The system and method can be utilized with cameras having fish-eye, hyperbolic, omnidirectional, or other lenses. The system and method do not require overlapping fields-of-view if two or more cameras are utilized. The system and method derive relative orientation by equally weighting information from an arbitrary number of heterogeneous cameras, all with non-overlapping fields-of-view. Furthermore, the system can make the measurements with arbitrary wide-angle lenses on the cameras.
Optical analysis of a compound quasi-microscope for planetary landers
NASA Technical Reports Server (NTRS)
Wall, S. D.; Burcher, E. E.; Huck, F. O.
1974-01-01
A quasi-microscope concept, consisting of a facsimile camera augmented with an auxiliary lens as a magnifier, was introduced and analyzed. The performance achievable with this concept was primarily limited by a trade-off between resolution and object field; this approach leads to a limiting resolution of 20 microns when used with the Viking lander camera (which has an angular resolution of 0.04 deg). An optical system is analyzed which includes a field lens between the camera and the auxiliary lens to overcome this limitation. It is found that this system, referred to as a compound quasi-microscope, can provide improved resolution (to about 2 microns) and a larger object field. However, this improvement comes at the expense of increased complexity, special camera design requirements, and tighter tolerances on the distances between optical components.
Graphic Arts: Process Camera, Stripping, and Platemaking. Third Edition.
ERIC Educational Resources Information Center
Crummett, Dan
This document contains teacher and student materials for a course in graphic arts concentrating on camera work, stripping, and plate making in the printing process. Eight units of instruction cover the following topics: (1) the process camera and darkroom equipment; (2) line photography; (3) halftone photography; (4) other darkroom techniques; (5)…
Report Of The HST Strategy Panel: A Strategy For Recovery
1991-01-01
orbit change out: the Wide Field/Planetary Camera II (WFPC II), the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS) and the Space ...are the Space Telescope Imaging Spectrograph (STIS), the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), and the second Wide Field and...expected to fail to lock due to duplicity was 20%; on-orbit data indicates that 10% may be a better estimate, but the guide stars were preselected
NASA Astrophysics Data System (ADS)
Gonzaga, S.; et al.
2011-03-01
ACS was designed to provide a deep, wide-field survey capability from the visible to near-IR using the Wide Field Camera (WFC), high resolution imaging from the near-UV to near-IR with the now-defunct High Resolution Camera (HRC), and solar-blind far-UV imaging using the Solar Blind Camera (SBC). The discovery efficiency of ACS's Wide Field Channel (i.e., the product of WFC's field of view and throughput) is 10 times greater than that of WFPC2. The failure of ACS's CCD electronics in January 2007 brought a temporary halt to CCD imaging until Servicing Mission 4 in May 2009, when WFC functionality was restored. Unfortunately, the high-resolution optical imaging capability of HRC was not recovered.
2012-03-08
...to-Use 3-D Camera for Measurements in Turbulent Flow Fields (B. Thurow, Auburn). [Presentation-slide residue; the recoverable points: conventional 2-D imaging trades depth of field against blur, a reduced aperture restricts angular information and lowers signal levels, and a plenoptic camera records the light field.]
NASA Astrophysics Data System (ADS)
Kersten, T. P.; Stallmann, D.; Tschirschwitz, F.
2016-06-01
For mapping of building interiors, various 2D and 3D indoor surveying systems are available today. These systems essentially differ from each other in price and accuracy as well as in the effort required for fieldwork and post-processing. The Laboratory for Photogrammetry & Laser Scanning of HafenCity University (HCU) Hamburg has developed, as part of an industrial project, a low-cost indoor mapping system which enables systematic inventory mapping of interior facilities with low staffing requirements and reduced, measurable expenditure of time and effort. The modelling and evaluation of the recorded data take place later in the office. The indoor mapping system of HCU Hamburg consists of the following components: laser range finder, panorama head (pan-tilt unit), single-board computer (Raspberry Pi) with digital camera, and battery power supply. The camera is pre-calibrated in a photogrammetric test field under laboratory conditions. However, remaining systematic image errors are corrected simultaneously within the generation of the panorama image. For cost reasons the camera and laser range finder are not coaxially arranged on the panorama head. Therefore, the eccentricity and alignment of the laser range finder relative to the camera must be determined in a system calibration. For the verification of the system accuracy and the system calibration, the laser points were determined from measurements with total stations. The differences from the reference were 4-5 mm for individual coordinates.
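As a rough illustration of how such a calibrated eccentricity and alignment would be applied, the sketch below maps a single range-finder return into the camera frame. It is a minimal sketch under assumed values: the rotation angle, lever arm, and function names are hypothetical, not taken from the paper.

```python
import numpy as np

def lrf_point_in_camera(range_m, R_cl, t_cl):
    # R_cl (rotation) and t_cl (offset) encode the calibrated eccentricity
    # and alignment of the laser range finder relative to the camera.
    p_lrf = np.array([0.0, 0.0, range_m])  # the LRF measures along its own z-axis
    return R_cl @ p_lrf + t_cl

theta = np.deg2rad(1.5)  # assumed small angular misalignment about the y-axis
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.05, 0.00, 0.02])  # assumed 5 cm lateral, 2 cm axial offset
print(lrf_point_in_camera(4.20, R, t))  # 3D point in camera coordinates (metres)
```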
Charge Diffusion Variations in Pan-STARRS1 CCDs
NASA Astrophysics Data System (ADS)
Magnier, Eugene A.; Tonry, J. L.; Finkbeiner, D.; Schlafly, E.; Burgett, W. S.; Chambers, K. C.; Flewelling, H. A.; Hodapp, K. W.; Kaiser, N.; Kudritzki, R.-P.; Metcalfe, N.; Wainscoat, R. J.; Waters, C. Z.
2018-06-01
Thick back-illuminated deep-depletion CCDs have superior quantum efficiency over previous generations of thinned and traditional thick CCDs. As a result, they are being used for wide-field imaging cameras in several major projects. We use observations from the Pan-STARRS 3π survey to characterize the behavior of the deep-depletion devices used in the Pan-STARRS 1 Gigapixel Camera. We have identified systematic spatial variations in the photometric measurements and stellar profiles that are similar in pattern to the so-called “tree rings” identified in devices used by other wide-field cameras (e.g., DECam and Hyper Suprime-Cam). The tree-ring features identified in these other cameras result from lateral electric fields that displace the electrons as they are transported in the silicon to the pixel location. In contrast, we show that the photometric and morphological modifications observed in the GPC1 detectors are caused by variations in the vertical charge transportation rate and resulting charge diffusion variations.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1989-01-01
A method and apparatus are developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. Cameras obtaining the images for a stereo view are converged at a point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of the cameras' zoom lenses are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning the stereo view-collecting camera system about a circle which passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.
Multi-Angle Snowflake Camera Instrument Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuefer, Martin; Bailey, J.
2016-07-01
The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Yanle, E-mail: Hu.Yanle@mayo.edu; Rankine, Leith; Green, Olga L.
Purpose: To characterize the performance of the onboard imaging unit for the first clinical magnetic resonance image guided radiation therapy (MR-IGRT) system. Methods: The imaging performance characterization included four components: ACR (the American College of Radiology) phantom test, spatial integrity, coil signal to noise ratio (SNR) and uniformity, and magnetic field homogeneity. The ACR phantom test was performed in accordance with the ACR phantom test guidance. The spatial integrity test was evaluated using a 40.8 × 40.8 × 40.8 cm³ spatial integrity phantom. MR and computed tomography (CT) images of the phantom were acquired and coregistered. Objects were identified around the surfaces of 20 and 35 cm diameters of spherical volume (DSVs) on both the MR and CT images. Geometric distortion was quantified using deviation in object location between the MR and CT images. The coil SNR test was performed according to the National Electrical Manufacturers Association (NEMA) standards MS-1 and MS-9. The magnetic field homogeneity was measured using field camera and spectral peak methods. Results: For the ACR tests, the slice position error was less than 0.10 cm, the slice thickness error was less than 0.05 cm, the resolved high-contrast spatial resolution was 0.09 cm, the resolved low-contrast spokes were more than 25, the image intensity uniformity was above 93%, and the percentage ghosting was less than 0.22%. All were within the ACR recommended specifications. The maximum geometric distortions within the 20 and 35 cm DSVs were 0.10 and 0.18 cm for high spatial resolution three-dimensional images and 0.08 and 0.20 cm for high temporal resolution two-dimensional cine images based on the distance-to-phantom-center method. The average SNR was 12.0 for the body coil, 42.9 for the combined torso coil, and 44.0 for the combined head and neck coil. Magnetic field homogeneities at gantry angles of 0°, 30°, 60°, 90°, and 120° were 23.55, 20.43, 18.76, 19.11, and 22.22 ppm, respectively, using the field camera method over the 45 cm DSV. Conclusions: The onboard imaging unit of the first commercial MR-IGRT system meets ACR, NEMA, and vendor specifications.
NASA Astrophysics Data System (ADS)
Divine, Dmitry; Pedersen, Christina; Karlsen, Tor Ivan; Aas, Harald; Granskog, Mats; Renner, Angelika; Spreen, Gunnar; Gerland, Sebastian
2013-04-01
A new thin-ice Arctic paradigm requires reconsideration of the set of parameterizations of mass and energy exchange within the ocean-sea-ice-atmosphere system used in modern CGCMs. Such a reassessment requires a comprehensive collection of measurements made specifically on first-year pack ice, with a focus on the summer melt season, when the difference from conditions typical of the earlier multi-year Arctic sea ice cover becomes most pronounced. Previous in situ studies have demonstrated the crucial importance of smaller-scale (i.e. less than 10 m) surface topography features for the seasonal evolution of pack ice. During 2011-2012 NPI developed a helicopter-borne ICE stereocamera system intended for mapping sea ice surface topography and aerial photography. The hardware component of the system comprises two Canon 5D Mark II cameras, a combined GPS/INS unit by Novatel, and a laser altimeter mounted in a single enclosure outside the helicopter. The unit is controlled by a PXI chassis mounted inside the helicopter cabin. The ICE stereocamera system was deployed for the first time during the 2012 summer field season. The hardware setup proved highly reliable and was used in about 30 helicopter flights over Arctic sea ice during July-September. Being highly automated, it required minimal human supervision during in-flight operation. The camera system was mostly deployed in combination with the EM-bird, which measures sea-ice thickness, and this combination provides an integrated view of the sea ice cover along the flight track. During flight the cameras shot sequentially at an interval of 1 second to ensure sufficient overlap between subsequent images. Some 35,000 images of the sea ice/water surface captured per camera add up to 6 TB of data collected during the first field season. The reconstruction of the digital elevation model of the sea ice surface will be done using SOCET SET commercial software. Refraction at the water/air interface can also be taken into account, providing valuable data on melt pond coverage, depth, and bottom topography, the primary goals for the system at its present stage. Preliminary analysis of the reconstructed 3D scenes of ponded first-year ice for selected sites has shown good agreement with in situ measurements, demonstrating the good scientific potential of the ICE stereocamera system.
3D SAPIV particle field reconstruction method based on adaptive threshold.
Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi
2018-03-01
Particle image velocimetry (PIV) is an essential flow field diagnostic technique that provides instantaneous velocimetry information non-intrusively. Three-dimensional (3D) PIV methods can supply a full understanding of a 3D structure, the complete stress tensor, and the vorticity vector in complex flows. In synthetic aperture particle image velocimetry (SAPIV), the flow field can be measured at large particle intensities from the same direction by different cameras. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the 3D measurement volume integrally, the three-dimensional location information of the focused particles can be reconstructed. The cross correlations between the images captured by the cameras and the images projected from the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value at which the correlation coefficient reaches its maximum. A numerical simulation of a 16-camera array and a particle field at two adjacent time events quantitatively evaluates the performance of the proposed method. An experimental system consisting of an array of 16 cameras was used to reconstruct four adjacent frames in a vortex flow field. The results show that the proposed reconstruction method can effectively reconstruct 3D particle fields.
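The threshold-selection step can be illustrated compactly. The toy sketch below stands in for the method under strong simplifying assumptions: orthographic sums along the volume axes play the role of the cameras' projections, and the correlation-versus-threshold samples are fitted with a cubic whose maximum gives the selected threshold. The projection model and all names are illustrative, not the authors' implementation.

```python
import numpy as np

def correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def optimal_threshold(volume, captured_views, project, thresholds):
    # Score each candidate threshold by the mean correlation between the
    # re-projected filtered volume and the captured camera images.
    scores = [np.mean([correlation(project(np.where(volume >= t, volume, 0.0), k), img)
                       for k, img in enumerate(captured_views)])
              for t in thresholds]
    coeffs = np.polyfit(thresholds, scores, 3)        # cubic curve fitting
    fine = np.linspace(min(thresholds), max(thresholds), 1001)
    return fine[np.argmax(np.polyval(coeffs, fine))]  # maximum of the fitted cubic

# Toy demo: orthographic projections along three axes stand in for cameras.
rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32)) ** 4                   # sparse bright "particles"
project = lambda v, k: v.sum(axis=k % 3)              # assumed projection model
views = [project(np.where(vol >= 0.5, vol, 0.0), k) for k in range(3)]
print(optimal_threshold(vol, views, project, np.linspace(0.1, 0.9, 9)))
```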
Development of the SEASIS instrument for SEDSAT
NASA Technical Reports Server (NTRS)
Maier, Mark W.
1996-01-01
Two SEASIS experiment objectives are key: take images that allow three-axis attitude determination, and take multi-spectral images of the earth. During the tether mission it is also desirable to capture images of the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all imagery taken during the tether mission until the earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs earth observation with a telephoto-lens camera. Camera video is digitized, compressed, and stored in solid state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360 deg azimuthal field of view by a ±45 degree vertical field measured from a plane normal to the lens boresight axis. It has been shown in Mark Steadham's UAH M.S. thesis that this camera can determine three-axis attitude anytime the earth and one other recognizable celestial object (for example, the sun) are in the field of view. This will be essentially all the time during tether deployment. (2) A second camera system using a telephoto lens and filter wheel. The camera is a standard black-and-white video camera. The filters are chosen to cover the visible spectral bands of remote sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating point Digital Signal Processors. This processor module was supplied under ARPA contract by the Space Computer Corporation to demonstrate its use in space.
Validation of geometric models for fisheye lenses
NASA Astrophysics Data System (ADS)
Schneider, D.; Schwalbe, E.; Maas, H.-G.
The paper focuses on the photogrammetric investigation of geometric models for different types of optical fisheye constructions (equidistant, equisolid-angle, stereographic and orthographic projection). These models were implemented and thoroughly tested in a spatial resection and a self-calibrating bundle adjustment. For this purpose, fisheye images were taken with a Nikkor 8 mm fisheye lens on a Kodak DSC 14n Pro digital camera in a hemispherical calibration room. Both the spatial resection and the bundle adjustment resulted in a standard deviation of unit weight of 1/10 pixel with a suitable set of simultaneous calibration parameters introduced into the camera model. The camera-lens combination was treated with all four basic models mentioned above. Using the same set of additional lens distortion parameters, the differences between the models can largely be compensated, delivering almost the same precision parameters. The relative object space precision obtained from the bundle adjustment was ca. 1:10,000 of the object dimensions. This can be considered a very satisfying result, as fisheye images generally have a lower geometric resolution as a consequence of their large field of view, and an inferior imaging quality in comparison to most central perspective lenses.
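For reference, the four projection models named above relate the radial image distance r to the incidence angle θ and the focal length f by standard closed forms. The sketch below evaluates them at the edge of a 180° field for f = 8 mm (the focal length of the Nikkor lens used); the formulas are the textbook ones, not anything specific to this paper.

```python
import numpy as np

# Radial image distance r(theta) for the four basic fisheye models.
def equidistant(theta, f):    return f * theta
def equisolid(theta, f):      return 2 * f * np.sin(theta / 2)
def stereographic(theta, f):  return 2 * f * np.tan(theta / 2)
def orthographic(theta, f):   return f * np.sin(theta)

theta = np.deg2rad(90)        # edge of a 180-degree field of view
for model in (equidistant, equisolid, stereographic, orthographic):
    print(f"{model.__name__:>13}: r = {model(theta, 8.0):.2f} mm")
```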
Development of Flight Slit-Jaw Optics for Chromospheric Lyman-Alpha SpectroPolarimeter
NASA Technical Reports Server (NTRS)
Kubo, Masahito; Suematsu, Yoshinori; Kano, Ryohei; Bando, Takamasa; Hara, Hirohisa; Narukage, Noriyuki; Katsukawa, Yukio; Ishikawa, Ryoko; Ishikawa, Shin-nosuke; Kobiki, Toshihiko;
2015-01-01
In the CLASP sounding rocket experiment, a mirror-finished slit is placed near the focal point of the telescope. Light reflected by the mirror surface surrounding the slit is imaged by the slit-jaw optical system to form a secondary image in the Lyman-alpha line. This image is used not only in real time during rocket flight to select the pointing direction, but also as scientific data showing the spatial structure of the Lyman-alpha line intensity distribution in the solar chromosphere around the region observed by the spectropolarimeter. The slit-jaw optical system is a unit-magnification (1x) system consisting of a mirror unit containing two off-axis mirrors (a parabolic mirror and a folding mirror), a Lyman-alpha transmission filter, and a camera. The camera was supplied by the United States; everything else was fabricated and tested on the Japanese side. The slit-jaw optics are difficult to access and must be installed where clearance is low, so the optical elements that affect optical performance and require fine adjustment are gathered into the mirror unit. On the other hand, because of the alignment of the solar sensor at the US launch site, the holder containing the Lyman-alpha transmission filter must be removable separately from the mirror unit. To keep the structure simple, stray-light countermeasures are concentrated around the Lyman-alpha transmission filter. To overcome the difficulty of performing optical alignment at the Lyman-alpha wavelength, which is absorbed by the atmosphere, the following four steps were planned to reduce the alignment time: 1. Measure in advance the refractive index of the Lyman-alpha transmission filter at the Lyman-alpha wavelength (121.567 nm), and prepare a visible-light filter having the same optical path length in visible light (630 nm). 2. Before mounting the mirror unit on the CLASP structure, place a dummy slit and the camera at their prescribed positions in a dedicated frame and complete the internal alignment adjustment. 3. Attach the mirror unit and the visible-light filter to the CLASP structure and adjust the position of the flight camera so that it is in focus in visible light. 4. Replace the visible-light filter with the Lyman-alpha transmission filter and confirm at the Lyman-alpha wavelength (under vacuum) that the required optical performance is achieved. At present, the steps up to 3 are complete, and optical performance satisfying the required values has been confirmed in visible light. Also, by feeding sunlight through the CLASP telescope into the slit-jaw optical system, it has been confirmed that there is no vignetting within the field of view and that stray-light rejection meets the requirement.
Pilot Fullerton dons EES anti-gravity suit lower torso on middeck
1982-03-30
STS003-23-161 (24 March 1982) --- Astronaut C. Gordon Fullerton, STS-3 pilot, dons an olive drab inner garment which complements the space shuttle Extravehicular Mobility Unit (EMU) spacesuit. Since there are no plans for an extravehicular activity (EVA) on the flight, Fullerton is just getting some practice time "in the field" as he is aboard the Earth-orbiting Columbia. He is in the middeck area of the vehicle. The photograph was taken with a 35mm camera by astronaut Jack R. Lousma, STS-3 commander. Photo credit: NASA
Very High-Speed Digital Video Capability for In-Flight Use
NASA Technical Reports Server (NTRS)
Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald
2006-01-01
A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub(TradeMark) synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the camera's electronic shutter range from 1/200,000 of a second to full open. The recorded images are captured in dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to high-rate recording of images, the system can display images in real time at 30 pps. Inter Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data can also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft. Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and quantify the aerodynamic trajectories of the debris.
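A quick capacity estimate shows why the DRAM buffer limits record time at these rates. The resolution and frame rate below come from the text; the buffer size and one byte per pixel are assumptions for illustration only.

```python
# Rough record-time estimate for the camera head described above.
frame_bytes = 1280 * 1024 * 1     # 1,280 x 1,024 pixels, assumed 8-bit raw
dram_bytes = 8 * 2**30            # hypothetical 8 GB capture buffer
pps = 1000                        # full-resolution rate from the text
print(f"{dram_bytes / (frame_bytes * pps):.1f} s of full-resolution recording")
# ~6.6 s; at 10,000 pps the sensor runs at reduced resolution, so the
# record time does not simply shrink tenfold.
```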
Fischer, Andreas; Kupsch, Christian; Gürtler, Johannes; Czarske, Jürgen
2015-09-21
Non-intrusive fast 3D measurements of volumetric velocity fields are necessary for understanding complex flows. Using high-speed cameras and spectroscopic measurement principles, where the Doppler frequency of scattered light is evaluated within the illuminated plane, each pixel allows one measurement and, thus, planar measurements with high data rates are possible. While scanning is a standard technique for adding the third dimension, the volumetric data is not acquired simultaneously. In order to overcome this drawback, a high-speed light field camera is proposed for obtaining volumetric data with each single frame. The high-speed light field camera approach is applied to a Doppler global velocimeter with sinusoidal laser frequency modulation. As a result, a frequency multiplexing technique is required in addition to the plenoptic refocusing for eliminating the crosstalk between the measurement planes. However, the plenoptic refocusing is still necessary in order to achieve a large refocusing range for a high numerical aperture that minimizes the measurement uncertainty. Finally, two spatially separated measurement planes with 25×25 pixels each are simultaneously acquired at a measurement rate of 0.5 kHz with a single high-speed camera.
A computational approach to real-time image processing for serial time-encoded amplified microscopy
NASA Astrophysics Data System (ADS)
Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi
2016-03-01
High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable capturing images with a frame rate 1,000 times faster than conventional methods such as CCD (charge-coupled device) cameras. Applying this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system including a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 Gsamples/s and 8-bit resolution for capturing the output voltage signal that carries grayscale images from the STEAM camera; the direct data output from the STEAM camera therefore generates 7.0 GB/s continuously. We provided a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices for accelerating the identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device, and a GPU device, and evaluated its performance in real-time identification of small particles (beads), as virtual biological cells, flowing through a microfluidic channel.
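A back-of-envelope budget makes the real-time constraint concrete. The ADC stream rate comes from the text; the reconstructed frame size is an assumed example, not the authors' actual geometry.

```python
sample_rate = 7.0e9                  # ADC samples per second, 8 bits (1 byte) each
raw_gb_per_s = sample_rate / 1e9     # continuous raw stream the FPGA must absorb
frame_samples = 512 * 512            # hypothetical reconstructed frame size
print(f"{raw_gb_per_s:.1f} GB/s raw; "
      f"{sample_rate / frame_samples:,.0f} frames/s at {frame_samples:,} samples/frame")
```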
LIFTING THE VEIL OF DUST TO REVEAL THE SECRETS OF SPIRAL GALAXIES
NASA Technical Reports Server (NTRS)
2002-01-01
Astronomers have combined information from the NASA Hubble Space Telescope's visible- and infrared-light cameras to show the hearts of four spiral galaxies peppered with ancient populations of stars. The top row of pictures, taken by a ground-based telescope, represents complete views of each galaxy. The blue boxes outline the regions observed by the Hubble telescope. The bottom row represents composite pictures from Hubble's visible- and infrared-light cameras, the Wide Field and Planetary Camera 2 (WFPC2) and the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). Astronomers combined views from both cameras to obtain the true ages of the stars surrounding each galaxy's bulge. The Hubble telescope's sharper resolution allows astronomers to study the intricate structure of a galaxy's core. The galaxies are ordered by the size of their bulges. NGC 5838, an 'S0' galaxy, is dominated by a large bulge and has no visible spiral arms; NGC 7537, an 'Sbc' galaxy, has a small bulge and loosely wound spiral arms. Astronomers think that the structure of NGC 7537 is very similar to our Milky Way. The galaxy images are composites made from WFPC2 images taken with blue (4445 Angstroms) and red (8269 Angstroms) filters, and NICMOS images taken in the infrared (16,000 Angstroms). They were taken in June, July, and August of 1997. Credits for the ground-based images: Allan Sandage (The Observatories of the Carnegie Institution of Washington) and John Bedke (Computer Sciences Corporation and the Space Telescope Science Institute) Credits for WFPC2 and NICMOS composites: NASA, ESA, and Reynier Peletier (University of Nottingham, United Kingdom)
Accuracy of Wearable Cameras to Track Social Interactions in Stroke Survivors.
Dhand, Amar; Dalton, Alexandra E; Luke, Douglas A; Gage, Brian F; Lee, Jin-Moo
2016-12-01
Social isolation after a stroke is related to poor outcomes. However, a full study of social networks on stroke outcomes is limited by the current metrics available. Typical measures of social networks rely on self-report, which is vulnerable to response bias and measurement error. We aimed to test the accuracy of an objective measure, wearable cameras, to capture face-to-face social interactions in stroke survivors. If accurate and usable in real-world settings, this technology would allow improved examination of social factors in stroke outcomes. In this prospective study, 10 stroke survivors each wore 2 wearable cameras: Autographer (OMG Life Limited, Oxford, United Kingdom) and Narrative Clip (Narrative, Linköping, Sweden). Each camera automatically took a picture every 20-30 seconds. Patients mingled with healthy controls for 5 minutes of 1-on-1 interactions followed by 5 minutes of no interaction for 2 hours. After the event, 2 blinded judges assessed whether photograph sequences identified interactions or noninteractions. Diagnostic accuracy statistics were calculated. A total of 8776 photographs were taken and adjudicated. In distinguishing interactions, the Autographer's sensitivity was 1.00 and specificity was .98. The Narrative Clip's sensitivity was .58 and specificity was 1.00. The receiver operating characteristic curves of the 2 devices were statistically different (Z = 8.26, P < .001). Wearable cameras can accurately detect social interactions of stroke survivors. Likely because of its large field of view, the Autographer was more sensitive than the Narrative Clip for this purpose.
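The reported statistics follow directly from the adjudicated counts. A minimal sketch with hypothetical counts chosen to roughly reproduce the Autographer's figures (the abstract reports only the ratios):

```python
def diagnostic_accuracy(tp, fn, tn, fp):
    # Sensitivity: detected interactions / all true interactions.
    # Specificity: detected noninteractions / all true noninteractions.
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = diagnostic_accuracy(tp=60, fn=0, tn=59, fp=1)  # assumed counts
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```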
Feral Cattle in the White Rock Canyon Reserve at Los Alamos National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hathcock, Charles D.; Hansen, Leslie A.
2014-03-27
At the request of the Los Alamos Field Office (the Field Office), Los Alamos National Security (LANS) biologists placed remote-triggered wildlife cameras in and around the mouth of Ancho Canyon in the White Rock Canyon Reserve (the Reserve) to monitor use by feral cattle. The cameras were placed in October 2012 and retrieved in January 2013. Two cameras were placed upstream in Ancho Canyon away from the Rio Grande along the perennial flows from Ancho Springs, two cameras were placed at the north side of the mouth of Ancho Canyon along the Rio Grande, and two cameras were placed at the south side of the mouth of Ancho Canyon along the Rio Grande. The cameras recorded three different individual feral cows using this area as well as a variety of local native wildlife. This report details our results and issues associated with feral cattle in the Reserve. Feral cattle pose significant risks to human safety, impact cultural and biological resources, and affect the environmental integrity of the Reserve. Regional stakeholders have communicated to the Field Office that they support feral cattle removal.
Quadrotor helicopter for surface hydrological measurements
NASA Astrophysics Data System (ADS)
Pagano, C.; Tauro, F.; Porfiri, M.; Grimaldi, S.
2013-12-01
Surface hydrological measurements are typically performed through user-assisted and intrusive field methodologies which can be inadequate for monitoring remote and extended areas. In this poster, we present the design and development of a quadrotor helicopter equipped with a digital acquisition system and image calibration units for surface flow measurements. This custom-built aerial vehicle is engineered to be lightweight, low-cost, highly customizable, and stable to guarantee optimal image quality. Quadricopter stability guarantees minimal vibrations during image acquisition and, therefore, improved accuracy in flow velocity estimation through large scale particle image velocimetry algorithms or particle tracking procedures. Stability during pitching and rolling is achieved by adopting a large arm span and a high-wing configuration. Further, the vehicle framework is composed of lightweight aluminum and durable carbon fiber for optimal resilience. The open source Ardupilot microcontroller is used for remote control of the quadricopter. The microcontroller includes an inertial measurement unit (IMU) equipped with accelerometers and gyroscopes for stable flight through feedback control. The vehicle is powered by a 3-cell (11.1 V) 3000 mAh lithium-polymer battery. Electronic equipment and wiring are hosted in the hollow arms and on several carbon fiber platforms in the waterproof fuselage. Four 35 A high-torque motors, one at the far end of each arm, drive 10 × 4.7 inch propellers. Energy dissipation during landing is accomplished by four pivoting legs that, through the use of shock absorbers, prevent the impact energy from damaging the frame. The data capturing system consists of a GoPro Hero3 camera with an in-house built camera gimbal and shock-absorbing damping device. The camera gimbal, hosted below the vehicle fuselage, is engineered to maintain the orthogonality of the camera axis with respect to the water surface by compensating for changes in pitch and roll during flight. The constant orthogonality of the camera leads to minimal image distortions and, therefore, reduced post-processing for picture dewarping. The gimbal is based on a system of two closed-loop DC motors. The motors are controlled through an open source Martinez V3 brushless controller board and an MPU6050 IMU. The IMU is placed on the back of the camera to read the change in orientation during flight. To avoid the physical acquisition of ground reference points for image rectification, low power red lasers facing the water surface are placed on each of the quadricopter arms at known distances. The pixel distance between the laser lights in images is then automatically converted to metric units, as sketched below. Experimental results from outdoor testing on water bodies are reported to demonstrate the feasibility of surface water monitoring through this mobile imaging platform.
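A minimal sketch of that laser-based scaling step, assuming an image-processing stage has already detected the two dots; the laser separation, pixel coordinates, and frame rate below are made-up values, not the poster's.

```python
import numpy as np

def metric_scale(dot_a_px, dot_b_px, laser_separation_m):
    # Metres-per-pixel from two laser dots a known physical distance apart.
    px = np.hypot(dot_a_px[0] - dot_b_px[0], dot_a_px[1] - dot_b_px[1])
    return laser_separation_m / px

scale = metric_scale((812, 540), (1212, 540), 0.60)  # dots 400 px apart
# A tracer displaced 25 px between frames at an assumed 25 fps:
print(f"{scale:.4f} m/px -> {25 * scale / (1 / 25):.2f} m/s surface velocity")
```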
New light field camera based on physical based rendering tracing
NASA Astrophysics Data System (ADS)
Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung
2014-03-01
Even though light field technology was first invented more than 50 years ago, it did not gain popularity due to the limitations imposed by the computation technology of the time. With the rapid advancement of computer technology over the last decade, those limitations have been lifted and light field technology has quickly returned to the research spotlight. In this paper, PBRT (Physically Based Rendering Tracing) was introduced to overcome the limitations of using a traditional optical simulation approach to study light field camera technology. More specifically, the traditional optical simulation approach can only present light energy distributions and typically lacks the capability to present pictures of realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation in creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provided a way to link virtual scenes with real measurement results. Several images developed with the above-mentioned approaches are analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It is shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. The operational constraints, performance metrics, computation resources needed, etc., associated with this newly developed light field camera technique are presented in detail.
Reticle stage based linear dosimeter
Berger, Kurt W [Livermore, CA
2007-03-27
A detector to measure EUV intensity employs a linear array of photodiodes. The detector is particularly suited for photolithography systems that include: (i) a ringfield camera; (ii) a source of radiation; (iii) a condenser for processing radiation from the source to produce a ringfield illumination field for illuminating a mask; (iv) a reticle that is positioned at the ringfield camera's object plane and from which a reticle image in the form of an intensity profile is reflected into the entrance pupil of the ringfield camera, wherein the reticle moves in a direction transverse to the length of the ringfield illumination field that illuminates it; (v) a detector for measuring the entire intensity along the length of the ringfield illumination field that is projected onto the reticle; and (vi) a wafer onto which the reticle image is projected from the ringfield camera.
Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System
Lu, Yu; Wang, Keyi; Fan, Gongshu
2016-01-01
A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor during radiometric response calibration, to eliminate the influence of the focusing effect on the uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses to the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and used to blend images, so that panoramas reflect the objective luminance more faithfully; this compensates for the limitation of stitching methods that achieve realism only through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
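The calibration products described above would be applied per pixel before blending. A minimal sketch, assuming a per-channel inverse-response lookup table, a flat-field vignetting gain map, and a dark-current offset as the calibration outputs (the falloff model and all values here are illustrative):

```python
import numpy as np

def to_scene_luminance(raw, inv_response, vignette_gain, dark):
    # Invert the radiometric response, then undo the vignetting falloff.
    linear = inv_response[np.clip(raw - dark, 0, 255)]
    return linear * vignette_gain

h, w = 120, 160
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)
gain = 1.0 / np.cos(np.arctan(0.7 * r)) ** 4     # assumed cos^4-style falloff
raw = np.random.default_rng(1).integers(10, 250, (h, w))
lum = to_scene_luminance(raw, np.linspace(0.0, 1.0, 256), gain, dark=8)
print(lum.shape, round(float(lum.max()), 3))
```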
SVBRDF-Invariant Shape and Reflectance Estimation from a Light-Field Camera.
Wang, Ting-Chun; Chandraker, Manmohan; Efros, Alexei A; Ramamoorthi, Ravi
2018-03-01
Light-field cameras have recently emerged as a powerful tool for one-shot passive 3D shape capture. However, obtaining the shape of glossy objects like metals or plastics remains challenging, since standard Lambertian cues like photo-consistency cannot be easily applied. In this paper, we derive a spatially-varying (SV)BRDF-invariant theory for recovering 3D shape and reflectance from light-field cameras. Our key theoretical insight is a novel analysis of diffuse plus single-lobe SVBRDFs under a light-field setup. We show that, although direct shape recovery is not possible, an equation relating depths and normals can still be derived. Using this equation, we then propose using a polynomial (quadratic) shape prior to resolve the shape ambiguity. Once shape is estimated, we also recover the reflectance. We present extensive synthetic data on the entire MERL BRDF dataset, as well as a number of real examples to validate the theory, where we simultaneously recover shape and BRDFs from a single image taken with a Lytro Illum camera.
Software defined multi-spectral imaging for Arctic sensor networks
NASA Astrophysics Data System (ADS)
Siewert, Sam; Angoth, Vivek; Krishnamurthy, Ramnarayan; Mani, Karthikeyan; Mock, Kenrick; Singh, Surjith B.; Srivistava, Saurav; Wagner, Chris; Claus, Ryan; Vis, Matthew Demi
2016-05-01
Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry Riddle Aeronautical University working with University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop-in-place installations in the Arctic. The prototype selected will be field tested in Alaska in the summer of 2016.
The system analysis of light field information collection based on the light field imaging
NASA Astrophysics Data System (ADS)
Wang, Ye; Li, Wenhua; Hao, Chenyang
2016-10-01
Augmented reality (AR) technology is becoming a focus of study, and the AR effect of light field imaging makes research on light field cameras attractive. Micro-array structures have been adopted in most light field information acquisition systems (LFIAS) since the emergence of the light field camera, mainly micro lens array (MLA) and micro pinhole array (MPA) systems. This paper reviews the structures of LFIAS commonly used in light field cameras in recent years and analyzes them based on the theory of geometrical optics. Meanwhile, this paper presents a novel LFIAS, a plane grating system, which we call a "micro aperture array (MAA)", and analyzes it based on the knowledge of information optics. This paper shows that there is little difference among the multiple images produced by the plane grating system, and that the plane grating system can collect and record the amplitude and phase information of the light field.
Optimising Camera Traps for Monitoring Small Mammals
Glen, Alistair S.; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce
2013-01-01
Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps. PMID:23840790
Joint estimation of high resolution images and depth maps from light field cameras
NASA Astrophysics Data System (ADS)
Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki
2014-03-01
Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is the limited resolution. This limitation comes from the structure where a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the position resolution trade off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher resolution image from low resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible. This registration is equivalent to depth estimation. Therefore, we propose a method where super-resolution and depth refinement are performed alternately. Most of the process of our method is implemented by image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image threefold horizontally and vertically. Our method can produce clearer images compared to the original sub-aperture images and the case without depth refinement.
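The alternation described above can be summarized structurally. A minimal sketch follows, with placeholder update steps standing in for the authors' actual super-resolution and depth estimators; the arrays and iteration count are purely illustrative:

```python
import numpy as np

# Structural sketch of alternating super-resolution and depth refinement.
# The update steps are toy placeholders, not the paper's estimators.

def refine_depth(sr_image, sub_apertures, depth):
    # Placeholder: the real method re-estimates depth so the sub-aperture
    # images register consistently against the current SR image.
    return depth

def super_resolve(sub_apertures, depth):
    # Placeholder: the real method fuses registered sub-aperture images
    # (e.g., shift-and-add) guided by the current depth map.
    return np.mean(sub_apertures, axis=0)

sub_apertures = np.random.rand(9, 64, 64)   # 9 low-res sub-aperture views
depth = np.zeros((64, 64))                  # initial depth map

for _ in range(5):                          # alternate until convergence
    sr = super_resolve(sub_apertures, depth)
    depth = refine_depth(sr, sub_apertures, depth)
```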
The Europa Imaging System (EIS): Investigating Europa's geology, ice shell, and current activity
NASA Astrophysics Data System (ADS)
Turtle, Elizabeth; Thomas, Nicolas; Fletcher, Leigh; Hayes, Alexander; Ernst, Carolyn; Collins, Geoffrey; Hansen, Candice; Kirk, Randolph L.; Nimmo, Francis; McEwen, Alfred; Hurford, Terry; Barr Mlinar, Amy; Quick, Lynnae; Patterson, Wes; Soderblom, Jason
2016-07-01
NASA's Europa Mission, planned for launch in 2022, will perform more than 40 flybys of Europa with altitudes at closest approach as low as 25 km. The instrument payload includes the Europa Imaging System (EIS), a camera suite designed to transform our understanding of Europa through global decameter-scale coverage, topographic and color mapping, and unprecedented sub-meter-scale imaging. EIS combines narrow-angle and wide-angle cameras to address these science goals: • Constrain the formation processes of surface features by characterizing endogenic geologic structures, surface units, global cross-cutting relationships, and relationships to Europa's subsurface structure and potential near-surface water. • Search for evidence of recent or current activity, including potential plumes. • Characterize the ice shell by constraining its thickness and correlating surface features with subsurface structures detected by ice penetrating radar. • Characterize scientifically compelling landing sites and hazards by determining the nature of the surface at scales relevant to a potential lander. EIS Narrow-angle Camera (NAC): The NAC, with a 2.3° x 1.2° field of view (FOV) and a 10-μrad instantaneous FOV (IFOV), achieves 0.5-m pixel scale over a 2-km-wide swath from 50-km altitude. A 2-axis gimbal enables independent targeting, allowing very high-resolution stereo imaging to generate digital topographic models (DTMs) with 4-m spatial scale and 0.5-m vertical precision over the 2-km swath from 50-km altitude. The gimbal also makes near-global (>95%) mapping of Europa possible at ≤50-m pixel scale, as well as regional stereo imaging. The NAC will also perform high-phase-angle observations to search for potential plumes. EIS Wide-angle Camera (WAC): The WAC has a 48° x 24° FOV, with a 218-μrad IFOV, and is designed to acquire pushbroom stereo swaths along flyby ground-tracks. From an altitude of 50 km, the WAC achieves 11-m pixel scale over a 44-km-wide swath, generating DTMs with 32-m spatial scale and 4-m vertical precision. These data also support characterization of surface clutter for interpretation of radar deep and shallow sounding modes. Detectors: The cameras have identical rapid-readout, radiation-hard 4k x 2k CMOS detectors and can image in both pushbroom and framing modes. Color observations are acquired by pushbroom imaging using six broadband filters (~300-1050 nm), allowing mapping of surface units for correlation with geologic structures, topography, and compositional units from other instruments.
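The quoted pixel scales follow from the small-angle relation: ground sample ≈ IFOV (rad) × altitude. A quick arithmetic check against the NAC and WAC figures in the abstract:

```python
# Ground pixel scale from instantaneous field of view and altitude.
def pixel_scale_m(ifov_rad, altitude_m):
    return ifov_rad * altitude_m

print(pixel_scale_m(10e-6, 50e3))    # NAC: 10 urad x 50 km  -> 0.5 m
print(pixel_scale_m(218e-6, 50e3))   # WAC: 218 urad x 50 km -> ~10.9 m (~11 m)
```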
System for photometric calibration of optoelectronic imaging devices especially streak cameras
Boni, Robert; Jaanimagi, Paul
2003-11-04
A system for the photometric calibration of streak cameras and similar imaging devices provides a precise knowledge of the camera's flat-field response as well as a mapping of the geometric distortions. The system provides the flat-field response, representing the spatial variations in the sensitivity of the recorded output, with a signal-to-noise ratio (SNR) greater than can be achieved in a single submicrosecond streak record. The measurement of the flat-field response is carried out by illuminating the input slit of the streak camera with a signal that is uniform in space and constant in time. This signal is generated by passing a continuous-wave source through an optical homogenizer made up of a light pipe or pipes in which the illumination typically makes several bounces before exiting as a spatially uniform source field. The rectangular cross-section of the homogenizer is matched to the usable photocathode area of the streak tube. The flat-field data set is obtained by using a slow streak ramp that may have a period from one millisecond (ms) to ten seconds (s), but is nominally one second in duration. The system also provides a mapping of the geometric distortions by spatially and temporally modulating the output of the homogenizer and obtaining a data set using the slow streak ramps. All data sets are acquired using a CCD camera and stored on a computer, which is used to calculate all relevant corrections to the signal data sets. The signal and flat-field data sets are both corrected for geometric distortions prior to applying the flat-field correction. Absolute photometric calibration is obtained by measuring the output fluence of the homogenizer with a "standard-traceable" meter and relating that to the CCD pixel values for a self-corrected flat-field data set.
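The processing order described above (geometric correction of both data sets before flat-field division) can be sketched as follows; the arrays and the undistortion step are placeholders, not the actual streak-camera processing chain:

```python
import numpy as np

# Sketch of the correction pipeline: undistort signal and flat-field frames
# first, then divide by the normalized flat field.

def undistort(frame):
    return frame  # placeholder for the geometric-distortion remapping

signal = np.random.rand(512, 512) * 1000.0       # raw streak record
flat   = 0.9 + 0.2 * np.random.rand(512, 512)    # spatial sensitivity map

signal_c = undistort(signal)
flat_c   = undistort(flat)
flat_c  /= flat_c.mean()                         # normalize the flat field
corrected = signal_c / flat_c                    # flat-field correction
```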
Photography Foundations: The Student Photojournalist.
ERIC Educational Resources Information Center
Glowacki, Joseph W.
Designed to aid student publications photographers in taking effective photographs, this publication provides discussions relating to the following areas: a publications photographer's self-image, the camera, camera handling, using the adjustable camera, the light meter, depth of field, shutter speeds and action pictures, lenses for publications…
NASA Astrophysics Data System (ADS)
Hanel, A.; Stilla, U.
2017-05-01
Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, taken also with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images of both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, and a point cloud of the interior of a Volkswagen test car is created as well. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.
NASA Astrophysics Data System (ADS)
Taggart, D. P.; Gribble, R. J.; Bailey, A. D., III; Sugimoto, S.
Recently, a prototype soft x-ray pinhole camera was fielded on FRX-C/LSM at Los Alamos and on TRX at Spectra Technology. The soft x-ray FRC images obtained using this camera stand out in high contrast against their surroundings. The camera was particularly useful for studying the FRC during and shortly after formation when, at certain operating conditions, flute-like structures at the edge and internal structures of the FRC were observed that other diagnostics could not resolve. Building on this early experience, a new soft x-ray pinhole camera, which permits more rapid data acquisition and briefer exposures, was installed on FRX-C/LSM. It will be used to continue studying FRC formation and to look for internal structure later in time that could be a signature of instability. The initial operation of this camera is summarized.
VizieR Online Data Catalog: PHAT. XIX. Formation history of M31 disk (Williams+, 2017)
NASA Astrophysics Data System (ADS)
Williams, B. F.; Dolphin, A. E.; Dalcanton, J. J.; Weisz, D. R.; Bell, E. F.; Lewis, A. R.; Rosenfield, P.; Choi, Y.; Skillman, E.; Monachesi, A.
2018-05-01
The data for this study come from the Panchromatic Hubble Andromeda Treasury (PHAT) survey (Dalcanton+ 2012ApJS..200...18D ; Williams+ 2014, J/ApJS/215/9). Briefly, PHAT is a multiwavelength HST survey mapping 414 contiguous HST fields of the northern M31 disk and bulge in six broad wavelength bands from the near-ultraviolet to the near-infrared. The survey obtained data in the F275W and F336W bands with the UVIS detector of the Wide Field Camera 3 (WFC3), the F475W and F814W bands with the WFC detector of the Advanced Camera for Surveys (ACS), and the F110W and F160W bands with the IR detector of WFC3. (4 data files).
Gate simulation of Compton Ar-Xe gamma-camera for radionuclide imaging in nuclear medicine
NASA Astrophysics Data System (ADS)
Dubov, L. Yu; Belyaev, V. N.; Berdnikova, A. K.; Bolozdynia, A. I.; Akmalova, Yu A.; Shtotsky, Yu V.
2017-01-01
Computer simulations of a cylindrical Compton Ar-Xe gamma camera are described in the current report. The detection efficiency of a cylindrical Ar-Xe Compton camera with an internal diameter of 40 cm is estimated as 1-3%, which is 10-100 times higher than a collimated Anger camera. It is shown that the cylindrical Compton camera can image a Tc-99m radiotracer distribution with a uniform spatial resolution of 20 mm through the whole field of view.
Wrist Camera Orientation for Effective Telerobotic Orbital Replaceable Unit (ORU) Changeout
NASA Technical Reports Server (NTRS)
Jones, Sharon Monica; Aldridge, Hal A.; Vazquez, Sixto L.
1997-01-01
The Hydraulic Manipulator Testbed (HMTB) is the kinematic replica of the Flight Telerobotic Servicer (FTS). One use of the HMTB is to evaluate advanced control techniques for accomplishing robotic maintenance tasks on board the Space Station. Most maintenance tasks involve direct manipulation of the robot by a human operator, for whom high-quality visual feedback is important for precise control. An experiment was conducted in the Systems Integration Branch at the Langley Research Center to compare several configurations of the manipulator wrist camera for providing visual feedback during an Orbital Replaceable Unit changeout task. Several variables were considered, such as wrist camera angle, camera focal length, target location, and lighting. Each study participant performed the maintenance task using eight combinations of the variables based on a Latin square design. The results of this experiment and conclusions based on the data collected are presented.
The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover
NASA Astrophysics Data System (ADS)
Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.
The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with 65° field-of-view (1.1 mrad/pixel) and high resolution (85 µrad/pixel) monoscopic "zoom" images with 5° field-of-view. The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission as well as providing multispectral geological imaging, colour and stereo panoramic images, solar images for water vapour abundance and dust optical depth measurements and to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. Additionally the High Resolution Camera (HRC) can be used for high resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.
Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+
Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J.
2015-01-01
Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons’ point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon’s perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote and controlled by the surgeon. Results: Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera’s automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video. PMID:25750851
Neubauer, Aljoscha S; Rothschuh, Antje; Ulbig, Michael W; Blum, Marcus
2008-03-01
Grading diabetic retinopathy in clinical trials is frequently based on 7-field stereo photography of the fundus in diagnostic mydriasis. In terms of image quality, the FF450(plus) camera (Carl Zeiss Meditec AG, Jena, Germany) defines a high-quality reference. The aim of the study was to investigate whether the fully digital fundus camera Visucam(PRO NM) could serve as an alternative in clinical trials requiring 7-field stereo photography. A total of 128 eyes of diabetes patients were enrolled in the randomized, controlled, prospective trial. Seven-field stereo photography was performed with the Visucam(PRO NM) and the FF450(plus) camera, in random order, both in diagnostic mydriasis. The resulting 256 image sets from the two camera systems were graded for retinopathy levels and image quality (on a scale of 1-5); grading was anonymized and blinded to the image source. On FF450(plus) stereoscopic imaging, 20% of the patients had no or mild diabetic retinopathy (ETDRS level < or = 20) and 29% had no macular oedema. No patient had to be excluded as a result of image quality. Retinopathy level did not influence the quality of grading or of images. Excellent overall correspondence was obtained between the two fundus cameras regarding retinopathy levels (kappa 0.87) and macular oedema (kappa 0.80). In diagnostic mydriasis the image quality of the Visucam was graded as slightly better than that of the FF450(plus) (2.20 versus 2.41; p < 0.001), especially for pupils < 7 mm in mydriasis. The non-mydriatic Visucam(PRO NM) offers good image quality and is suitable as a more cost-efficient and easy-to-operate camera for applications and clinical trials requiring 7-field stereo photography.
Kidd, David G; McCartt, Anne T
2016-02-01
This study characterized the use of various fields of view during low-speed parking maneuvers by drivers with a rearview camera, a sensor system, a camera and sensor system combined, or neither technology. Participants performed four different low-speed parking maneuvers five times. Glances to different fields of view the second time through the four maneuvers were coded, along with the glance locations at the onset of the audible warning from the sensor system and immediately after the warning for participants in the sensor and camera-plus-sensor conditions. Overall, the results suggest that information from cameras and/or sensor systems is used in place of mirrors and shoulder glances. Participants with a camera, sensor system, or both technologies looked over their shoulders significantly less than participants without technology. Participants with cameras (camera and camera-plus-sensor conditions) used their mirrors significantly less than participants without cameras (no-technology and sensor conditions). Participants in the camera-plus-sensor condition looked at the center console/camera display for a smaller percentage of the time during the low-speed maneuvers than participants in the camera condition, and glanced more frequently to the center console/camera display immediately after the warning from the sensor system compared with the frequency of glances to this location at warning onset. Although this increase was not statistically significant, the pattern suggests that participants in the camera-plus-sensor condition may have used the warning as a cue to look at the camera display. The observed differences in glance behavior between study groups were illustrated by relating them to the visibility of a 12-15-month-old child-size object. These findings provide evidence that drivers adapt their glance behavior during low-speed parking maneuvers following extended use of rearview cameras and parking sensors, and suggest that other technologies which augment the driving task may do the same.
Motion capture for human motion measuring by using single camera with triangle markers
NASA Astrophysics Data System (ADS)
Takahashi, Hidenori; Tanaka, Takayuki; Kaneko, Shun'ichi
2005-12-01
This study aims to realize motion capture for measuring 3D human motions with a single camera. Although motion capture using multiple cameras is widely used in the sports, medical, and engineering fields, an optical motion capture method with one camera is not established. In this paper, the authors achieve 3D motion capture with one camera, named Mono-MoCap (MMC), on the basis of two calibration methods and triangle markers whose side lengths are known. The calibration methods determine the 3D coordinate transformation parameters and a lens distortion parameter using the modified DLT method. The triangle markers make it possible to calculate the depth coordinate in the camera coordinate system. In experiments measuring 3D position within a cubic measurement space 2 m on each side, the average error in the measured center of gravity of a triangle marker was less than 2 mm. Compared with conventional motion capture using multiple cameras, the MMC has sufficient accuracy for 3D measurement. Also, by putting a triangle marker on each human joint, the MMC was able to capture a walking motion, a standing-up motion, and a bending and stretching motion. In addition, a method using a triangle marker together with conventional spherical markers was proposed. Finally, a method to estimate the position of a marker by measuring its velocity was proposed in order to improve the accuracy of the MMC.
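Under a pinhole model, a marker of known physical size yields depth directly, which is presumably how the triangle's known side lengths provide the depth coordinate here. A hedged sketch with hypothetical numbers (this is the standard pinhole relation, not necessarily the authors' exact formulation):

```python
# Depth from a known-size feature under a pinhole model: Z = f * L / l,
# with f the focal length in pixels, L the known side length in meters,
# and l the measured side length in pixels.

def depth_from_marker(focal_px, side_m, side_px):
    return focal_px * side_m / side_px

# Hypothetical: 800 px focal length, 0.10 m triangle side imaged as 40 px
print(depth_from_marker(800.0, 0.10, 40.0))  # -> 2.0 m
```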
Optical designs for the Mars '03 rover cameras
NASA Astrophysics Data System (ADS)
Smith, Gregory H.; Hagerott, Edward C.; Scherr, Lawrence M.; Herkenhoff, Kenneth E.; Bell, James F.
2001-12-01
In 2003, NASA is planning to send two robotic rover vehicles to explore the surface of Mars. The spacecraft will land on airbags in different, carefully chosen locations. The search for evidence indicating conditions favorable for past or present life will be a high priority. Each rover will carry a total of ten cameras of five different types. There will be a stereo pair of color panoramic cameras, a stereo pair of wide-field navigation cameras, one close-up camera on a movable arm, two stereo pairs of fisheye cameras for hazard avoidance, and one Sun sensor camera. This paper discusses the lenses for these cameras. Included are the specifications, design approaches, expected optical performances, prescriptions, and tolerances.
Concentration solar power optimization system and method of using the same
Andraka, Charles E
2014-03-18
A system and method for optimizing at least one mirror of at least one CSP system is provided. The system has a screen for displaying light patterns for reflection by the mirror, a camera for receiving a reflection of the light patterns from the mirror, and a solar characterization tool. The solar characterization tool has a characterizing unit for determining at least one mirror parameter of the mirror based on an initial position of the camera and the screen, and a refinement unit for refining the determined parameter(s) based on an adjusted position of the camera and screen whereby the mirror is characterized. The system may also be provided with a solar alignment tool for comparing at least one mirror parameter of the mirror to a design geometry whereby an alignment error is defined, and at least one alignment unit for adjusting the mirror to reduce the alignment error.
NASA Astrophysics Data System (ADS)
Tsai, Tracy; Rella, Chris; Crosson, Eric
2013-04-01
Quantification of fugitive methane emissions from unconventional natural gas (i.e. shale gas, tight sand gas, etc.) production, processing, and transport is essential for scientists, policy-makers, and the energy industry, because methane has a global warming potential of at least 21 times that of carbon dioxide over a span of 100 years [1]. Therefore, fugitive emissions reduce any environmental benefits of using natural gas instead of traditional fossil fuels [2]. Current measurement techniques involve first locating all the possible leaks and then measuring the emission of each leak. This is a painstaking and slow process that cannot be scaled up to the large size of the natural gas industry, in which there are at least half a million natural gas wells in the United States alone [3]. An alternative method is to calculate the emission of a plume through dispersion modeling. This method is a scalable approach, since all the individual leaks within a natural gas facility can be aggregated into a single plume measurement. However, plume dispersion modeling requires additional knowledge of the distance to the source, atmospheric turbulence, and local topography, and it is a mathematically intensive process. Therefore, there is a need for an instrument capable of simple, rapid, and accurate measurements of fugitive methane emissions on a per-wellhead scale. We will present the "plume camera" instrument, which simultaneously measures methane at different spatial points, or pixels. The spatial correlation between methane measurements provides spatial information about the plume, and combined with the wind measurement collected with a sonic anemometer, the flux can be determined. Unlike the plume dispersion model, this approach does not require knowledge of the distance to the source or atmospheric conditions. Moreover, the instrument can fit inside a standard car, so that emission measurements can be performed on a per-wellhead basis. In a controlled experiment with known releases from a methane tank, a 2-pixel plume camera measured 496 ± 160 sccm from a release of 650 sccm located 21 m away, and 4,180 ± 962 sccm from a release of 3,400 sccm located 49 m away. These results, along with results from a higher-pixel camera, will be discussed. Field campaign data collected with the plume camera pixels mounted on a vehicle and driven through the natural gas fields of the Uintah Basin (Utah, United States) will also be presented, along with the limitations and advantages of the instrument. References: 1. S. Solomon, D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.). IPCC, 2007: Climate Change 2007: The Physical Science Basis of the Fourth Assessment Report. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA. 2. R.W. Howarth, R. Santoro, and A. Ingraffea. "Methane and the greenhouse-gas footprint of natural gas from shale formations." Climate Change, 106, 679 (2011). 3. U.S. Energy Information Administration. "Number of Producing Wells."
New Approach for Environmental Monitoring and Plant Observation Using a Light-Field Camera
NASA Astrophysics Data System (ADS)
Schima, Robert; Mollenhauer, Hannes; Grenzdörffer, Görres; Merbach, Ines; Lausch, Angela; Dietrich, Peter; Bumberger, Jan
2015-04-01
The aim of gaining a better understanding of ecosystems and of the processes in nature accentuates the need to observe these processes at higher temporal and spatial resolution. In the field of environmental monitoring, an inexpensive and field-applicable imaging technique to derive three-dimensional information about plants and vegetation would represent a decisive contribution to the understanding of the interactions and dynamics of ecosystems. This is particularly true for the monitoring of plant growth and the frequently mentioned lack of morphological information about the plants, e.g. plant height, vegetation canopy, leaf position or leaf arrangement. Therefore, an innovative and inexpensive light-field (plenoptic) camera, the Lytro LF, and a stereo vision system based on two industrial cameras were tested and evaluated as possible measurement tools for the given monitoring purpose. In this instance, the use of a light-field camera offers the promising opportunity of providing three-dimensional information from one single shot, without any additional requirements during the field measurements, which represents a substantial methodological improvement in the area of environmental research and monitoring. Since the Lytro LF was designed as a daily-life consumer camera, it supports neither depth or distance estimation nor external triggering by default. Therefore, different technical modifications and a calibration routine had to be worked out during the preliminary study. As a result, the light-field camera proved suitable as a depth and distance measurement tool with a measuring range of approximately one meter. Consequently, this confirms the assumption that a light-field camera holds the potential of being a promising measurement tool for environmental monitoring purposes, especially with regard to the low methodological effort in the field. Within the framework of the Global Change Experimental Facility Project, funded by the Helmholtz Centre for Environmental Research, and its large-scale field experiments to investigate the influence of climate change on different forms of land utilization, both techniques were installed and evaluated in a long-term experiment on a pilot-scale maize field in late 2014. Based on this, it was possible to show the growth of the plants as a function of time, in good agreement with the measurements carried out by hand on a weekly basis. In addition, the experiment has shown that the light-field vision approach is applicable to monitoring crop growth under field conditions, although it is limited to close-range applications. Since this work was intended as a proof of concept, further research is recommended, especially with respect to the automation and evaluation of data processing. Altogether, this study is addressed to researchers as elementary groundwork to improve the use of the introduced light-field imaging technique for monitoring plant growth dynamics and the three-dimensional modeling of plants under field conditions.
Toslak, Devrim; Liu, Changgeng; Alam, Minhaj Nur; Yao, Xincheng
2018-06-01
A portable fundus imager is essential for emerging telemedicine screening and point-of-care examination of eye diseases. However, existing portable fundus cameras have limited field of view (FOV) and frequently require pupillary dilation. We report here a miniaturized indirect ophthalmoscopy-based nonmydriatic fundus camera with a snapshot FOV up to 67° external angle, which corresponds to a 101° eye angle. The wide-field fundus camera consists of a near-infrared light source (LS) for retinal guidance and a white LS for color retinal imaging. By incorporating digital image registration and glare elimination methods, a dual-image acquisition approach was used to achieve reflection artifact-free fundus photography.
Plenoptic PIV: Towards simple, robust 3D flow measurements
NASA Astrophysics Data System (ADS)
Thurow, Brian; Fahringer, Tim
2013-11-01
In this work, we report on the recent development of plenoptic PIV for the measurement of 3D flow fields. Plenoptic PIV uses a plenoptic camera to record the 4D light field generated by a volume of particles seeded into a flow field. Plenoptic cameras are primarily known for their ability to computationally refocus or change the perspective of an image after it has been acquired. In this work, we use tomographic algorithms to reconstruct a 3D volume of the particle field and apply a cross-correlation algorithm to a pair of particle volumes to determine the 3D/3C velocity field. The primary advantage of plenoptic PIV over multi-camera techniques is that it uses only a single camera, which greatly reduces the cost and simplifies a typical experimental arrangement. In addition, plenoptic PIV is capable of making measurements over dimensions on the order of 100 mm × 100 mm × 100 mm. The spatial resolution and accuracy of the technique are presented along with examples of 3D velocity data acquired in turbulent boundary layers and supersonic jets. This work was primarily supported through an AFOSR grant.
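The cross-correlation step can be illustrated in miniature: the displacement between two reconstructed particle volumes is the location of the 3D correlation peak. This sketch uses synthetic volumes and FFT-based circular correlation rather than the authors' actual PIV code:

```python
import numpy as np

rng = np.random.default_rng(0)
vol1 = rng.random((32, 32, 32))                         # particle volume at t
vol2 = np.roll(vol1, shift=(2, -1, 3), axis=(0, 1, 2))  # volume at t + dt

# Circular cross-correlation via FFT; the peak location is the displacement.
corr = np.fft.ifftn(np.conj(np.fft.fftn(vol1)) * np.fft.fftn(vol2)).real
peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
shape = np.array(corr.shape)
disp = (peak + shape // 2) % shape - shape // 2         # wrap to signed shifts
print(disp)                                             # -> [ 2 -1  3]
```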
Aliasing Detection and Reduction Scheme on Angularly Undersampled Light Fields.
Xiao, Zhaolin; Wang, Qing; Zhou, Guoqing; Yu, Jingyi
2017-05-01
When using a plenoptic camera for digital refocusing, angular undersampling can cause severe (angular) aliasing artifacts. Previous approaches have focused on avoiding aliasing by pre-processing the acquired light field via prefiltering, demosaicing, reparameterization, and so on. In this paper, we present a different solution that first detects and then removes angular aliasing at the light field refocusing stage. Different from previous frequency-domain aliasing analyses, we carry out a spatial-domain analysis to reveal whether angular aliasing would occur and to uncover where in the image it would occur. The spatial analysis also facilitates easy separation of the aliasing versus non-aliasing regions and angular aliasing removal. Experiments on both synthetic scenes and real light field data sets (camera array and Lytro camera) demonstrate that our approach has a number of advantages over the classical prefiltering and depth-dependent light field rendering techniques.
Airborne multispectral identification of individual cotton plants using consumer-grade cameras
USDA-ARS?s Scientific Manuscript database
Although multispectral remote sensing using consumer-grade cameras has successfully identified fields of small cotton plants, improvements to detection sensitivity are needed to identify individual or small clusters of plants. The imaging sensors of consumer-grade cameras are based on a Bayer patter...
Digital dental photography. Part 6: camera settings.
Ahmad, I
2009-07-25
Once the appropriate camera and equipment have been purchased, the next considerations involve setting up and calibrating the equipment. This article provides details regarding depth of field, exposure, colour spaces and white balance calibration, concluding with a synopsis of camera settings for a standard dental set-up.
Safety evaluation of red-light cameras
DOT National Transportation Integrated Search
2005-04-01
The objective of this final study was to determine the effectiveness of red-light-camera (RLC) systems in reducing crashes. The study used empirical Bayes before-and-after research using data from seven jurisdictions across the United States at 132 t...
15 CFR 742.4 - National security.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Requirements” section except those cameras in ECCN 6A003.b.4.b that have a focal plane array with 111,000 or..., South Korea, Spain, Sweden, Switzerland, Turkey, and the United Kingdom for those cameras in ECCN 6A003...
15 CFR 742.4 - National security.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Requirements” section except those cameras in ECCN 6A003.b.4.b that have a focal plane array with 111,000 or..., South Korea, Spain, Sweden, Switzerland, Turkey, and the United Kingdom for those cameras in ECCN 6A003...
15 CFR 742.4 - National security.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Requirements” section except those cameras in ECCN 6A003.b.4.b that have a focal plane array with 111,000 or..., South Korea, Spain, Sweden, Switzerland, Turkey, and the United Kingdom for those cameras in ECCN 6A003...
15 CFR 742.4 - National security.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Requirements” section except those cameras in ECCN 6A003.b.4.b that have a focal plane array with 111,000 or..., South Korea, Spain, Sweden, Switzerland, Turkey, and the United Kingdom for those cameras in ECCN 6A003...
Towards continuous monitoring of pulse rate in neonatal intensive care unit with a webcam.
Mestha, Lalit K; Kyal, Survi; Xu, Beilei; Lewis, Leslie Edward; Kumar, Vijay
2014-01-01
We describe a novel method to monitor the pulse rate (PR) of patients in a neonatal intensive care unit (NICU) on a continuous basis using videos taken from a high-definition (HD) webcam. We describe algorithms that determine PR from videoplethysmographic (VPG) signals extracted simultaneously from multiple regions of interest (ROI) available within the field of view of the camera where the cardiac signal is registered. We detect motion from video images and compensate for motion artifacts in each ROI. Preliminary clinical results are presented on 8 neonates, each with 30 minutes of uninterrupted video. Comparisons to hospital equipment indicate that the proposed technology can meet medical industry standards and give improved patient comfort and ease of use for practitioners when instrumented with proper hardware.
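In the spirit of the VPG approach above, a pulse rate can be recovered from an ROI's mean green-channel trace by band-pass filtering around plausible heart rates and locating the dominant spectral peak. The trace below is synthetic; real use would read webcam frames, and this is not the authors' specific algorithm:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0                                  # camera frame rate, frames/s
t = np.arange(0, 30, 1 / fs)               # 30 s of video
rng = np.random.default_rng(0)
# Synthetic per-frame ROI mean: 2.5 Hz cardiac component plus noise
roi_green = 0.02 * np.sin(2 * np.pi * 2.5 * t) + 0.01 * rng.standard_normal(t.size)

# Band-pass around plausible neonatal heart rates (60-240 bpm = 1-4 Hz)
b, a = butter(3, [1.0, 4.0], btype="band", fs=fs)
vpg = filtfilt(b, a, roi_green)

freqs = np.fft.rfftfreq(vpg.size, 1 / fs)
pr_bpm = 60.0 * freqs[np.argmax(np.abs(np.fft.rfft(vpg)))]
print(round(pr_bpm))                       # ~150 bpm for this synthetic trace
```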
Lee, Chang Kyu; Kim, Youngjun; Lee, Nam; Kim, Byeongwoo; Kim, Doyoung; Yi, Seong
2017-02-15
A study of the feasibility of commercially available action cameras for recording video of spine surgery. Recent innovations in wearable action cameras with high-definition video recording enable surgeons to use a camera during an operation easily and without high costs. The purpose of this study is to compare the feasibility, safety, and efficacy of commercially available action cameras in recording video of spine surgery. There are early reports of medical professionals using Google Glass throughout the hospital, the Panasonic HX-A100 action camera, and GoPro; this study is the first report for spine surgery. Three commercially available cameras were tested: GoPro Hero 4 Silver, Google Glass, and the Panasonic HX-A100 action camera. A typical spine surgery was selected for video recording: posterior lumbar laminectomy and fusion. The three cameras were used by one surgeon and video was recorded throughout the operation. The comparison considered human factors, specifications, and video quality. The most convenient and lightweight device to wear and hold throughout the long operation was Google Glass. Regarding image quality, all devices except Google Glass supported an HD format, and GoPro uniquely offers 2.7K or 4K resolution; video resolution was best with GoPro. Regarding field of view, GoPro can adjust its point of interest and field of view according to the surgery, and the narrow FOV option was best for recording video clips to share. Google Glass has potential through application programs. Connectivity such as Wi-Fi and Bluetooth enables video streaming for an audience, but only Google Glass has a built-in two-way communication feature. Action cameras have the potential to improve patient safety, operator comfort, and procedure efficiency in the field of spinal surgery, and to broadcast surgery as the devices and their applications develop in the future.
Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry
NASA Technical Reports Server (NTRS)
Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)
2016-01-01
A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine the first positions and the second positions of the corresponding features using the first camera image and the second camera image. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.
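Given matched 3D feature positions in the first and second frames, the change in pose can be estimated with the standard SVD-based (Kabsch) rigid alignment. This sketch uses synthetic points and is a generic illustration, not the patent's specific estimator:

```python
import numpy as np

def rigid_transform(p, q):
    """Find R, t minimizing ||R @ p_i + t - q_i|| over matched point sets."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    h = (p - pc).T @ (q - qc)                # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, qc - r @ pc

rng = np.random.default_rng(1)
first = rng.random((20, 3))                  # feature positions, frame 1
angle = np.radians(5)                        # synthetic 5-degree yaw + shift
r_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
second = first @ r_true.T + np.array([0.3, 0.0, 0.1])

r_est, t_est = rigid_transform(first, second)
print(np.allclose(r_est, r_true), t_est)     # True [0.3 0.  0.1]
```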
Field-Sequential Color Converter
NASA Technical Reports Server (NTRS)
Studer, Victor J.
1989-01-01
Electronic conversion circuit enables display of signals from field-sequential color-television camera on standard color video monitor. Designed for incorporation into color-television monitor on Space Shuttle, circuit weighs less, takes up less space, and consumes less power than previous conversion equipment. Incorporates state-of-the-art memory devices, also used in terrestrial stationary or portable closed-circuit television systems.
NASA Astrophysics Data System (ADS)
Castro Marín, J. M.; Brown, V. J. G.; López Jiménez, A. C.; Rodríguez Gómez, J.; Rodrigo, R.
2001-05-01
The optical, spectroscopic, and infrared remote imaging system (OSIRIS) is an instrument carried on board the European Space Agency spacecraft Rosetta, which will be launched in January 2003 to study the comet Wirtanen in situ. The electronic design of the mechanism controller board (MCB) system for the two OSIRIS optical cameras, the narrow angle camera and the wide angle camera, is described here. The system comprises two boards mounted on an aluminum frame as part of an electronics box that contains the power supply and the digital processor unit of the instrument. The mechanisms controlled by the MCB for each camera are the front door assembly and a filter wheel assembly. The front door assembly for each camera is driven by a four-phase, permanent magnet stepper motor. Each filter wheel assembly consists of two eight-filter wheels. Each wheel is driven by a four-phase, variable reluctance stepper motor. Each motor, for all the assemblies, also contains a redundant set of four stator phase windings that can be energized separately or in parallel with the main windings. All stepper motors are driven in both directions using the full-step unipolar mode of operation. The MCB also performs general housekeeping data acquisition for the OSIRIS instrument, i.e., mechanism position encoders and temperature measurements. The electronic design is notable for its use of field programmable gate array (FPGA) devices, which avoid the now-traditional approach of a system controlled by microcontrollers and software. Electrical tests of the engineering model have been performed successfully and the system is ready for space qualification after environmental testing. This system may be of interest to institutions involved in future space experiments with similar needs for mechanism control.
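For illustration, a four-phase full-step drive reduces to cycling a phase-energization sequence in one direction or the other. The sketch below prints instead of driving hardware; phase names, timing, and the one-phase-on sequence are hypothetical (real full-step drives may energize two phases at a time):

```python
import itertools
import time

# One phase energized per step (wave/full-step drive); reversing the
# sequence reverses the direction of rotation.
FULL_STEP = ["A", "B", "C", "D"]

def step(n_steps, forward=True, delay_s=0.002):
    seq = FULL_STEP if forward else FULL_STEP[::-1]
    for phase in itertools.islice(itertools.cycle(seq), n_steps):
        print(f"energize phase {phase}")    # placeholder for a driver write
        time.sleep(delay_s)

step(8)                  # drive forward
step(8, forward=False)   # reverse direction
```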
Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C
2012-01-01
Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.
A direct-view customer-oriented digital holographic camera
NASA Astrophysics Data System (ADS)
Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.
2018-01-01
In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as a camera sensor and objective, and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.
A compact neutron scatter camera for field deployment
Goldsmith, John E. M.; Gerling, Mark D.; Brennan, James S.
2016-08-23
Here, we describe a very compact (0.9 m high, 0.4 m diameter, 40 kg) battery-operable neutron scatter camera designed for field deployment. Unlike most other systems, the configuration of the sixteen liquid-scintillator detection cells is arranged to provide omnidirectional (4π) imaging with sensitivity comparable to a conventional two-plane system. Although designed primarily to operate as a neutron scatter camera for localizing energetic neutron sources, it also functions as a Compton camera for localizing gamma sources. In addition to describing the radionuclide source localization capabilities of this system, we demonstrate how it provides neutron spectra that can distinguish plutonium metal from plutonium oxide sources, in addition to the easier task of distinguishing AmBe from fission sources.
STS-31 MS Sullivan and Pilot Bolden monitor SE 82-16 Ion Arc on OV-103 middeck
NASA Technical Reports Server (NTRS)
1990-01-01
STS-31 Mission Specialist (MS) Kathryn D. Sullivan monitors and advises ground controllers of the activity inside the Student Experiment (SE) 82-16, Ion arc - studies of the effects of microgravity and a magnetic field on an electric arc, mounted in front of the middeck lockers aboard Discovery, Orbiter Vehicle (OV) 103. Pilot Charles F. Bolden uses a video camera and an ARRIFLEX motion picture camera to record the activity inside the special chamber. A sign in front of the experiment reads 'SSIP 82-16 Greg's Experiment Happy Graduation from STS-31.' SSIP stands for Shuttle Student Involvement Program. Gregory S. Peterson who developed the experiment (Greg's Experiment) is a student at Utah State University and monitored the experiment's operation from JSC's Mission Control Center (MCC) during the flight. Decals displayed in the background on the orbiter galley represent the Hubble Space Telescope (HST), the United States (U.S.) Naval Reserve, Navy Oceanographers, U.S. Navy, and Univer
NASA Astrophysics Data System (ADS)
Luquet, Ph.; Chikouche, A.; Benbouzid, A. B.; Arnoux, J. J.; Chinal, E.; Massol, C.; Rouchit, P.; De Zotti, S.
2017-11-01
EADS Astrium is currently developing a new product line of compact and versatile instruments for high resolution Earth observation missions. The first version has been developed in the frame of the ALSAT-2 contract awarded to EADS Astrium by the Algerian Space Agency (ASAL). The Silicon Carbide Korsch-type telescope coupled with a multiline detector array offers a 2.5 m GSD in the PAN band at nadir from 680 km altitude (10 m GSD in the four multispectral bands) with a 17.5 km swath width. This compact camera - 340 (W) x 460 (L) x 510 (H) mm, 13 kg - is carried on a Myriade-type small platform. The electronics unit accommodates video, housekeeping, and thermal control functions as well as a 64 Gbit mass memory. Two satellites are being developed; the first is planned for launch in mid-2009.
Fishery research in the Great Lakes using a low-cost remotely operated vehicle
Kennedy, Gregory W.; Brown, Charles L.; Argyle, Ray L.
1988-01-01
We used a MiniROVER MK II remotely operated vehicle (ROV) to collect ground-truth information on fish and their habitat in the Great Lakes that has traditionally been collected by divers, static cameras, or submersibles. The ROV, powered by 4 thrusters and controlled by a pilot at the surface, was portable and efficient to operate throughout the Great Lakes in 1987, and a total of 30 h of video data was recorded for later analysis. We collected 50% more substrate information per unit of effort with the ROV than with static cameras. Fish behavior ranged from no avoidance reaction in ambient light to erratic responses in the vehicle lights. The ROV's field of view depended on the time of day, light levels, and density of zooplankton. Quantification of the data collected with the ROV (either physical samples or video image data) will serve to enhance the use of the ROV as a tool for fishery research on the Great Lakes.
Orbital Debris Quarterly News, Volume 13, No. 3
NASA Technical Reports Server (NTRS)
Liou, J.-C. (Editor); Shoots, Debi (Editor)
2009-01-01
This issue of the Orbital Debris Quarterly contains articles on the congressional hearing that was held on orbital debris and space traffic; the update received by the United Nations Committee on the Peaceful Uses of Outer Space (COPUOS) on the collision of the Iridium 33 and Cosmos 2251 satellites; the micrometeoroid and orbital debris (MMOD) inspection of the Hubble Space Telescope Wide Field Planetary Camera; an analysis of the reentry survivability of the Global Precipitation Measurement (GPM) spacecraft; an update on recent major breakup fragments; and a graph showing the current debris environment in low Earth orbit.
A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming
2018-06-01
This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one is called the tracking camera, and the other is called the working camera. The tracking camera is used to track the positions of the MBVS and the working camera is used for the 3D reconstruction task. The MBVS has several advantages compared with a single moving camera or multi-camera networks. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the mobility of the system guarantees appropriate baselines, supplying more robust point correspondences. Additionally, using one camera avoids a drawback of multi-camera networks, namely that variability in the cameras' parameters and performance can significantly affect the accuracy and robustness of the feature extraction and stereo matching methods. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.
NASA Astrophysics Data System (ADS)
Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling
2018-06-01
Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or narrow overlapping FOVs in many applications, which poses a huge challenge to global calibration. This paper presents a global calibration method for multiple cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras' coordinate systems are calculated at the same time and optimized by the Levenberg-Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
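The refinement step described (minimizing reprojection errors with Levenberg-Marquardt) can be sketched with scipy. The projection model, small-angle rotation parameterization, and data below are toy stand-ins for the paper's full formulation:

```python
import numpy as np
from scipy.optimize import least_squares

def project(points_3d):
    return points_3d[:, :2] / points_3d[:, 2:3]   # ideal pinhole projection

def residuals(params, pts_cam1, obs_cam2):
    rx, ry, rz, tx, ty, tz = params
    # Small-angle rotation approximation keeps the sketch short.
    r = np.array([[1, -rz, ry], [rz, 1, -rx], [-ry, rx, 1]])
    pts_cam2 = pts_cam1 @ r.T + np.array([tx, ty, tz])
    return (project(pts_cam2) - obs_cam2).ravel()

rng = np.random.default_rng(2)
pts = rng.random((30, 3)) + [0, 0, 5]             # target points, camera-1 frame
true = np.array([0.01, -0.02, 0.005, 0.1, 0.0, 0.2])
r_true = np.array([[1, -true[2], true[1]],
                   [true[2], 1, -true[0]],
                   [-true[1], true[0], 1]])
obs = project(pts @ r_true.T + true[3:])          # observations in camera 2

fit = least_squares(residuals, np.zeros(6), args=(pts, obs), method="lm")
print(np.round(fit.x, 4))                         # recovers the true parameters
```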
Harbour surveillance with cameras calibrated with AIS data
NASA Astrophysics Data System (ADS)
Palmieri, F. A. N.; Castaldo, F.; Marino, G.
The inexpensive availability of surveillance cameras, easily connected in network configurations, suggests the deployment of this additional sensor modality in port surveillance. Vessels appearing within the cameras' fields of view can be recognized and localized, providing fusion centers with information that can be added to data coming from Radar, Lidar, AIS, etc. Camera systems used as localizers, however, must be properly calibrated in changing scenarios, where there is often limited choice in where they can be deployed. Automatic Identification System (AIS) data, which includes position, course and vessel identity, is freely available through inexpensive receivers for some of the vessels appearing within the field of view, and provides the opportunity to achieve proper camera calibration for the localization of vessels not equipped with AIS transponders. In this paper we assume a pinhole model for the camera geometry and propose computing perspective matrices from AIS positional data. Images obtained from calibrated cameras are then matched, and pixel associations are utilized to localize other vessels. We report preliminary experimental results of calibration and localization using two cameras deployed on the Gulf of Naples coastline. The two cameras overlook a section of the harbour and record short video sequences that are synchronized offline with AIS positional information of easily identified passenger ships. Other small vessels, not equipped with AIS transponders, are localized using the camera matrices and pixel matching. Localization accuracy is experimentally evaluated as a function of target distance from the sensors.
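Recovering a pinhole projection matrix from AIS-derived world positions and their image coordinates is essentially a direct linear transform (DLT). A sketch with synthetic correspondences standing in for AIS ship positions:

```python
import numpy as np

def dlt(world, pixels):
    """Solve for P (3x4) such that pixels ~ P @ [world, 1] (homogeneous)."""
    rows = []
    for (x, y, z), (u, v) in zip(world, pixels):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u*x, -u*y, -u*z, -u])
        rows.append([0, 0, 0, 0, x, y, z, 1, -v*x, -v*y, -v*z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)              # null vector = flattened P

def project(p, world):
    h = p @ np.column_stack([world, np.ones(len(world))]).T
    return (h[:2] / h[2]).T

rng = np.random.default_rng(3)
p_true = rng.random((3, 4))                  # unknown camera matrix
world = rng.random((12, 3)) * 100            # e.g. AIS-reported positions
pixels = project(p_true, world)              # their observed image points

p_est = dlt(world, pixels)
print(np.allclose(project(p_est, world), pixels))   # True
```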
Pre-flight and On-orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Camera
NASA Astrophysics Data System (ADS)
Speyerer, E. J.; Wagner, R. V.; Robinson, M. S.; Licht, A.; Thomas, P. C.; Becker, K.; Anderson, J.; Brylow, S. M.; Humm, D. C.; Tschimmel, M.
2016-04-01
The Lunar Reconnaissance Orbiter Camera (LROC) consists of two imaging systems that provide multispectral and high resolution imaging of the lunar surface. The Wide Angle Camera (WAC) is a seven color push-frame imager with a 90° field of view in monochrome mode and 60° field of view in color mode. From the nominal 50 km polar orbit, the WAC acquires images with a nadir ground sampling distance of 75 m for each of the five visible bands and 384 m for the two ultraviolet bands. The Narrow Angle Camera (NAC) consists of two identical cameras capable of acquiring images with a ground sampling distance of 0.5 m from an altitude of 50 km. The LROC team geometrically calibrated each camera before launch at Malin Space Science Systems in San Diego, California, and the resulting measurements enabled the generation of a detailed camera model for all three cameras. The cameras were mounted and subsequently launched on the Lunar Reconnaissance Orbiter (LRO) on 18 June 2009. Using a subset of the over 793,000 NAC and 207,000 WAC images of illuminated terrain collected between 30 June 2009 and 15 December 2013, we improved the interior and exterior orientation parameters for each camera, including the addition of a wavelength-dependent radial distortion model for the multispectral WAC. These geometric refinements, along with refined ephemeris, enable seamless projections of NAC image pairs with a geodetic accuracy better than 20 meters and sub-pixel precision and accuracy when orthorectifying WAC images.
3D reconstruction based on light field images
NASA Astrophysics Data System (ADS)
Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei
2018-04-01
This paper proposes a method for reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum. The work is carried out by first extracting the sub-aperture images from the light field images and using the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. The structure from motion (SFM) algorithm is then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a sparse 3D point cloud. The method shows that 3D reconstruction can be implemented with only two light field camera captures, rather than the dozen or more captures required by traditional cameras. This effectively addresses the time-consuming and laborious nature of 3D reconstruction based on traditional digital cameras, achieving a more rapid, convenient and accurate reconstruction.
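A generic two-view version of this pipeline can be written with OpenCV: SIFT matching between sub-aperture images, essential-matrix pose recovery, and triangulation into a sparse cloud. File names and the intrinsic matrix K below are assumptions, not values from the paper:

```python
import cv2
import numpy as np

# Assumed inputs: two sub-aperture images and a calibrated intrinsic matrix.
img1 = cv2.imread("subaperture_1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("subaperture_2.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

# SIFT feature detection and cross-checked brute-force matching
sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

# Relative pose from the essential matrix, then triangulation
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T          # sparse 3D point cloud
print(cloud.shape)
```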
Metrological analysis of the human foot: 3D multisensor exploration
NASA Astrophysics Data System (ADS)
Muñoz Potosi, A.; Meneses Fonseca, J.; León Téllez, J.
2011-08-01
In the podiatry field, many foot dysfunctions arise mainly from congenital malformations, accidents, or unsuitable footwear. For the treatment or prevention of foot disorders, the podiatrist prescribes prostheses or specifically adapted footwear according to the real dimensions of the foot. It is therefore necessary to acquire 3D information of the foot with 360 degrees of observation. As a solution, an optical three-dimensional reconstruction system based on the principle of laser triangulation was developed and implemented. The system consists of an illumination unit that projects a laser plane onto the foot surface, an acquisition unit with 4 CCD cameras placed around the axial foot axis, an axial moving unit that displaces the illumination and acquisition units along the axial direction, and a processing and exploration unit. The exploration software allows the extraction of distances from the three-dimensional image, taking into account the topography of the foot. The optical system was tested and its metrological performance was evaluated under experimental conditions. The optical system was developed to acquire 3D information in order to design and make more appropriate footwear.
NASA Technical Reports Server (NTRS)
Barry, R. K.; Satyapal, S.; Greenhouse, M. A.; Barclay, R.; Amato, D.; Arritt, B.; Brown, G.; Harvey, V.; Holt, C.; Kuhn, J.
2000-01-01
We discuss work in progress on a near-infrared tunable bandpass filter for the Goddard baseline wide field camera concept of the Next Generation Space Telescope (NGST) Integrated Science Instrument Module (ISIM). This filter, the Demonstration Unit for Low Order Cryogenic Etalon (DULCE), is designed to demonstrate a high-efficiency scanning Fabry-Perot etalon operating in interference orders 1-4 at 30 K with a high-stability DSP-based servo control system. DULCE is currently the only available tunable filter for low-order cryogenic operation in the near infrared. In this application, scanning etalons illuminate the focal plane arrays with a single order of interference to enable wide-field, lower-resolution hyperspectral imaging over a wide range of redshifts. We discuss why tunable filters are an important instrument component in future space-based observatories.
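For context, the transmission of an ideal Fabry-Perot etalon such as the one DULCE scans follows the Airy function; the sketch below is illustrative, with an assumed reflectivity and gap, and is not the instrument's actual servo or optical model.

```python
import numpy as np

def etalon_transmission(wavelength_nm, gap_nm, R=0.9, n=1.0, theta=0.0):
    """Airy transmission of an ideal lossless etalon of reflectivity R."""
    delta = 4.0 * np.pi * n * gap_nm * np.cos(theta) / wavelength_nm
    F = 4.0 * R / (1.0 - R) ** 2            # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(delta / 2.0) ** 2)

# Peaks sit where 2*n*gap = m*wavelength: a 2000 nm gap (assumed) passes
# 4000 nm light in order m = 1 and 2000 nm light in order m = 2.
wl = np.linspace(1000.0, 5000.0, 5)
print(etalon_transmission(wl, gap_nm=2000.0))
```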
Development of a slicer integral field unit for the existing optical imaging spectrograph FOCAS
NASA Astrophysics Data System (ADS)
Ozaki, Shinobu; Tanaka, Yoko; Hattori, Takashi; Mitsui, Kenji; Fukusima, Mitsuhiro; Okada, Norio; Obuchi, Yoshiyuki; Miyazaki, Satoshi; Yamashita, Takuya
2012-09-01
We are developing an integral field unit (IFU) with an image slicer for the existing optical imaging spectrograph, the Faint Object Camera And Spectrograph (FOCAS), on the Subaru Telescope. The basic optical design has already been finished. The slice width is 0.4 arcsec, the slice number is 24, and the field of view is 13.5 × 9.6 arcsec. Sky spectra separated by about 3 arcmin from the object field can be obtained simultaneously, which allows precise background subtraction. The IFU will be installed as a mask plate and set by the mask exchanger mechanism of FOCAS. Slice mirrors, pupil mirrors, and slit mirrors are all made of glass, and their mirror surfaces are fabricated by polishing. A multilayer dielectric reflective coating with high reflectivity (> 98%) is applied to each mirror surface. The slicer IFU consists of many mirrors which need to be aligned with high accuracy; for such alignment, we will fabricate high-accuracy alignment jigs and mirror holders. Some pupil mirrors need off-axis ellipsoidal surfaces to reduce aberration. We are conducting prototyping work including slice mirrors, an off-axis ellipsoidal surface, alignment jigs, and a mirror support. In this paper, we introduce our project and show this prototyping work.
Homography-based multiple-camera person-tracking
NASA Astrophysics Data System (ADS)
Turk, Matthew R.
2009-01-01
Multiple video cameras can be cheaply installed to overlook an area of interest. While computerized single-camera tracking is well developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target; this is called the consistent labelling problem. Khan and Shah (2003) introduced a method that uses field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also noted that a plane-induced homography could be used for tracking, but this method was not well described, and their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding box would produce inaccurate results; the new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified, and the algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works even when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras; aside from that, the algorithm is fully automatic and uses the natural movement of live targets for training. No calibration is required. Testing shows that the algorithm performs very well in real-world sequences. The consistent labelling problem is solved, even for targets that appear via in-scene entrances, and full occlusions are handled. Although implemented in Matlab, the multiple-camera tracking system runs at eight frames per second; a faster implementation would be suitable for real-world use at typical video frame rates.
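The core homography step can be sketched as follows, assuming the corresponding feet points "dropped" by associated targets are already available (the coordinates here are hypothetical); RANSAC guards against outlier or degenerate pairs before labels are transferred between views.

```python
import cv2
import numpy as np

# Feet locations of the same targets observed in two overlapping views
# (hypothetical coordinates; in the paper these pairs come from live
# targets associated via the field-of-view-line method).
pts_a = np.float32([[102, 310], [240, 295], [331, 402], [418, 350],
                    [150, 420], [290, 380], [370, 310], [210, 340]])
pts_b = np.float32([[ 80, 122], [205, 130], [300, 240], [390, 180],
                    [130, 250], [260, 210], [345, 140], [180, 170]])

# Robustly estimate the ground-plane-induced homography
H, inliers = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)

def transfer(H, pt):
    """Project a ground-plane point from camera A into camera B."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Consistent labelling: a track in A keeps its label in B when its
# transferred feet point lands near an existing track in B.
print(transfer(H, (102, 310)))
```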
Compensation for positioning error of industrial robot for flexible vision measuring system
NASA Astrophysics Data System (ADS)
Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui
2013-01-01
The positioning error of the robot is a major factor limiting the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a vision sensor. Present compensation methods for positioning error based on the kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for the positioning error of the robot based on vision measuring techniques is presented. One approach sets global control points in the measured field and attaches an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach sets control points on the vision sensor and places two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured and the pose and position of the sensor are calculated in real time. Experimental results show an RMS spatial positioning error of 3.422 mm for the single-camera approach and 0.031 mm for the dual-camera approach. It is concluded that the single-camera algorithm needs improvement to achieve higher accuracy, while the accuracy of the dual-camera method is suitable for application.
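A hedged sketch of the coordinate-transfer idea common to both approaches: estimating the rigid transform that maps measured control points into the global coordinate system. This uses the standard Kabsch/SVD solution and is not necessarily the authors' exact algorithm; the point values are hypothetical.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t such that dst ~= (R @ src.T).T + t (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                  # proper rotation (det = +1)
    return R, cd - R @ cs

# Hypothetical control points: measured in the sensor frame vs. their
# known global coordinates.
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
dst = (R_true @ src.T).T + np.array([10., 5., 2.])
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), t)        # True [10.  5.  2.]
```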
Hubble Sees a Legion of Galaxies
2017-12-08
Peering deep into the early universe, this picturesque parallel field observation from the NASA/ESA Hubble Space Telescope reveals thousands of colorful galaxies swimming in the inky blackness of space. A few foreground stars from our own galaxy, the Milky Way, are also visible. In October 2013 Hubble’s Wide Field Camera 3 (WFC3) and Advanced Camera for Surveys (ACS) began observing this portion of sky as part of the Frontier Fields program. This spectacular skyscape was captured during the study of the giant galaxy cluster Abell 2744, otherwise known as Pandora’s Box. While one of Hubble’s cameras concentrated on Abell 2744, the other camera viewed this adjacent patch of sky near to the cluster. Containing countless galaxies of various ages, shapes and sizes, this parallel field observation is nearly as deep as the Hubble Ultra-Deep Field. In addition to showcasing the stunning beauty of the deep universe in incredible detail, this parallel field — when compared to other deep fields — will help astronomers understand how similar the universe looks in different directions. Image credit: NASA, ESA and the HST Frontier Fields team (STScI).
Gamma Ray Burst Optical Counterpart Search Experiment (GROCSE)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, H.S.; Ables, E.; Bionta, R.M.
GROCSE (Gamma-Ray Optical Counterpart Search Experiments) is a system of automated telescopes that search for simultaneous optical activity associated with gamma-ray bursts in response to real-time burst notifications provided by the BATSE/BACODINE network. The first generation system, GROCSE 1, is sensitive down to Mv ≈ 8.5 and requires an average of 12 seconds to obtain the first images of the gamma-ray burst error box defined by the BACODINE trigger. The collaboration is now constructing a second generation system which has a 4 second slewing time and can reach Mv ≈ 14 with a 5 second exposure. GROCSE 2 consists of 4 cameras on a single mount. Each camera views the night sky through a commercial Canon lens (f/1.8, focal length 200 mm) and utilizes a 2K x 2K Loral CCD. Lightweight, low-noise custom readout electronics were designed and fabricated for these CCDs. The total field of view of the 4 cameras is 17.6° x 17.6°. GROCSE 2 will be operational by the end of 1995. In this paper, the authors present an overview of the GROCSE system and the results of measurements with a GROCSE 2 prototype unit.
Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.
Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K
2014-02-01
Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution.
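A toy sketch of the measurement model the paper builds on, under the stated hardware constraint of a single contiguous exposure "bump" per pixel per frame; the volume, bump length, and mask below are synthetic assumptions.

```python
import numpy as np

T, H, W = 9, 64, 64
video = np.random.rand(T, H, W)          # hypothetical space-time volume
bump = 3                                 # per-pixel exposure length

# One contiguous "on" interval per pixel, reflecting the constraint that
# each sensor pixel can be exposed only once per frame.
starts = np.random.randint(0, T - bump + 1, size=(H, W))
mask = np.zeros((T, H, W), dtype=bool)
rows = np.arange(H)[:, None]
cols = np.arange(W)[None, :]
for dt in range(bump):
    mask[starts + dt, rows, cols] = True

coded_image = (video * mask).sum(axis=0)  # the single captured frame
# Reconstruction would then seek video patches that are sparse in a
# learned overcomplete dictionary and consistent with coded_image.
```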
Snyder, Keirith A; Wehan, Bryce L; Filippa, Gianluca; Huntington, Justin L; Stringham, Tamzen K; Snyder, Devon K
2016-11-18
Plant phenology is recognized as important for ecological dynamics. Phenology and camera networks have recently been established worldwide; the PhenoCam Network has sites across the United States, including the western states. However, there is a paucity of published research from semi-arid regions. In this study, we demonstrate the utility of camera-based repeat digital imagery and the R statistical package phenopix to quantify plant phenology and phenophases in four plant communities in the semi-arid cold desert region of the Great Basin. We developed an automated variable snow/night filter for removing ephemeral snow events, which allowed fitting of phenophases with a double logistic algorithm. We were able to detect low-amplitude seasonal variation in pinyon and juniper canopies and sagebrush steppe, and to characterize wet and mesic meadows in area-averaged analyses. We used individual pixel-based spatial analyses to separate sagebrush shrub canopy pixels from interspace by determining differences in the phenophases of sagebrush relative to interspace. The ability to monitor plant phenology with camera-based images fills spatial and temporal gaps in remotely sensed data and field-based surveys, allowing species-level relationships between environmental variables and phenology to be developed on a fine time scale, thus providing powerful new tools for land management.
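For readers unfamiliar with the phenopix-style fit, a double logistic of this general form can be fitted with SciPy; the parameter names and the synthetic green-chromatic-coordinate series below are illustrative assumptions, not the package's internals.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, base, amp, sos, rsp, eos, rau):
    """Baseline plus a spring rise (sos, rsp) and autumn fall (eos, rau)."""
    return base + amp * (1 / (1 + np.exp(-rsp * (t - sos)))
                         - 1 / (1 + np.exp(-rau * (t - eos))))

doy = np.arange(1, 366, dtype=float)
gcc = double_logistic(doy, 0.32, 0.08, 120, 0.08, 280, 0.06)
gcc += np.random.normal(0, 0.004, doy.size)      # camera noise

p0 = [0.3, 0.1, 110, 0.05, 270, 0.05]            # rough initial guess
params, _ = curve_fit(double_logistic, doy, gcc, p0=p0)
print(dict(zip(["base", "amp", "sos", "rsp", "eos", "rau"],
               np.round(params, 3))))
```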
NASA Astrophysics Data System (ADS)
Harvey, Nate
2016-08-01
Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
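The inter-camera quaternion analyzed here can be sketched as the relative rotation between the two star camera attitudes, which should be constant up to noise; the quaternion convention and example values below are assumptions for illustration.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product, scalar-first convention [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def inter_camera(q_a, q_b):
    """Relative rotation q_ab = conj(q_a) * q_b between the two heads."""
    conj_a = q_a * np.array([1.0, -1.0, -1.0, -1.0])
    return qmul(conj_a, q_b)

q_a = np.array([1.0, 0.0, 0.0, 0.0])                    # identity attitude
q_b = np.array([np.cos(0.05), np.sin(0.05), 0.0, 0.0])  # small offset
print(inter_camera(q_a, q_b))  # its variation over time is the error signal
```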
Feature-Based Approach for the Registration of Pushbroom Imagery with Existing Orthophotos
NASA Astrophysics Data System (ADS)
Xiong, Weifeng
Low-cost Unmanned Airborne Vehicles (UAVs) are rapidly becoming suitable platforms for acquiring remote sensing data for a wide range of applications. For example, a UAV-based mobile mapping system (MMS) is emerging as a novel phenotyping tool that delivers several advantages alleviating the drawbacks of conventional manual plant trait measurements. Moreover, UAVs equipped with directly geo-referenced frame cameras and pushbroom scanners can acquire geospatial data for comprehensive high-throughput phenotyping. UAV mobile mapping platforms are low-cost and easy to use, can fly closer to the objects, and fill an important gap between ground wheel-based and traditional manned-airborne platforms. However, consumer-grade UAVs are capable of carrying only a relatively light payload, and their flying time is limited by battery life. These restrictions unfortunately force potential users to adopt lower-quality direct geo-referencing and imaging systems that may negatively impact the quality of the deliverables. Recent advances in sensor calibration and automated triangulation have made it feasible to obtain accurate mapping using low-cost camera systems equipped with consumer-grade GNSS/INS units. However, ortho-rectification of the data from a linear-array scanner is challenging for low-cost UAV systems, because the derived geo-location information from pushbroom sensors is quite sensitive to the performance of the implemented direct geo-referencing unit. This thesis presents a novel approach for improving the ortho-rectification of hyperspectral pushbroom scanner imagery with the aid of orthophotos generated from frame cameras, through the identification of conjugate features while modeling the impact of residual artifacts in the direct geo-referencing information. The experimental results qualitatively and quantitatively proved the feasibility of the proposed methodology in improving the geo-referencing accuracy of real datasets collected over an agricultural field.
STS-61 Space Shuttle mission report
NASA Technical Reports Server (NTRS)
Fricke, Robert W., Jr.
1994-01-01
The STS-61 Space Shuttle Program Mission Report summarizes the Hubble Space Telescope (HST) servicing mission as well as the Orbiter, External Tank (ET), Solid Rocket Booster (SRB), Redesigned Solid Rocket Motor (RSRM), and the Space Shuttle main engine (SSME) systems performance during the fifty-ninth flight of the Space Shuttle Program and fifth flight of the Orbiter vehicle Endeavour (OV-105). In addition to the Orbiter, the flight vehicle consisted of an ET designated as ET-60; three SSME's which were designated as serial numbers 2019, 2033, and 2017 in positions 1, 2, and 3, respectively; and two SRB's which were designated BI-063. The RSRM's that were installed in each SRB were designated as 360L023A (lightweight) for the left SRB, and 360L023B (lightweight) for the right SRB. This STS-61 Space Shuttle Program Mission Report fulfills the Space Shuttle Program requirement as documented in NSTS 07700, Volume 8, Appendix E. That document requires that each major organizational element supporting the Program report the results of its hardware evaluation and mission performance plus identify all related in-flight anomalies. The primary objective of the STS-61 mission was to perform the first on-orbit servicing of the Hubble Space Telescope. The servicing tasks included the installation of new solar arrays, replacement of the Wide Field/Planetary Camera I (WF/PC I) with WF/PC II, replacement of the High Speed Photometer (HSP) with the Corrective Optics Space Telescope Axial Replacement (COSTAR), replacement of rate sensing units (RSU's) and electronic control units (ECU's), installation of new magnetic sensing systems and fuse plugs, and the repair of the Goddard High Resolution Spectrometer (GHRS). Secondary objectives were to perform the requirements of the IMAX Cargo Bay Camera (ICBC), the IMAX Camera, and the Air Force Maui Optical Site (AMOS) Calibration Test.
A Precision Metrology System for the Hubble Space Telescope Wide Field Camera 3 Instrument
NASA Technical Reports Server (NTRS)
Toland, Ronald W.
2003-01-01
The Wide Field Camera 3 (WFC3) instrument for the Hubble Space Telescope (HST) will replace the current Wide Field and Planetary Camera 2 (WFPC2). By providing higher throughput and sensitivity than WFPC2, and operating from the near-IR to the near-UV, WFC3 will once again bring the performance of HST above that of ground-based observatories. Crucial to the integration of the WFC3 optical bench is a pair of 2-axis cathetometers used to view targets which cannot be seen by other means when the bench is loaded into its enclosure. The setup and calibration of these cathetometers are described, along with results from a comparison of the cathetometer system with other metrology techniques.
VizieR Online Data Catalog: UKIDSS-DR7 Large Area Survey (Lawrence+ 2011)
NASA Astrophysics Data System (ADS)
UKIDSS Consortium
2012-03-01
The UKIRT Infrared Deep Sky Survey (UKIDSS) is a large-scale near-IR survey whose aim is to cover 7500 square degrees of the Northern sky. The survey is carried out using the Wide Field Camera (WFCAM), with a field of view of 0.21 square degrees, mounted on the 3.8m United Kingdom Infra-red Telescope (UKIRT) in Hawaii. The Large Area Survey (LAS) covers an area of 4000 square degrees at high Galactic latitudes (extragalactic) in the four bands Y(1.0um), J(1.2um), H(1.6um), and K(2.2um) to a depth of K = 18.4. Details of the survey can be found in the paper by Lawrence et al. (2007MNRAS.379.1599L) (1 data file).
NASA Astrophysics Data System (ADS)
Salyer, Terry
2017-06-01
For the bulk of detonation performance experiments, a fairly basic set of diagnostic techniques has evolved as the standard for acquiring the necessary measurements. Gold-standard techniques such as pin switches and streak cameras still produce the high-quality data required, yet much room remains for improvement with regard to ease of use, cost of fielding, breadth of data, and diagnostic versatility. Over the past several years, an alternate set of diagnostics has been under development to replace many of these traditional techniques. Pulse Correlation Reflectometry (PCR) is a capable substitute for pin switches, with the advantage of obtaining orders of magnitude more data at a small fraction of the cost and fielding time. Spectrally Encoded Imaging (SEI) can replace most applications of streak cameras, with the advantage of imaging surfaces through a single optical fiber that are otherwise optically inaccessible. Such diagnostics advance the measurement state of the art, but even further improvements may come through revamping the standardized tests themselves, such as the copper cylinder expansion test. At the core of this modernization, the aforementioned diagnostics play a significant role in revamping and improving the standard test suite for the present era. This research was performed under the auspices of the United States Department of Energy.
Bore-sight calibration of the profile laser scanner using a large size exterior calibration field
NASA Astrophysics Data System (ADS)
Koska, Bronislav; Křemen, Tomáš; Štroner, Martin
2014-10-01
The bore-sight calibration procedure and results for a profile laser scanner using a large exterior calibration field are presented in this paper. The task is part of the Autonomous Mapping Airship (AMA) project, which aims to create a surveying system with specific properties suitable for effective surveying of medium-sized areas (units to tens of square kilometers per day). As is obvious from the project name, an airship is used as a carrier. This vehicle has some specific properties, the most important being its high carrying capacity (15 kg), long flight time (3 hours), high operating safety, and special flight characteristics such as stability of flight, in terms of vibrations, and the possibility of flying at low speed. The high carrying capacity enables the use of high-quality sensors such as the professional infrared (IR) camera FLIR SC645, a high-end visible spectrum (VIS) digital camera and optics, the tactical-grade INS/GPS sensor iMAR iTraceRT-F200, and the profile laser scanner SICK LD-LRS1000. The calibration method is based on direct laboratory measurement of the coordinate offsets (lever-arm) and in-flight determination of the rotation offsets (bore-sights). The bore-sight determination is based on minimizing the squared distances of individual points from measured planar surfaces.
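A hedged sketch of the objective behind such a bore-sight estimation: scan points transformed with candidate bore-sight angles should lie on the surveyed planar surfaces, so the sum of squared point-plane distances is minimized. The structure below is a simplification (the lever-arm offset is omitted), not the authors' exact formulation.

```python
import numpy as np

def rot(rx, ry, rz):
    """Bore-sight rotation matrix, XYZ order."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def cost(angles, scans, poses, plane_n, plane_d):
    """Sum of squared point-plane distances after applying the candidate
    bore-sight B; scans are Nx3 scanner-frame points, poses are the
    INS-derived (R, t) of the carrier for each scan line."""
    B = rot(*angles)
    total = 0.0
    for pts, (R, t) in zip(scans, poses):
        world = (R @ (B @ pts.T)).T + t   # lever-arm omitted for brevity
        total += np.sum((world @ plane_n + plane_d) ** 2)
    return total
# This scalar cost can be minimized with e.g. scipy.optimize.minimize.
```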
Transient full-field vibration measurement using spectroscopical stereo photogrammetry.
Yue, Kaiduan; Li, Zhongke; Zhang, Ming; Chen, Shan
2010-12-20
In contrast to other vibration measurement methods, a novel spectroscopical photogrammetric approach is proposed. Two colored light filters and a CCD color camera are used to perform the function of two traditional cameras. A new calibration method is then presented; it focuses on the vibrating object rather than the camera and offers higher accuracy than traditional camera calibration. The test results show an accuracy of 0.02 mm.
Explosive Transient Camera (ETC) Program
1991-10-01
[Figure/OCR residue: a block diagram showing the CCD clocking unit and "upstairs" electronics, which digitize the video and transmit digital video and status information to the "downstairs" system; the clocking unit and regulator/driver board are the only CCD-dependent boards. The remaining reference fragments are unrecoverable.]
Retinal fundus imaging with a plenoptic sensor
NASA Astrophysics Data System (ADS)
Thurin, Brice; Bloch, Edward; Nousias, Sotiris; Ourselin, Sebastien; Keane, Pearse; Bergeles, Christos
2018-02-01
Vitreoretinal surgery is moving towards 3D visualization of the surgical field. This requires an acquisition system capable of recording such 3D information. We propose a proof-of-concept imaging system based on a light-field camera in which an array of micro-lenses is placed in front of a conventional sensor. With a single snapshot, a stack of images focused at different depths is produced on the fly, which provides enhanced depth perception for the surgeon. Difficulty in depth localization of features and frequent focus changes during surgery make current vitreoretinal heads-up surgical imaging systems cumbersome to use. To improve depth perception and eliminate the need to manually refocus on the instruments during surgery, we designed and implemented a proof-of-concept ophthalmoscope equipped with a commercial light-field camera. The sensor of our camera is composed of an array of micro-lenses that projects an array of overlapping micro-images. We show that with a single light-field snapshot we can digitally refocus between the retina and a tool located in front of the retina, or display an extended depth-of-field image in which everything is in focus. The design and system performance of the plenoptic fundus camera are detailed. We conclude by showing in vivo data recorded with our device.
Radiometric calibration of wide-field camera system with an application in astronomy
NASA Astrophysics Data System (ADS)
Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika
2017-09-01
The camera response function (CRF) is widely used to describe the relationship between scene radiance and image brightness. The most common application of the CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately of little use with astronomical image data, mostly due to its nature (blur, noise, and long exposures). We therefore propose an optimization of selected methods for use in an astronomical imaging application. Results are experimentally verified on a wide-field camera system using a Digital Single Lens Reflex (DSLR) camera.
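As one concrete example of the algorithm family surveyed, OpenCV's Debevec calibration recovers a CRF from an exposure stack; the file names and exposure times below are hypothetical.

```python
import cv2
import numpy as np

files = ["exp_0p01s.png", "exp_0p1s.png", "exp_1s.png"]   # hypothetical
times = np.array([0.01, 0.1, 1.0], dtype=np.float32)       # seconds
images = [cv2.imread(f) for f in files]

calibrate = cv2.createCalibrateDebevec()
crf = calibrate.process(images, times)      # 256x1x3 response curve

# The recovered CRF then feeds radiance-map (HDR) reconstruction:
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, crf)
```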
Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.
Bulczak, David; Lambers, Martin; Kolb, Andreas
2017-12-22
In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reference distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data.
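The AMCW measurement principle the simulator models can be sketched with the standard four-phase formula; the modulation frequency and target range below are assumed example values.

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s
F_MOD = 20e6               # assumed modulation frequency, Hz

def amcw_depth(a0, a1, a2, a3):
    """Range from four correlation samples at 0/90/180/270 degrees."""
    phase = np.mod(np.arctan2(a3 - a1, a0 - a2), 2 * np.pi)
    return C * phase / (4 * np.pi * F_MOD)  # unambiguous up to C/(2*F_MOD)

# A target at 2.5 m produces phase 4*pi*F*d/C in the returned signal:
d_true = 2.5
phi = 4 * np.pi * F_MOD * d_true / C
samples = [np.cos(phi + k * np.pi / 2) for k in range(4)]
print(amcw_depth(*samples))                 # ~2.5
```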
Improved CPAS Photogrammetric Capabilities for Engineering Development Unit (EDU) Testing
NASA Technical Reports Server (NTRS)
Ray, Eric S.; Bretz, David R.
2013-01-01
This paper focuses on two key improvements to the photogrammetric analysis capabilities of the Capsule Parachute Assembly System (CPAS) for the Orion vehicle. The Engineering Development Unit (EDU) system deploys Drogue and Pilot parachutes via mortar, where an important metric is the muzzle velocity. This can be estimated using a high speed camera pointed along the mortar trajectory. The distance to the camera is computed from the apparent size of features of known dimension. This method was validated with a ground test and compares favorably with simulations. The second major photogrammetric product is measuring the geometry of the Main parachute cluster during steady-state descent using onboard cameras. This is challenging as the current test vehicles are suspended by a single-point attachment unlike earlier stable platforms suspended under a confluence fitting. The mathematical modeling of fly-out angles and projected areas has undergone significant revision. As the test program continues, several lessons were learned about optimizing the camera usage, installation, and settings to obtain the highest quality imagery possible.
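The ranging idea behind the muzzle-velocity estimate can be sketched with a pinhole model: range follows from the apparent size of a feature of known dimension, and velocity from the range change between frames. The focal length, feature size, and frame rate below are assumed illustrations, not CPAS values.

```python
def range_from_size(focal_px, true_size_m, apparent_size_px):
    """Pinhole relation: range = f * X / x."""
    return focal_px * true_size_m / apparent_size_px

def speeds(sizes_px, fps, focal_px=4000.0, true_size_m=0.5):
    """Finite-difference speed along the optical axis from the shrinking
    apparent size of a known-dimension feature in consecutive frames."""
    ranges = [range_from_size(focal_px, true_size_m, s) for s in sizes_px]
    return [(r2 - r1) * fps for r1, r2 in zip(ranges, ranges[1:])]

# A 0.5 m feature shrinking from 400 to 362 px over two frames at 200 fps:
print(speeds([400.0, 380.0, 362.0], fps=200))   # roughly 52 m/s per step
```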
Calcium neuroimaging in behaving zebrafish larvae using a turn-key light field camera
NASA Astrophysics Data System (ADS)
Cruz Perez, Carlos; Lauri, Antonella; Symvoulidis, Panagiotis; Cappetta, Michele; Erdmann, Arne; Westmeyer, Gil Gregor
2015-09-01
Reconstructing a three-dimensional scene from multiple simultaneously acquired perspectives (the light field) is an elegant scanless imaging concept that can exceed the temporal resolution of currently available scanning-based imaging methods for capturing fast cellular processes. We tested the performance of commercially available light field cameras on a fluorescent microscopy setup for monitoring calcium activity in the brain of awake and behaving reporter zebrafish larvae. The plenoptic imaging system could volumetrically resolve diverse neuronal response profiles throughout the zebrafish brain upon stimulation with an aversive odorant. Behavioral responses of the reporter fish could be captured simultaneously together with depth-resolved neuronal activity. Overall, our assessment showed that with some optimizations for fluorescence microscopy applications, commercial light field cameras have the potential of becoming an attractive alternative to custom-built systems to accelerate molecular imaging research on cellular dynamics.
A comparison between soft x-ray and magnetic phase data on the Madison symmetric torus
DOE Office of Scientific and Technical Information (OSTI.GOV)
VanMeter, P. D., E-mail: pvanmeter@wisc.edu; Reusch, L. M.; Sarff, J. S.
The Soft X-Ray (SXR) tomography system on the Madison Symmetric Torus uses four cameras to determine the emissivity structure of the plasma. This structure should directly correspond to the structure of the magnetic field; however, there is an apparent phase difference between the emissivity reconstructions and magnetic field reconstructions when using a cylindrical approximation. The difference between the phase of the dominant rotating helical mode of the magnetic field and the motion of the brightest line of sight for each SXR camera is dependent on both the camera viewing angle and the plasma conditions. Holding these parameters fixed, this phase difference is shown to be consistent over multiple measurements when only toroidal or poloidal magnetic field components are considered. These differences emerge from physical effects of the toroidal geometry which are not captured in the cylindrical approximation.
A novel calibration method of focused light field camera for 3-D reconstruction of flame temperature
NASA Astrophysics Data System (ADS)
Sun, Jun; Hossain, Md. Moinul; Xu, Chuan-Long; Zhang, Biao; Wang, Shi-Min
2017-05-01
This paper presents a novel geometric calibration method for a focused light field camera to trace the rays of flame radiance and to reconstruct the three-dimensional (3-D) temperature distribution of a flame. A calibration model is developed to calculate the corner points and their projections for the focused light field camera. The f-number matching characteristics of the main lens and the microlenses are used as an additional constraint for the calibration. Geometric parameters of the focused light field camera are then obtained using the Levenberg-Marquardt algorithm. Totally focused images, in which all points are in focus, are utilized to validate the proposed calibration method. Calibration results are presented and discussed in detail. The maximum mean relative error of the calibration is found to be less than 0.13%, indicating that the proposed method is capable of calibrating the focused light field camera successfully. The parameters obtained by the calibration are then utilized to trace the rays of flame radiance. A least-squares QR-factorization algorithm with Planck's radiation law is used to reconstruct the 3-D temperature distribution of a flame. Experiments were carried out on an ethylene-air-fired combustion test rig to reconstruct the temperature distribution of flames. The flame temperature obtained by the proposed method was then compared with that obtained using a high-precision thermocouple. The difference between the two measurements was found to be no greater than 6.7%. Experimental results demonstrate that the proposed calibration method and the applied measurement technique perform well in the reconstruction of the flame temperature.
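A hedged sketch of the reconstruction step named in the abstract: ray tracing yields a linear system linking voxel emissions to sensor readings, solved in the least-squares sense via QR factorization, with Planck's law relating emission to temperature. The matrix below is a random stand-in for the real ray-voxel weights.

```python
import numpy as np

H_PLANCK, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck_radiance(wavelength_m, T):
    """Blackbody spectral radiance, W / (sr * m^3)."""
    x = H_PLANCK * C / (wavelength_m * KB * T)
    return 2 * H_PLANCK * C**2 / wavelength_m**5 / (np.exp(x) - 1.0)

rng = np.random.default_rng(0)
A = rng.random((200, 50))      # stand-in ray-voxel weight matrix
e_true = rng.random(50)        # "true" voxel emission strengths
b = A @ e_true                 # simulated sensor readings

Q, R = np.linalg.qr(A)         # least-squares solve via QR factorization
e_hat = np.linalg.solve(R, Q.T @ b)
print(np.max(np.abs(e_hat - e_true)))   # ~0 for this noiseless toy
# Inverting planck_radiance at the camera wavelength then maps the
# recovered emission field to temperature.
```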
Depth estimation and camera calibration of a focused plenoptic camera for visual odometry
NASA Astrophysics Data System (ADS)
Zeller, Niclas; Quint, Franz; Stilla, Uwe
2016-08-01
This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve-fitting approach, and both show significant advantages over it. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, which enhances the search for stereo correspondences. In contrast to monocular visual odometry approaches, due to the calibration of the individual depth maps, the scale of the scene can be observed. Furthermore, due to the light-field information, better tracking capabilities compared to the monocular case can be expected. As a result, the depth information gained by the plenoptic camera based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
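The "Kalman-like" depth update can be sketched as inverse-variance fusion of scalar hypotheses; the depth and variance values below are illustrative, not from the paper.

```python
def fuse(depth_a, var_a, depth_b, var_b):
    """Inverse-variance (scalar Kalman) fusion of two depth hypotheses."""
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    depth = var * (depth_a / var_a + depth_b / var_b)
    return depth, var

# A pixel observed in three micro-images with differing confidence:
d, v = 1.90, 0.20
for obs in [(2.05, 0.10), (1.98, 0.05)]:
    d, v = fuse(d, v, *obs)
print(d, v)   # fused virtual depth, with variance reduced by each update
```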
Evaluation of area strain response of dielectric elastomer actuator using image processing technique
NASA Astrophysics Data System (ADS)
Sahu, Raj K.; Sudarshan, Koyya; Patra, Karali; Bhaumik, Shovan
2014-03-01
The dielectric elastomer actuator (DEA) is a kind of soft actuator that can produce significantly large electric-field-induced actuation strain and may be a basic unit of artificial muscles and robotic elements. Understanding strain development in a pre-stretched sample at different regimes of the electric field is essential for potential applications. In this paper, we report ongoing work on the determination of area strain using a digital camera and an image processing technique. The setup, developed in-house, consists of a low-cost digital camera, data acquisition, and an image processing algorithm. Samples were prepared from biaxially stretched acrylic tape supported between two cardboard frames. Carbon grease was pasted on both sides of the sample to serve as electrodes compliant with the large electric-field-induced deformation. Images were grabbed before and after the application of high voltage, and strain was calculated from the incremental image area as a function of the voltage applied to a pre-stretched dielectric elastomer (DE) sample. Area strain is plotted against applied voltage for differently pre-stretched samples. Our study shows that the area strain exhibits a nonlinear relationship with applied voltage, and that for the same voltage a higher area strain is generated in samples with a higher pre-stretch. Our characterization also matches well with previously published results obtained with a costly video extensometer. The study may help designers fabricate biaxially pre-stretched planar actuators from similar kinds of materials.
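In the spirit of the method, area strain can be computed by segmenting the dark carbon-grease electrode in before/after images and comparing pixel areas; the file names and threshold below are hypothetical.

```python
import cv2

def electrode_area_px(path, thresh=80):
    """Pixel area of the dark carbon-grease electrode in one image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    return cv2.countNonZero(binary)

a0 = electrode_area_px("sample_0kV.png")   # before actuation
a1 = electrode_area_px("sample_3kV.png")   # at high voltage
area_strain = (a1 - a0) / a0               # fractional area change
print(f"area strain: {100 * area_strain:.2f} %")
```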
NASA Astrophysics Data System (ADS)
Drass, Holger; Vanzi, Leonardo; Torres-Torriti, Miguel; Dünner, Rolando; Shen, Tzu-Chiang; Belmar, Francisco; Dauvin, Lousie; Staig, Tomás.; Antognini, Jonathan; Flores, Mauricio; Luco, Yerko; Béchet, Clémentine; Boettger, David; Beard, Steven; Montgomery, David; Watson, Stephen; Cabral, Alexandre; Hayati, Mahmoud; Abreu, Manuel; Rees, Phil; Cirasuolo, Michele; Taylor, William; Fairley, Alasdair
2016-08-01
The Multi-Object Optical and Near-infrared Spectrograph (MOONS) will cover the Very Large Telescope's (VLT) field of view with 1000 fibres. The fibres will be mounted on fibre positioning units (FPUs) implemented as two-DOF robot arms to ensure homogeneous coverage of the 500 square arcmin field of view. To determine the positions of the 1000 fibres quickly and accurately, a metrology system has been designed. This paper presents the hardware and software design and the performance of the metrology system. The metrology system is based on the analysis of images taken by a circular array of 12 cameras located close to the VLT's derotator ring around the Nasmyth focus. The system includes 24 individually adjustable lamps. The fibre positions are measured through dedicated metrology targets mounted on top of the FPUs and fiducial markers connected to the FPU support plate, which are imaged at the same time. A flexible pipeline based on VLT standards is used to process the images. The position accuracy was determined to be 5 μm in the central region of the images; including the outer regions, the overall positioning accuracy is 25 μm. The MOONS metrology system is fully set up with a working prototype, and the results in parts of the images are already excellent. By using upcoming hardware and improving the calibration, it is expected to fulfil the accuracy requirement over the complete field of view for all metrology cameras.
Electro-optical system for gunshot detection: analysis, concept, and performance
NASA Astrophysics Data System (ADS)
Kastek, M.; Dulski, R.; Madura, H.; Trzaskawka, P.; Bieszczad, G.; Sosnowski, T.
2011-08-01
The paper discusses the technical possibilities of building an effective electro-optical sensor unit for sniper detection using infrared cameras. This unit, comprising thermal and daylight cameras, can operate as a standalone device, but its primary application is in a multi-sensor sniper and shot detection system. First, an analysis is presented of three distinct phases of sniper activity: before, during, and after the shot. On the basis of experimental data, the parameters defining the relevant sniper signatures were determined, which are essential in assessing the capability of an infrared camera to detect sniper activity. A sniper body and muzzle flash were analyzed as targets; the phenomena that make it possible to detect sniper activities in the infrared spectra were described, and the physical limitations were analyzed. The infrared systems under consideration were simulated using NVTherm software. Calculations were performed for several cameras equipped with different lenses and detector types, and the simulation of detection ranges was performed for selected scenarios of sniper detection tasks. After analyzing the simulation results, the technical specifications of an infrared sniper detection system required to provide the assumed detection range were discussed. Finally, an infrared camera setup was proposed which can detect a sniper at a range of 1000 meters.
NASA Technical Reports Server (NTRS)
Stefanov, William L.; Lee, Yeon Jin; Dille, Michael
2016-01-01
Handheld astronaut photography of the Earth has been collected from the International Space Station (ISS) since 2000, making it the most temporally extensive remotely sensed dataset from this unique Low Earth orbital platform. Exclusive use of digital handheld cameras to perform Earth observations from the ISS began in 2004. Nadir-viewing imagery is constrained by the inclined equatorial orbit of the ISS to between 51.6 degrees North and South latitude; however, numerous oblique images of land surfaces above these latitudes are included in the dataset. While unmodified commercial off-the-shelf digital cameras provide only visible-wavelength, three-band spectral information of limited quality, current cameras used with long (400+ mm) lenses can obtain high-quality spatial information approaching 2 meters/pixel ground resolution. The dataset is freely available online at the Gateway to Astronaut Photography of Earth site (http://eol.jsc.nasa.gov), and now comprises over 2 million images. Despite this extensive image catalog, use of the data for scientific research, disaster response, commercial applications, and visualizations is minimal in comparison to other data collected from free-flying satellite platforms such as Landsat, Worldview, etc. This is due primarily to the lack of fully georeferenced data products - while current digital cameras typically have integrated GPS, this does not function in the Low Earth Orbit environment. The Earth Science and Remote Sensing (ESRS) Unit at NASA Johnson Space Center provides training in Earth Science topics to ISS crews, performs daily operations and Earth observation target delivery to crews through the Crew Earth Observations (CEO) Facility on board the ISS, and also catalogs digital handheld imagery acquired from orbit by manually adding descriptive metadata and determining an image geographic centerpoint using visual feature matching with other georeferenced data, e.g. Landsat, Google Earth, etc. The lack of full geolocation information native to the data makes it difficult to integrate astronaut photographs with other georeferenced data to facilitate quantitative analysis such as urban land cover/land use classification, change detection, or geologic mapping. The manual determination of image centerpoints is both time- and labor-intensive, leading to delays in releasing geolocated and cataloged data to the public, such as the timely use of data for disaster response. The GeoCam Space project was funded by the ISS Program in 2015 to develop an on-orbit hardware and ground-based software system for increasing the efficiency of geolocating astronaut photographs from the ISS (Fig. 1). The Intelligent Robotics Group at NASA Ames Research Center leads the development of both the ground and on-orbit systems in collaboration with the ESRS Unit. The hardware component consists of modified smartphone elements including cameras, a central processing unit, wireless Ethernet, and an inertial measurement unit (gyroscopes/accelerometers/magnetometers) reconfigured into a compact unit that attaches to the base of the current Nikon D4 camera - and its replacement, the Nikon D5 - and connects using the standard Nikon peripheral connector or USB port. This provides secondary, side- and downward-facing cameras perpendicular to the primary camera pointing direction. The secondary cameras observe calibration targets with known internal X, Y, and Z positions affixed to the interior of the ISS to determine the camera pose corresponding to each image frame.
This information is recorded by the GeoCam Space unit and indexed for correlation to the camera time recorded for each image frame. Data - image, EXIF header, and camera pose information - are transmitted to the ground software system (GeoRef) using the established Ku-band USOS downlink system. Following integration on the ground, the camera pose information provides an initial geolocation estimate for the individual image frame. This new capability represents a significant advance in geolocation over the manual feature-matching approach for both nadir- and off-nadir-viewing imagery. With the initial geolocation estimate, full georeferencing of an image is completed using the rapid tie-pointing interface in GeoRef, and the resulting data are added to the Gateway to Astronaut Photography of Earth online database in both GeoTIFF and Keyhole Markup Language (KML) formats. The integration of the GeoRef software component of GeoCam Space into the CEO image cataloging workflow is complete, and disaster response imagery acquired by the ISS crew is now fully georeferenced as a standard data product. The on-orbit hardware component (GeoSens) is in its final prototyping phase and is on schedule for launch to the ISS in late 2016. Installation and routine use of the GeoCam Space system for handheld digital camera photography from the ISS is expected to significantly improve the usefulness of this unique dataset for a variety of public- and private-sector applications.
Comparison of 10 digital SLR cameras for orthodontic photography.
Bister, D; Mordarai, F; Aveling, R M
2006-09-01
Digital photography is now widely used to document orthodontic patients. High quality intra-oral photography depends on a satisfactory 'depth of field' focus and good illumination. Automatic 'through the lens' (TTL) metering is ideal to achieve both the above aims. Ten current digital single lens reflex (SLR) cameras were tested for use in intra- and extra-oral photography as used in orthodontics. The manufacturers' recommended macro-lens and macro-flash were used with each camera. Handling characteristics, colour-reproducibility, quality of the viewfinder and flash recharge time were investigated. No camera took acceptable images in factory default setting or 'automatic' mode: this mode was not present for some cameras (Nikon, Fujifilm); led to overexposure (Olympus) or poor depth of field (Canon, Konica-Minolta, Pentax), particularly for intra-oral views. Once adjusted, only Olympus cameras were able to take intra- and extra-oral photographs without the need to change settings, and were therefore the easiest to use. All other cameras needed adjustments of aperture (Canon, Konica-Minolta, Pentax), or aperture and flash (Fujifilm, Nikon), making the latter the most complex to use. However, all cameras produced high quality intra- and extra-oral images, once appropriately adjusted. The resolution of the images is more than satisfactory for all cameras. There were significant differences relating to the quality of colour reproduction, size and brightness of the viewfinders. The Nikon D100 and Fujifilm S 3 Pro consistently scored best for colour fidelity. Pentax and Konica-Minolta had the largest and brightest viewfinders.
Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Previtali, M.; Roncoroni, F.
2018-05-01
360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.
Restoring the spatial resolution of refocus images on 4D light field
NASA Astrophysics Data System (ADS)
Lim, JaeGuyn; Park, ByungKwan; Kang, JooYoung; Lee, SeongDeok
2010-01-01
This paper presents a method for generating a refocused image with restored spatial resolution on a plenoptic camera, which, unlike a traditional camera, allows the depth of field to be controlled after a single image is captured. Such a camera records the 4D light field (the angular and spatial information of light) within a limited 2D sensor, reducing the 2D spatial resolution because of the inevitable 2D angular data; this is why a refocused image has low spatial resolution compared with the 2D sensor. However, it has recently been shown that the angular data contain sub-pixel spatial information, so the spatial resolution of the 4D light field can be increased. We exploit this fact to improve the spatial resolution of a refocused image. We have experimentally verified that the sub-pixel spatial information differs according to the depth of objects from the camera. Therefore, for the selected refocused regions (at the corresponding depth), we use the corresponding pre-estimated sub-pixel spatial information to reconstruct the spatial resolution of those regions, while other regions remain out of focus. Our experimental results show the effect of the proposed method compared to the existing method.
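The baseline operation being improved on, synthetic-aperture refocusing by shift-and-add over the 4D light field, can be sketched as follows; the light field dimensions are assumed, and integer pixel shifts are used for brevity.

```python
import numpy as np

def refocus(L, alpha):
    """Shift-and-add refocus of a light field L[u, v, y, x] at depth alpha:
    each angular view is shifted proportionally to (u, v) and averaged."""
    U, V, Y, X = L.shape
    out = np.zeros((Y, X))
    for u in range(U):
        for v in range(V):
            dy = int(round((u - U // 2) * (1 - 1 / alpha)))
            dx = int(round((v - V // 2) * (1 - 1 / alpha)))
            out += np.roll(L[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

L = np.random.rand(5, 5, 64, 64)   # hypothetical decoded 4D light field
image = refocus(L, alpha=1.2)      # one low-resolution refocused plane
```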
Nondestructive evaluation using dipole model analysis with a scan type magnetic camera
NASA Astrophysics Data System (ADS)
Lee, Jinyi; Hwang, Jiseong
2005-12-01
Large structures such as nuclear power, thermal power, and chemical and petroleum refining plants are drawing interest with regard to the economics of extending component life in the harsh environment created by high pressure, high temperature, and fatigue, and with regard to securing safety against corrosion as components exceed their designated life span. Therefore, technology that accurately calculates and predicts the degradation and defects of aging materials is extremely important. Among the available methods, nondestructive testing using magnetic methods is effective in predicting and evaluating defects on or near the surface of ferromagnetic structures. It is important to estimate the distribution of magnetic field intensity for magnetic methods applicable to industrial nondestructive evaluation. A magnetic camera provides the distribution of a quantitative magnetic field with a homogeneous lift-off and spatial resolution, and this distribution can be interpreted by introducing a dipole model. This study proposes an algorithm for nondestructive evaluation using dipole model analysis with a scan-type magnetic camera. Numerical and experimental considerations of the quantitative evaluation of cracks of several sizes and shapes using magnetic field images from the magnetic camera are examined.
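A hedged sketch of the dipole model referred to here: the leakage field of a small defect is approximated by a point dipole, B(r) = (mu0/4pi)(3r(m·r)/|r|^5 - m/|r|^3), sampled on a constant lift-off plane as a scan-type magnetic camera would; the moment and lift-off are assumed values.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability

def dipole_field(r, m):
    """Flux density (T) at offsets r (Nx3) from a point dipole m (3,)."""
    rn = np.linalg.norm(r, axis=1, keepdims=True)
    return MU0 / (4 * np.pi) * (3 * r * (r @ m)[:, None] / rn**5 - m / rn**3)

# Sample Bz on a 2 mm lift-off plane, as a scan-type magnetic camera would:
xs = np.linspace(-0.01, 0.01, 41)
grid = np.array([[x, y, 0.002] for y in xs for x in xs])
Bz = dipole_field(grid, np.array([0.0, 0.0, 1e-6]))[:, 2].reshape(41, 41)
```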
NASA Astrophysics Data System (ADS)
Zhang, Bing; Li, Kunyang
2018-02-01
The “Breakthrough Starshot” aims at sending near-speed-of-light cameras to nearby stellar systems in the future. Due to the relativistic effects, a transrelativistic camera naturally serves as a spectrograph, a lens, and a wide-field camera. We demonstrate this through a simulation of the optical-band image of the nearby galaxy M51 in the rest frame of the transrelativistic camera. We suggest that observing celestial objects using a transrelativistic camera may allow one to study the astronomical objects in a special way, and to perform unique tests on the principles of special relativity. We outline several examples that suggest transrelativistic cameras may make important contributions to astrophysics and suggest that the Breakthrough Starshot cameras may be launched in any direction to serve as a unique astronomical observatory.
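The "spectrograph, lens, and wide-field camera" behavior follows from relativistic aberration and Doppler shift, which can be sketched directly; the speed beta below is an assumed example, not a Starshot specification.

```python
import numpy as np

def aberrate(theta_rest, beta):
    """Observed angle of a source at rest-frame angle theta_rest (rad)."""
    return np.arccos((np.cos(theta_rest) + beta)
                     / (1 + beta * np.cos(theta_rest)))

def doppler_factor(theta_obs, beta):
    """nu_obs / nu_rest at the observed angle (the spectrograph effect)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(theta_obs)))

beta = 0.2                        # assumed transrelativistic speed
th_obs = aberrate(np.deg2rad(60.0), beta)
print(np.rad2deg(th_obs))            # < 60: the sky bunches toward the motion
print(doppler_factor(th_obs, beta))  # > 1: blueshift in the forward direction
```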
Context-based handover of persons in crowd and riot scenarios
NASA Astrophysics Data System (ADS)
Metzler, Jürgen
2015-02-01
In order to control riots in crowds, it is helpful to get ringleaders under control and pull them out of the crowd once they have become offenders. A great support in achieving these tasks is the capability of observing the crowd and ringleaders automatically using cameras, which also allows better conservation of evidence in riot control. A ringleader who has become an offender should be tracked across, and recognized by, several cameras, regardless of whether the cameras' fields of view overlap. We propose a context-based approach for the handover of persons between different camera fields of view. This approach can be applied to overlapping as well as non-overlapping fields of view, so that fast and accurate identification of individual persons in camera networks is feasible. Within the scope of this paper, the approach is applied to the handover of persons between single images without any temporal information. It is particularly developed for semi-automatic video editing and the handover of persons between cameras in order to improve the conservation of evidence. The approach was developed on a dataset collected during a Crowd and Riot Control (CRC) training exercise of the German armed forces, which consisted of three levels of escalation: first, a peaceful demonstration; later, violent protests; and third, an escalated riot in which offenders bumped into the chain of guards. One result of the work is a reliable context-based method for person re-identification between single images from different camera fields of view in crowd and riot scenarios. Furthermore, a qualitative assessment shows that the use of contextual information can additionally support this task: it can decrease the time needed for handover and the number of confusions, which supports the conservation of evidence in crowd and riot scenarios.
New Modular Camera No Ordinary Joe
NASA Technical Reports Server (NTRS)
2003-01-01
Although dubbed 'Little Joe' for its small-format characteristics, a new wavefront sensor camera has proved that it is far from coming up short when paired with high-speed, low-noise applications. SciMeasure Analytical Systems, Inc., a provider of cameras and imaging accessories for use in biomedical research and industrial inspection and quality control, is the eye behind Little Joe's shutter, manufacturing and selling the modular, multi-purpose camera worldwide to advance fields such as astronomy, neurobiology, and cardiology.
Monitoring tigers with confidence.
Linkie, Matthew; Guillera-Arroita, Gurutzeta; Smith, Joseph; Rayan, D Mark
2010-12-01
With only 5% of the world's wild tigers (Panthera tigris Linnaeus, 1758) remaining since the last century, conservationists urgently need to know whether or not the management strategies currently being employed are effectively protecting these tigers. This knowledge is contingent on the ability to reliably monitor tiger populations, or subsets thereof, over space and time. In this paper, we focus on the 2 seminal methodologies (camera trap and occupancy surveys) that have enabled the monitoring of tiger populations with greater confidence. Specifically, we: (i) describe their statistical theory and application in the field; (ii) discuss issues associated with their survey designs and state variable modeling; and (iii) discuss their future directions. These methods have had an unprecedented influence on increasing statistical rigor within tiger surveys and, also, surveys of other carnivore species. Nevertheless, only 2 published camera trap studies have gone beyond single baseline assessments and actually monitored population trends. For low-density tiger populations (e.g. <1 adult tiger/100 km²), obtaining sufficient precision for state variable estimates from camera trapping remains a challenge because of insufficient detection probabilities and/or sample sizes. Occupancy surveys have overcome this problem by redefining the sampling unit (e.g. grid cells and not individual tigers). Current research is focusing on developing spatially explicit capture-mark-recapture models and estimating abundance indices from landscape-scale occupancy surveys, as well as the use of genetic information for identifying and monitoring tigers. The widespread application of these monitoring methods in the field now enables complementary studies on the impact of different threats to tiger populations and their response to varying management interventions. © 2010 ISZS, Blackwell Publishing and IOZ/CAS.
The advantages of using a Lucky Imaging camera for observations of microlensing events
NASA Astrophysics Data System (ADS)
Sajadian, Sedighe; Rahvar, Sohrab; Dominik, Martin; Hundertmark, Markus
2016-05-01
In this work, we study the advantages of using a Lucky Imaging camera for observations of potential planetary microlensing events. Our aim is to reduce the blending effect and enhance exoplanet signals in binary lensing systems composed of an exoplanet and its parent star. We simulate planetary microlensing light curves based on present microlensing surveys and follow-up telescopes, one of which is equipped with a Lucky Imaging camera; this camera is used at the Danish 1.54-m follow-up telescope. Using a specific observational strategy, for an Earth-mass planet in the resonance regime, where the detection probability in crowded fields is otherwise small, Lucky Imaging observations improve the detection efficiency, which reaches 2 per cent. Given the difficulty of detecting the signal of an Earth-mass planet in crowded-field imaging even in the resonance regime with conventional cameras, we show that Lucky Imaging can substantially improve the detection efficiency.
Game theoretic approach for cooperative feature extraction in camera networks
NASA Astrophysics Data System (ADS)
Redondi, Alessandro E. C.; Baroffio, Luca; Cesana, Matteo; Tagliasacchi, Marco
2016-07-01
Visual sensor networks (VSNs) consist of several camera nodes with wireless communication capabilities that can perform visual analysis tasks such as object identification, recognition, and tracking. Often, VSN deployments result in many camera nodes with overlapping fields of view. In the past, such redundancy has been exploited in two different ways: (1) to improve the accuracy/quality of the visual analysis task by exploiting multiview information or (2) to reduce the energy consumed for performing the visual task, by applying temporal scheduling techniques among the cameras. We propose a game theoretic framework based on the Nash bargaining solution to bridge the gap between the two aforementioned approaches. The key tenet of the proposed framework is for cameras to reduce the consumed energy in the analysis process by exploiting the redundancy in the reciprocal fields of view. Experimental results in both simulated and real-life scenarios confirm that the proposed scheme is able to increase the network lifetime, with a negligible loss in terms of visual analysis accuracy.
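To make the bargaining idea concrete, here is a toy instance of the Nash bargaining solution for two cameras splitting the analysis of a shared overlap region; the energy costs, utility form, and disagreement point are invented for illustration and are not the paper's actual formulation.

```python
# Toy Nash bargaining split between two cameras with overlapping views.
# Utilities are (negative) energy costs; the disagreement point d is "no
# cooperation, each analyses the whole overlap alone". All values invented.
import numpy as np

e_full = np.array([1.0, 1.4])  # energy for each camera to process the whole overlap
d = -e_full                    # disagreement point: both pay the full cost

def utilities(x):
    """x = fraction of the overlap processed by camera 0; camera 1 does the rest."""
    return np.array([-e_full[0] * x, -e_full[1] * (1 - x)])

# Nash bargaining solution: maximise the product of utility gains over d
xs = np.linspace(0.0, 1.0, 1001)
products = [np.prod(utilities(x) - d) for x in xs]
x_star = xs[int(np.argmax(products))]
print(f"Nash bargaining share for camera 0: {x_star:.3f}")  # 0.5 for this symmetric form
```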
19 CFR 210.39 - In camera treatment of confidential information.
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 19 (Customs Duties), Vol. 3, revised as of 2010-04-01. Section 210.39: United States International Trade Commission, Investigations of Unfair Practices in Import Trade, Adjudication and Enforcement, Prehearing Conferences and Hearings.
In-Situ Cameras for Radiometric Correction of Remotely Sensed Data
NASA Astrophysics Data System (ADS)
Kautz, Jess S.
The atmosphere distorts the spectrum of remotely sensed data, negatively affecting all forms of investigation of Earth's surface. To gather reliable data, it is vital that atmospheric corrections be accurate, yet the current state of the field does not account well for the benefits and costs of different correction algorithms, and ground spectral data are required to evaluate these algorithms better. This dissertation explores using cameras as radiometers as a means of gathering ground spectral data. I introduce techniques for implementing a camera system for atmospheric correction using off-the-shelf parts. To aid the design of future camera systems for radiometric correction, methods are explored for estimating system error prior to construction, calibration, and testing of the resulting camera system. Simulations are used to investigate the relationship between the reflectance accuracy of the camera system and the quality of atmospheric correction. In the design phase, read noise and filter choice are found to be the strongest sources of system error. I explain the calibration methods for the camera system, including the problems of pixel-to-angle calibration and of adapting a web camera for scientific work. The camera system is tested in the field to estimate its ability to recover directional reflectance from BRF data. I estimate the error in the system due to the experimental setup, then explore how the system error changes with different cameras, environmental setups, and inversions. These experiments highlight the importance of the camera's dynamic range and of the input ranges used for the PROSAIL inversion. Evidence that the camera can perform within the specification set in this dissertation for ELM correction is evaluated. The analysis concludes by simulating an ELM correction of a scene using various numbers of calibration targets and levels of system error, to find the number of cameras needed for a full-scale implementation.
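The end goal above, an empirical line method (ELM) correction fed by camera-derived ground reflectance, reduces per band to a linear fit between at-sensor radiance over calibration targets and their known reflectance. A minimal sketch follows; the target radiance and reflectance values are invented placeholders.

```python
# Minimal empirical line method (ELM) sketch: fit reflectance ~ gain*L + offset
# over calibration targets, then invert the whole scene. Values are invented.
import numpy as np

radiance = np.array([42.1, 78.5, 121.3])     # at-sensor radiance over three targets
reflectance = np.array([0.05, 0.25, 0.50])   # ground reflectance of those targets

gain, offset = np.polyfit(radiance, reflectance, 1)

def to_reflectance(scene_radiance):
    """Apply the per-band ELM correction to a whole-scene radiance array."""
    return gain * np.asarray(scene_radiance) + offset
```

More calibration targets (and hence more ground cameras) over-determine the fit and reduce the sensitivity to any one target's error, which is exactly the trade the dissertation's closing simulation quantifies.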
Wide field/planetary camera optics study. [for the large space telescope
NASA Technical Reports Server (NTRS)
1979-01-01
Design feasibility of the baseline optical design concept was established for the wide field/planetary camera (WF/PC), which will be used with the space telescope (ST) to obtain high angular resolution astronomical information over a wide field. The design concept employs internal optics to relay the ST image to a CCD detector system. Optical design performance predictions, sensitivity and tolerance analyses, manufacturability of the optical components, and acceptance testing of the two-mirror Cassegrain relays are discussed.
A demonstration of a low cost approach to security at shipping facilities and ports
NASA Astrophysics Data System (ADS)
Huck, Robert C.; Al Akkoumi, Mouhammad K.; Herath, Ruchira W.; Sluss, James J., Jr.; Radhakrishnan, Sridhar; Landers, Thomas L.
2010-04-01
Government funding for security at shipping facilities and ports is limited, so there is a need for low-cost, scalable security systems. With over 20 million sea, truck, and rail containers entering the United States every year, these facilities pose a large security risk. Securing these facilities and monitoring the variety of traffic that enters and leaves is a major task. To accomplish this, the authors have developed and fielded a low-cost, fully distributed, building-block approach to port security at the inland Port of Catoosa in Oklahoma. Based on prior work in the design and fielding of an intelligent transportation system in the United States, functional building blocks (e.g., Network, Camera, Sensor, Display, and Operator Console blocks) can be assembled, mixed and matched, and scaled to provide a comprehensive security system. The following functions are demonstrated and scaled through analysis and demonstration: barge tracking, credential checking, container inventory, vehicle tracking, and situational awareness. The concept behind this research is that "any operator on any console can control any device at any time."
Jaguar surveying and monitoring in the United States
Culver, Melanie
2016-06-10
This project established and implemented a noninvasive system for detecting and monitoring jaguars. The study area incorporates most of the mountainous areas north of the United States-Mexico international border and south of Interstate 10, from the Baboquivari Mountains in Arizona to the Animas Mountains in New Mexico. We used two primary methods to detect exact jaguar locations: paired motion-sensor trail cameras, and genetic testing of large carnivore scat collected in the field. We emphasize that this project used entirely noninvasive methods and no jaguars were captured, radiocollared, baited, or harassed in any way. Scat sample collection occurred during the entire field part of the study, but was intensified with the use of a trained scat detection dog following the first jaguar photo detection event (photo detection event was October 2012, scat detection dog began working January 2013). We also collected weather, vegetation, and geographic information system (GIS) data to analyze in conjunction with photo and video data. The results of this study are intended to aid and inform future management and conservation practices for jaguars and ocelots in this region.
Wallace, Robert; Ayala, Guido; Viscarra, Maria
2012-12-01
Lowland tapir distribution is described in northwestern Bolivia and southeastern Peru within the Greater Madidi-Tambopata Landscape, a priority Tapir Conservation Unit, using 1255 distribution points derived from camera trapping efforts, field research and interviews with park guards from 5 national protected areas and hunters from 19 local communities. A total of 392 independent camera trapping events from 14 camera trap surveys at 11 sites demonstrated the nocturnal and crepuscular activity patterns (86%) of the lowland tapir and provide 3 indices of relative abundance for spatial and temporal comparison. Capture rates for lowland tapirs were not significantly different between camera trapping stations placed on river beaches versus those placed in the forest. Lowland tapir capture rates were significantly higher in the national protected areas of the region versus indigenous territories and unprotected portions of the landscape. Capture rates through time suggested that lowland tapir populations are recovering within the Tuichi Valley, an area currently dedicated towards ecotourism activities, following the creation (1995) and subsequent implementation (1997) of the Madidi National Park in Bolivia. Based on our distributional data and published conservative estimates of population density, we calculated that this transboundary landscape holds an overall lowland tapir population of between 14 540 and 36 351 individuals, of which at least 24.3% are under protection from national and municipal parks. As such, the Greater Madidi-Tambopata Landscape should be considered a lowland tapir population stronghold and priority conservation efforts are discussed in order to maintain this population. © 2012 Wiley Publishing Asia Pty Ltd, ISZS and IOZ/CAS.
Image based performance analysis of thermal imagers
NASA Astrophysics Data System (ADS)
Wegner, D.; Repasi, E.
2016-05-01
Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image-capturing capability of thermal cameras, enhancing the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g. requiring a field trial and/or an observer trial). A thermal camera equipped with turbulence mitigation capability is an example of such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around an IR scene projector, which is necessary for the thermal projection of an image sequence to the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be, for example, the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas for test scene selection, and for how to assemble an imaging suite (a set of image sequences) for the analysis of imaging thermal systems containing such black boxes in the image-forming path, are discussed.
Image quality prediction - An aid to the Viking lander imaging investigation on Mars
NASA Technical Reports Server (NTRS)
Huck, F. O.; Wall, S. D.
1976-01-01
Image quality criteria and image quality predictions are formulated for the multispectral panoramic cameras carried by the Viking Mars landers. Image quality predictions are based on expected camera performance, Mars surface radiance, and lighting and viewing geometry (fields of view, Mars lander shadows, solar day-night alternation), and are needed for diagnosing camera performance, for arriving at a preflight imaging strategy, and for revising that strategy should the need arise. Landing considerations, camera control instructions, camera control logic, aspects of the imaging process (spectral response, spatial response, sensitivity), and likely problems are discussed. Major concerns include: degradation of camera response by isotope radiation, uncertainties in lighting and viewing geometry and in landing site local topography, contamination of the camera window by dust abrasion, and initial errors in assigning camera dynamic ranges (gains and offsets).
Computer-generated hologram calculation for real scenes using a commercial portable plenoptic camera
NASA Astrophysics Data System (ADS)
Endo, Yutaka; Wakunami, Koki; Shimobaba, Tomoyoshi; Kakue, Takashi; Arai, Daisuke; Ichihashi, Yasuyuki; Yamamoto, Kenji; Ito, Tomoyoshi
2015-12-01
This paper shows the process used to calculate a computer-generated hologram (CGH) for real scenes under natural light using a commercial portable plenoptic camera. In the CGH calculation, a light field captured with the commercial plenoptic camera is converted into a complex amplitude distribution. Then the converted complex amplitude is propagated to a CGH plane. We tested both numerical and optical reconstructions of the CGH and showed that the CGH calculation from captured data with the commercial plenoptic camera was successful.
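The propagation step of the pipeline described above, taking a converted complex amplitude to the CGH plane, is commonly implemented with the angular spectrum method; a minimal sketch follows. The wavelength, sampling pitch, distance, and the placeholder input field are assumptions for illustration, not values from the paper.

```python
# Angular spectrum propagation sketch: FFT the field, multiply by the free-space
# transfer function, inverse FFT. All parameter values are illustrative.
import numpy as np

def angular_spectrum(u0, wavelength, pitch, z):
    """Propagate a square complex field u0 by distance z."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)            # spatial frequencies [1/m]
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - fxx**2 - fyy**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent terms dropped
    return np.fft.ifft2(np.fft.fft2(u0) * np.exp(1j * kz * z))

u0 = np.ones((512, 512), dtype=complex)        # placeholder converted amplitude
cgh_field = angular_spectrum(u0, wavelength=532e-9, pitch=8e-6, z=0.05)
```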
A small field of view camera for hybrid gamma and optical imaging
NASA Astrophysics Data System (ADS)
Lees, J. E.; Bugby, S. L.; Bhatia, B. S.; Jambi, L. K.; Alqahtani, M. S.; McKnight, W. R.; Ng, A. H.; Perkins, A. C.
2014-12-01
The development of compact low profile gamma-ray detectors has allowed the production of small field of view, hand held imaging devices for use at the patient bedside and in operating theatres. The combination of an optical and a gamma camera, in a co-aligned configuration, offers high spatial resolution multi-modal imaging giving a superimposed scintigraphic and optical image. This innovative introduction of hybrid imaging offers new possibilities for assisting surgeons in localising the site of uptake in procedures such as sentinel node detection. Recent improvements to the camera system along with results of phantom and clinical imaging are reported.
Camera systems in human motion analysis for biomedical applications
NASA Astrophysics Data System (ADS)
Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.
2015-05-01
Human Motion Analysis (HMA) systems have been one of the major interests among researchers in the fields of computer vision, artificial intelligence, and biomedical engineering and sciences. This is due to their wide and promising biomedical applications, namely, bio-instrumentation for human-computer interfacing and surveillance systems for monitoring human behaviour, as well as the analysis of biomedical signals and images for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera systems used in HMA and their taxonomy, including camera types, camera calibration, and camera configuration. The review focuses on evaluating camera system considerations for HMA systems specifically in biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners in selecting a camera system for an HMA system for biomedical applications.
General Model of Photon-Pair Detection with an Image Sensor
NASA Astrophysics Data System (ADS)
Defienne, Hugo; Reichert, Matthew; Fleischer, Jason W.
2018-05-01
We develop an analytic model that relates intensity correlation measurements performed by an image sensor to the properties of photon pairs illuminating it. Experiments using an effective single-photon counting camera, a linear electron-multiplying charge-coupled device camera, and a standard CCD camera confirm the model. The results open the field of quantum optical sensing using conventional detectors.
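The underlying measurement the model describes is the pixel-pixel intensity covariance accumulated over a stack of camera frames; a minimal sketch is shown below with random placeholder data. Under real photon-pair illumination, correlated pixel pairs would appear as structured positive off-diagonal terms in this matrix.

```python
# Pixel-pixel intensity covariance, <I_i I_j> - <I_i><I_j>, over a frame stack.
# Frames here are Poisson placeholders, not photon-pair data.
import numpy as np

frames = np.random.poisson(1.0, size=(10000, 32, 32)).astype(float)

flat = frames.reshape(frames.shape[0], -1)     # (n_frames, n_pixels)
mean = flat.mean(axis=0)
covariance = flat.T @ flat / flat.shape[0] - np.outer(mean, mean)
```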
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flaugher, B.; Diehl, H. T.; Alvarez, O.
2015-11-15
The Dark Energy Camera is a new imager with a 2.2° diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.263″ per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6-9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
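The quoted optical numbers are mutually consistent: with the small-angle relation plate scale ["/px] = 206265 ["/rad] × pixel size / focal length, the 15 μm pixels and 0.263″ per pixel scale imply the corrected prime-focus focal length, as the quick check below shows.

```python
# Consistency check of the abstract's numbers (all values taken from the text).
pixel_size = 15e-6   # m
plate_scale = 0.263  # arcsec per pixel
focal_length = 206265 * pixel_size / plate_scale
print(f"implied focal length: {focal_length:.2f} m")  # ~11.8 m, i.e. ~f/2.9 on the 4 m Blanco
```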
Non-invasive diagnostics of ion beams in strong toroidal magnetic fields with standard CMOS cameras
NASA Astrophysics Data System (ADS)
Ates, Adem; Ates, Yakup; Niebuhr, Heiko; Ratzinger, Ulrich
2018-01-01
A superconducting Figure-8 stellarator-type magnetostatic storage ring (F8SR) is under investigation at the Institute for Applied Physics (IAP) at Goethe University Frankfurt. Besides numerical simulations of an optimized design for beam transport and injection, a scaled-down (0.6 T) experiment with two 30° toroidal magnets has been set up for further investigations. A great challenge is the development of a non-destructive, magnetically insensitive, and flexible detector for local investigations of an ion beam propagating through the toroidal magnetostatic field. This paper introduces a new way of measuring the beam path by residual gas monitoring. It uses a single-board camera connected to a standard single-board computer by a camera serial interface, all placed inside the vacuum chamber. First experiments were done with one camera; in a next step, two cameras arranged at 90° were installed. With the help of the two cameras, which are movable along the beam pipe, the theoretical predictions were successfully verified experimentally, confirming previous experimental results. The transport of H+ and H2+ ion beams with energies of 7 keV and beam currents of about 1 mA was successfully investigated.
2003-09-04
KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA) (above) threads a camera under the tiles of the orbiter Endeavour, Peggy Ritchie, USA, (behind the stand) and NASA’s Richard Parker (seated) watch the images on a monitor to inspect for corrosion.
2003-09-04
KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA), (above) threads a camera under the tiles of the orbiter Endeavour, NASA’s Richard Parker (below left) and Peggy Ritchie, with USA, (at right) watch the images on a monitor to inspect for corrosion.
2003-09-04
KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA), (above) threads a camera under the tiles of the orbiter Endeavour, Peggy Ritchie, with USA, (behind the stand) and NASA’s Richard Parker watch the images on a monitor to inspect for corrosion.
NASA Astrophysics Data System (ADS)
Kang, Sungil; Roh, Annah; Nam, Bodam; Hong, Hyunki
2011-12-01
This paper presents a novel vision system for people detection using an omnidirectional camera mounted on a mobile robot. In order to determine regions of interest (ROI), we compute a dense optical flow map using graphics processing units, which enable us to examine compliance with the ego-motion of the robot in a dynamic environment. Shape-based classification algorithms are employed to sort ROIs into human beings and nonhumans. The experimental results show that the proposed system detects people more precisely than previous methods.
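A rough sketch of the ROI step described above follows: dense optical flow between consecutive frames (OpenCV's CPU Farneback implementation here, standing in for the paper's GPU flow), with pixels whose flow deviates from a predicted ego-motion field flagged as candidates. The ego-motion input and the pixel threshold are assumptions.

```python
# ROI candidates = pixels whose measured flow disagrees with the robot's
# predicted ego-motion flow (ego_flow, shape HxWx2, supplied by the caller,
# e.g. from odometry). Threshold in pixels is arbitrary.
import cv2
import numpy as np

def candidate_mask(prev_gray, cur_gray, ego_flow, threshold=2.0):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    residual = np.linalg.norm(flow - ego_flow, axis=2)
    return residual > threshold  # True where motion is independent of the robot
```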
Two bright fireballs over Great Britain
NASA Astrophysics Data System (ADS)
Koukal, Jakub; Káčerek, Richard
2018-02-01
On November 24, 2017, shortly before midnight, and on November 25, 2017, shortly before sunrise, two very bright fireballs lit up the sky over the United Kingdom. The UKMON (United Kingdom Meteor Observation Network) cameras and onboard automobile cameras recorded their flight. The fireballs' paths in the Earth's atmosphere were calculated, as well as the orbits of the bodies in the Solar System. The flight of both bodies, whose absolute magnitude approached the brightness of the full Moon, was also observed by numerous chance observers from the public in Great Britain, Ireland, and France.
Advances and tendencies in the development of display technologies
NASA Astrophysics Data System (ADS)
Kompanets, I. N.
2006-06-01
Advances and key display applications are discussed, including computer, compact mobile, TV, and collective large-screen displays. Flat-panel displays are overtaking CRT devices and were expected to leave them behind in 2007. Materials, active matrices, and applications of bright radiative field-emission and organic LED displays are developing steadily, pressing other technologies in photo cameras, cellular phones, automobiles, and avionics. Progress in flexible screens could substantially extend display design and applications soon. 3D display systems are under intensive development, and the laser is an important unit in some variants of holographic and volumetric 3D displays. A value forecast for different display markets is presented.
Pilot Kent Rominger floats in tunnel
1995-10-24
STS073-E-5053 (26 Oct. 1995) --- Astronaut Kent V. Rominger, STS-73 pilot, floats through a tunnel connecting the space shuttle Columbia's cabin and its science module. Rominger is one of seven crewmembers in the midst of a 16-day multi-faceted mission aboard Columbia. For the next week and a half, the crew will continue working in shifts around the clock on a diverse assortment of United States Microgravity Laboratory (USML-2) experiments located in the science module. Fields of study include fluid physics, materials science, biotechnology, combustion science and commercial space processing technologies. The frame was exposed with an Electronic Still Camera (ESC).
Hand-Held Self-Maneuvering Unit to be used during EVA on Gemini 4
1965-06-02
Hand-Held Self-Maneuvering Unit to be used during extravehicular activity (EVA) on Gemini 4 flight. It is an integral unit that contains its own high pressure metering valves and nozzles required to produce controlled thrust. A camera is mounted on the front of the unit.
Plenoptic background oriented schlieren imaging
NASA Astrophysics Data System (ADS)
Klemkowsky, Jenna N.; Fahringer, Timothy W.; Clifford, Christopher J.; Bathel, Brett F.; Thurow, Brian S.
2017-09-01
The combination of the background oriented schlieren (BOS) technique with the unique imaging capabilities of a plenoptic camera, termed plenoptic BOS, is introduced as a new addition to the family of schlieren techniques. Compared to conventional single camera BOS, plenoptic BOS is capable of sampling multiple lines-of-sight simultaneously. Displacements from each line-of-sight are collectively used to build a four-dimensional displacement field, which is a vector function structured similarly to the original light field captured in a raw plenoptic image. The displacement field is used to render focused BOS images, which qualitatively are narrow depth of field slices of the density gradient field. Unlike focused schlieren methods that require manually changing the focal plane during data collection, plenoptic BOS synthetically changes the focal plane position during post-processing, such that all focal planes are captured in a single snapshot. Through two different experiments, this work demonstrates that plenoptic BOS is capable of isolating narrow depth of field features, qualitatively inferring depth, and quantitatively estimating the location of disturbances in 3D space. Such results motivate future work to transition this single-camera technique towards quantitative reconstructions of 3D density fields.
Analysis of filament statistics in fast camera data on MAST
NASA Astrophysics Data System (ADS)
Farley, Tom; Militello, Fulvio; Walkden, Nick; Harrison, James; Silburn, Scott; Bradley, James
2017-10-01
Coherent filamentary structures have been shown to play a dominant role in turbulent cross-field particle transport [D'Ippolito 2011]. An improved understanding of filaments is vital in order to control scrape-off layer (SOL) density profiles and thus control first-wall erosion, impurity flushing, and the coupling of radio frequency heating in future devices. The Elzar code [T. Farley, 2017 in prep.] is applied to MAST data. The code uses information about the magnetic equilibrium to calculate the intensity of light emission along field lines as seen in the camera images, as a function of the field lines' radial and toroidal locations at the mid-plane. In this way a `pseudo-inversion' of the intensity profiles in the camera images is achieved, from which filaments can be identified and measured. In this work, a statistical analysis of the intensity fluctuations along field lines in the camera field of view is performed using techniques similar to those typically applied in standard Langmuir probe analyses. These filament statistics are interpreted in terms of the theoretical ergodic framework presented by F. Militello & J. T. Omotani, 2016, in order to better understand how time-averaged filament dynamics produce the more familiar SOL density profiles. This work has received funding from the RCUK Energy programme (Grant Number EP/P012450/1), from Euratom (Grant Agreement No. 633053) and from the EUROfusion consortium.
A high-sensitivity EM-CCD camera for the open port telescope cavity of SOFIA
NASA Astrophysics Data System (ADS)
Wiedemann, Manuel; Wolf, Jürgen; McGrotty, Paul; Edwards, Chris; Krabbe, Alfred
2016-08-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) has three target acquisition and tracking cameras. All three imagers originally used the same cameras, which did not meet the sensitivity requirements due to low quantum efficiency and high dark current. The Focal Plane Imager (FPI) suffered the most from high dark current, since it operated in the aircraft cabin at room temperature without active cooling. In early 2013 the FPI was upgraded with an iXon3 888 from Andor Technology. Compared to the original cameras, the iXon3 has a factor of five higher QE, thanks to its back-illuminated sensor, and orders of magnitude lower dark current, due to a thermo-electric cooler and "inverted mode operation." This leads to an increase in sensitivity of about five stellar magnitudes. The Wide Field Imager (WFI) and Fine Field Imager (FFI) shall now be upgraded with equally sensitive cameras. However, they are exposed to stratospheric conditions in flight (typical conditions: T ≈ -40 °C, p ≈ 0.1 atm) and there are no off-the-shelf CCD cameras with the performance of an iXon3 suited for these conditions. Therefore, Andor Technology and the Deutsches SOFIA Institut (DSI) are jointly developing and qualifying a camera for these conditions, based on the iXon3 888. The changes include replacement of electrical components with MIL-SPEC or industrial grade components and various system optimizations: a new data interface that allows image data transmission over 30 m of cable from the camera to the controller, a new power converter in the camera to generate all necessary operating voltages locally, and a new housing that fulfills airworthiness requirements. A prototype of this camera has been built and tested in an environmental test chamber at temperatures down to T = -62 °C and pressure equivalent to 50,000 ft altitude. In this paper, we report on the development of the camera and present results from the environmental testing.
Applications of Action Cam Sensors in the Archaeological Yard
NASA Astrophysics Data System (ADS)
Pepe, M.; Ackermann, S.; Fregonese, L.; Fassi, F.; Adami, A.
2018-05-01
In recent years, special digital cameras called "action cameras" or "action cams" have become popular due to their low price, small size, light weight, robustness, and capacity to record videos and photos even in extreme environmental conditions. Indeed, these cameras have been designed mainly to capture sport actions and to work in the presence of dirt, bumps, or water, and at a wide range of external temperatures. High-resolution digital single-lens reflex (DSLR) cameras are usually preferred in the photogrammetric field: beyond sensor resolution, the combination of such cameras with fixed lenses with low distortion is preferred for accurate 3D measurements. Action cameras, by contrast, have small wide-angle lenses, with lower performance in terms of sensor resolution, lens quality, and distortion. However, considering the ability of action cameras to acquire images under conditions that may be difficult for standard DSLR cameras, and their lower price, they can be considered a possible and interesting approach for documenting the state of the places during archaeological excavation activities. In this paper, the influence of lens radial distortion and chromatic aberration on this type of camera in self-calibration mode is investigated, and their application in the field of Cultural Heritage is evaluated and discussed. Using a suitable technique, it has been possible to improve the accuracy of the 3D model obtained from action cam images. Case studies show the quality and utility of this type of sensor in the survey of archaeological artefacts.
Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit
NASA Technical Reports Server (NTRS)
Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.
2013-01-01
This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies on orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie point data is required.
Camera-based micro interferometer for distance sensing
NASA Astrophysics Data System (ADS)
Will, Matthias; Schädel, Martin; Ortlepp, Thomas
2017-12-01
Interference of light provides a high-precision, non-contact, and fast method for measuring distances, and this technology therefore dominates in high-precision systems. In the field of compact sensors, however, capacitive, resistive, or inductive methods dominate, because an interferometric system has to be precisely adjusted and needs high mechanical stability. As a result, interferometers are usually high-priced, complex systems not suitable as compact sensors. To overcome this, we developed a new concept for a very small interferometric sensing setup. We combine a miniaturized laser unit, a low-cost pixel detector, and machine vision routines to realize a demonstrator for a Michelson-type micro interferometer. We demonstrate a low-cost sensor smaller than 1 cm³ including all electronics, with distance sensing up to 30 cm and resolution in the nm range.
Improved iris localization by using wide and narrow field of view cameras for iris recognition
NASA Astrophysics Data System (ADS)
Kim, Yeong Gon; Shin, Kwang Yong; Park, Kang Ryoung
2013-10-01
Biometrics is a method of identifying individuals by their physiological or behavioral characteristics. Among biometric identifiers, iris recognition has been widely used for various applications that require a high level of security. When a conventional iris recognition camera is used, the size and position of the iris region in a captured image vary according to the X, Y position of a user's eye and the Z distance between the user and the camera. The search area of the iris detection algorithm therefore increases, which inevitably decreases both detection speed and accuracy. To solve these problems, we propose a new method of iris localization that uses wide field of view (WFOV) and narrow field of view (NFOV) cameras. Our study is new compared to previous studies in the following four ways. First, the device used in our research acquires three images, one of the face and one of each iris, using one WFOV and two NFOV cameras simultaneously. The relation between the WFOV and NFOV cameras is determined by simple geometric transformation without complex calibration. Second, the Z distance (between a user's eye and the iris camera) is estimated based on the iris size in the WFOV image and anthropometric data on the size of the human iris. Third, the accuracy of the geometric transformation between the WFOV and NFOV cameras is enhanced by using multiple transformation matrices according to the Z distance. Fourth, the search region for iris localization in the NFOV image is significantly reduced based on the detected iris region in the WFOV image and the transformation matrix corresponding to the estimated Z distance. Experimental results showed that the performance of the proposed iris localization method is better than that of conventional methods in terms of accuracy and processing time.
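Under a pinhole model, the Z-distance estimate in the second point reduces to similar triangles between the anthropometric iris diameter and its image size; a minimal sketch with a hypothetical focal length and measurement follows.

```python
# Pinhole-model sketch of the Z-distance step: Z = f * D_real / d_image.
# The focal length (pixels) and measured iris diameter are hypothetical;
# ~11.7 mm is a typical anthropometric human iris diameter.
def z_distance_mm(focal_px, iris_diameter_px, iris_diameter_mm=11.7):
    return focal_px * iris_diameter_mm / iris_diameter_px

print(z_distance_mm(focal_px=1200, iris_diameter_px=40))  # ~351 mm
```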
Spherical visual system for real-time virtual reality and surveillance
NASA Astrophysics Data System (ADS)
Chen, Su-Shing
1998-12-01
A spherical visual system has been developed for full-field, web-based surveillance, virtual reality, and roundtable video conferencing. The hardware is a CycloVision parabolic lens mounted on a video camera; the software was developed at the University of Missouri-Columbia. The mathematical model was developed by Su-Shing Chen and Michael Penna in the 1980s. The parabolic image, capturing the full (360 degree) hemispherical field of view (except the north pole), is transformed into the spherical model of Chen and Penna. In the spherical model, images are invariant under the rotation group and are easily mapped to the image plane tangent to any point on the sphere; the projected image is exactly what a conventional camera produces at that angle. Thus a real-time full spherical field video camera is realized by using two parabolic lenses.
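A crude sketch of the first step such a system needs, resampling the parabolic "donut" image into a panoramic strip with a polar-to-rectangular mapping, is given below. The exact sphere mapping of the Chen-Penna model is not reproduced here, and the image center and radii are hypothetical.

```python
# Nearest-neighbour polar unwrap of a catadioptric "donut" image into a
# panoramic strip. Center (cx, cy) and inner/outer radii are hypothetical.
import numpy as np

def unwrap(img, cx, cy, r_in, r_out, width=1024, height=256):
    thetas = np.linspace(0, 2 * np.pi, width, endpoint=False)
    radii = np.linspace(r_in, r_out, height)
    xs = (cx + np.outer(radii, np.cos(thetas))).astype(int)
    ys = (cy + np.outer(radii, np.sin(thetas))).astype(int)
    return img[ys.clip(0, img.shape[0] - 1), xs.clip(0, img.shape[1] - 1)]
```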
Final Optical Design of PANIC, a Wide-Field Infrared Camera for CAHA
NASA Astrophysics Data System (ADS)
Cárdenas, M. C.; Gómez, J. Rodríguez; Lenzen, R.; Sánchez-Blanco, E.
We present the final optical design of PANIC (PAnoramic Near Infrared Camera for Calar Alto), a wide-field infrared imager for the Ritchey-Chrétien focus of the Calar Alto 2.2 m telescope. This will be the first instrument built under the German-Spanish consortium that manages the Calar Alto observatory. The camera optical design is a folded single optical train that images the sky onto the focal plane with a plate scale of 0.45 arcsec per 18 μm pixel. The optical design produces a well-defined internal pupil, allowing the thermal background to be reduced by a cryogenic pupil stop. A mosaic of four Teledyne Hawaii-2RG 2k × 2k detectors will give a field of view of 31.9 arcmin × 31.9 arcmin.
A filter spectrometer concept for facsimile cameras
NASA Technical Reports Server (NTRS)
Jobson, D. J.; Kelly, W. L., IV; Wall, S. D.
1974-01-01
A concept which utilizes interference filters and photodetector arrays to integrate spectrometry with the basic imagery function of a facsimile camera is described and analyzed. The analysis considers spectral resolution, instantaneous field of view, spectral range, and signal-to-noise ratio. Specific performance predictions for the Martian environment, the Viking facsimile camera design parameters, and a signal-to-noise ratio for each spectral band equal to or greater than 256 indicate the feasibility of obtaining a spectral resolution of 0.01 micrometers with an instantaneous field of view of about 0.1 deg in the 0.425 micrometers to 1.025 micrometers range using silicon photodetectors. A spectral resolution of 0.05 micrometers with an instantaneous field of view of about 0.6 deg in the 1.0 to 2.7 micrometers range using lead sulfide photodetectors is also feasible.
NASA Astrophysics Data System (ADS)
Chibunichev, A. G.; Kurkov, V. M.; Smirnov, A. V.; Govorov, A. V.; Mikhalin, V. A.
2016-10-01
Nowadays, aerial survey technology based on unmanned aerial vehicles (UAVs) is becoming more popular. UAVs physically cannot carry professional aerial cameras, so consumer digital cameras are used instead. Such cameras usually have a rolling, lamellar, or global shutter. Quite often, manufacturers and users of such aerial systems do not perform camera calibration and rely on self-calibration techniques instead. However, this approach has not been confirmed by extensive theoretical and practical research. In this paper we compare the results of phototriangulation based on laboratory, test-field, and self-calibration. For these investigations we use the Zaoksky test area as an experimental field, which provides a dense network of targeted and natural control points. Racurs PHOTOMOD and Agisoft PhotoScan software were used in the evaluation. The results of the investigations, conclusions, and practical recommendations are presented in this article.
Automatic multi-camera calibration for deployable positioning systems
NASA Astrophysics Data System (ADS)
Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan
2012-06-01
Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multicamera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
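A minimal sketch of the pairwise step follows, using OpenCV's RANSAC-wrapped 5-point estimator for intrinsically calibrated cameras and the standard pose recovery; the intrinsic matrix and the matched points below are placeholders for real correspondences between two cameras.

```python
# Essential matrix + relative pose between one camera pair (5-point, RANSAC).
# K and the point matches are placeholders; use real correspondences.
import cv2
import numpy as np

K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])  # hypothetical intrinsics
pts1 = np.random.rand(50, 2) * [1920, 1080]  # matched pixel coords, camera A
pts2 = np.random.rand(50, 2) * [1920, 1080]  # matched pixel coords, camera B

E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)  # t recovered up to scale
```

Note that the translation is recovered only up to scale, which is why such pairwise estimates are typically chained and scaled against a known baseline or scene measurement before use in 3D positioning.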
Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid
2016-06-13
Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance, and multispectral military systems.
The suitability of lightfield camera depth maps for coordinate measurement applications
NASA Astrophysics Data System (ADS)
Rangappa, Shreedhar; Tailor, Mitul; Petzing, Jon; Kinnell, Peter; Jackson, Michael
2015-12-01
Plenoptic cameras can capture 3D information in one exposure without the need for structured illumination, allowing grey-scale depth maps of the captured image to be created. The Lytro, a consumer-grade plenoptic camera, provides a cost-effective method of measuring the depth of multiple objects under controlled lighting conditions. In this research, camera control variables, environmental sensitivity, image distortion characteristics, and the effective working range of two first-generation Lytro cameras were evaluated. In addition, a calibration process was created for the Lytro cameras to deliver three-dimensional output depth maps represented in SI units (metres). The results show depth accuracy and repeatability of +10.0 mm to -20.0 mm, and 0.5 mm, respectively. For the lateral X and Y coordinates, the accuracy was +1.56 μm to -2.59 μm and the repeatability was 0.25 μm.
NASA Astrophysics Data System (ADS)
Ceylan Koydemir, Hatice; Bogoch, Isaac I.; Tseng, Derek; Ephraim, Richard K. D.; Duah, Evans; Tee, Joseph; Andrews, Jason R.; Ozcan, Aydogan
2016-03-01
Schistosomiasis is a parasitic and neglected tropical disease that affects more than 200 million people across the world, with school-aged children disproportionately affected. Here we present field-testing results of a handheld and cost-effective smartphone-based microscope in rural Ghana, Africa, for point-of-care diagnosis of S. haematobium infection. In this mobile-phone microscope, a custom-designed 3D-printed opto-mechanical attachment (~150 g) is placed in contact with the smartphone camera lens, creating an imaging system with a half-pitch resolution of ~0.87 µm. This unit includes an external lens (also taken from a mobile-phone camera), a sample tray, a z-stage to adjust the focus, two light-emitting diodes (LEDs), and two diffusers for uniform illumination of the sample. In our field testing, 60 urine samples collected from children were used, where the prevalence of the infection was 72.9%. After concentration of the sample by centrifugation, the sediment was placed on a glass slide and S. haematobium eggs were first identified/quantified using conventional benchtop microscopy by an expert diagnostician; a second expert, blinded to these results, then determined the presence/absence of eggs using our mobile-phone microscope. Compared to conventional microscopy, our mobile-phone microscope had a diagnostic sensitivity of 72.1%, specificity of 100%, positive predictive value of 100%, and negative predictive value of 57.1%. Furthermore, our mobile-phone platform demonstrated a sensitivity of 65.7% and 100% for low-intensity infections (≤50 eggs/10 mL urine) and high-intensity infections (>50 eggs/10 mL urine), respectively. We believe that this cost-effective and field-portable mobile-phone microscope may play an important role in the diagnosis of schistosomiasis and various other global health challenges.
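The reported rates follow from a standard 2x2 confusion table. The sketch below uses counts inferred to reproduce the paper's figures (31 true positives, 12 false negatives, 16 true negatives, 0 false positives over 59 evaluable samples); these counts are an inference for illustration, not values quoted by the authors.

```python
# Standard diagnostic metrics from a 2x2 confusion table. The counts are
# inferred to match the reported rates, not taken from the paper.
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),   # 31/43 = 72.1%
        "specificity": tn / (tn + fp),   # 16/16 = 100%
        "ppv": tp / (tp + fp),           # 31/31 = 100%
        "npv": tn / (tn + fn),           # 16/28 = 57.1%
    }

print(diagnostic_metrics(tp=31, fp=0, tn=16, fn=12))
```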
Thompson, Alison L.; Thorp, Kelly R.; Conley, Matthew; Andrade-Sanchez, Pedro; Heun, John T.; Dyer, John M.; White, Jeffery W.
2018-01-01
Field-based high-throughput phenotyping is an emerging approach to quantify difficult, time-sensitive plant traits in relevant growing conditions. Proximal sensing carts represent an alternative platform to more costly high-clearance tractors for phenotyping dynamic traits in the field. A proximal sensing cart, and specifically a deployment protocol, were developed to phenotype traits related to drought tolerance in the field. The cart sensor package included an infrared thermometer, an ultrasonic transducer, a multi-spectral reflectance sensor, a weather station, and RGB cameras. The cart deployment protocol was evaluated on 35 upland cotton (Gossypium hirsutum L.) entries grown in 2017 at Maricopa, AZ, United States. Experimental plots were grown under well-watered and water-limited conditions using a (0,1) alpha lattice design and evaluated in June and July. Total collection time for the 0.87 hectare field averaged 2 h and 27 min and produced 50.7 MB and 45.7 GB of data from the sensors and RGB cameras, respectively. Canopy temperature, crop water stress index (CWSI), canopy height, normalized difference vegetation index (NDVI), and leaf area index (LAI) differed among entries and showed an interaction with the water regime (p < 0.05). Broad-sense heritability (H²) estimates ranged from 0.097 to 0.574 across all phenotypes and collections. Canopy cover estimated from RGB images increased with counts of established plants (r = 0.747, p = 0.033). Based on the cart-derived phenotypes, three entries were found to have improved drought-adaptive traits compared to a locally adapted cultivar. These results indicate that the deployment protocol developed for the cart and sensor package can measure multiple traits rapidly and accurately to characterize complex plant traits under drought conditions.
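Two of the cart-derived phenotypes have simple closed forms: NDVI from the multispectral reflectance bands, and an empirical CWSI from canopy temperature against wet (non-stressed) and dry (non-transpiring) baselines. The sketch below shows one common formulation; whether it matches this paper's exact CWSI implementation is an assumption, and all input values are invented.

```python
# NDVI and an empirical CWSI; input values are invented illustrations.
import numpy as np

def ndvi(nir, red):
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def cwsi(t_canopy, t_wet, t_dry):
    """0 = fully transpiring (cool) canopy, 1 = fully stressed (dry) canopy."""
    return (t_canopy - t_wet) / (t_dry - t_wet)

print(ndvi(0.45, 0.08))                                 # ~0.70
print(cwsi(t_canopy=32.0, t_wet=28.0, t_dry=38.0))      # 0.4
```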
An alternative approach to depth of field which avoids the blur circle and uses the pixel pitch
NASA Astrophysics Data System (ADS)
Schuster, Norbert
2015-09-01
Modern thermal imaging systems increasingly use uncooled detectors. High-volume applications work with detectors that have a reduced pixel count (typically between 200x150 and 640x480), which limits the applicability of modern image treatment procedures like wavefront coding. On the other hand, uncooled detectors demand lenses with fast F-numbers near 1.0. What are the limits on resolution if the target to be analyzed changes its distance to the camera system? The aim of implementing lens arrangements without any focusing mechanism demands a deeper quantification of the Depth of Field problem. The proposed Depth of Field approach avoids the classic "accepted image blur circle". It is based on a camera-specific depth of focus, which is transformed into object space by paraxial relations. The traditional Rayleigh criterion is based on the unaberrated point spread function and delivers a first-order relation for the depth of focus; hence, neither the actual lens resolution nor the detector impact is considered. The camera-specific depth of focus respects several camera properties: lens aberrations at the actual F-number, detector size, and pixel pitch. The through-focus MTF, considered at the detector's Nyquist frequency, is the basis of the camera-specific depth of focus; it has a nearly symmetric course around the position of sharp imaging. The camera-specific depth of focus is the axial distance in front of and behind the sharp image plane within which the through-focus MTF remains above 0.25. This camera-specific depth of focus is transferred into object space by paraxial relations. The result is a generally applicable Depth of Field diagram that can be applied to lenses realizing a lateral magnification range of -0.05…0. Easy-to-handle formulas relate the hyperfocal distance to the borders of the Depth of Field as a function of the sharp distance; these relations are in line with the classical Depth of Field theory. Thermal pictures taken by different IR camera cores illustrate the new approach. The frequently requested graph "MTF versus distance" chooses half the Nyquist frequency as reference. The paraxial transfer of the through-focus MTF into object space distorts the MTF curve: a hard drop at distances closer than the sharp distance, a smooth drop at further distances. The formula of a general Diffraction-Limited Through-Focus MTF (DLTF) is derived, so that arbitrary detector-lens combinations can be discussed. Free variables in this analysis are the waveband, the aperture-based F-number (lens), and the pixel pitch (detector). The DLTF discussion provides physical limits and technical requirements. Detector development with pixel pitches smaller than the captured wavelength in the LWIR region poses a special challenge for optical design.
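For reference, the classical relations the abstract says its camera-specific version stays in line with are, with f the focal length, N the F-number, s the sharp distance, and c the accepted blur (which this approach would tie to the pixel pitch p, e.g. c ≈ 2p; that tie is an assumption here, not a value from the abstract):

```latex
H = \frac{f^{2}}{N\,c} + f, \qquad
d_{\mathrm{near}} \approx \frac{H\,s}{H + s}, \qquad
d_{\mathrm{far}} \approx \frac{H\,s}{H - s} \qquad (f \ll s < H)
```

so that the far limit diverges as the sharp distance s approaches the hyperfocal distance H, recovering the familiar fixed-focus behavior the paper's focus-free lens arrangements rely on.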
Inflight Radiometric Calibration of New Horizons' Multispectral Visible Imaging Camera (MVIC)
NASA Technical Reports Server (NTRS)
Howett, C. J. A.; Parker, A. H.; Olkin, C. B.; Reuter, D. C.; Ennico, K.; Grundy, W. M.; Graps, A. L.; Harrison, K. P.; Throop, H. B.; Buie, M. W.;
2016-01-01
We discuss two semi-independent calibration techniques used to determine the inflight radiometric calibration for the New Horizons Multispectral Visible Imaging Camera (MVIC). The first calibration technique compares the measured number of counts (DN) observed from a number of well-calibrated stars to those predicted using the component-level calibration. The ratio of these values provides a multiplicative factor that allows a conversion from the preflight calibration to the more accurate inflight one, for each detector. The second calibration technique is a channel-wise relative radiometric calibration for MVIC's blue, near-infrared, and methane color channels using Hubble and New Horizons observations of Charon and scaling from the red channel stellar calibration. Both calibration techniques produce very similar results (better than 7% agreement), providing strong validation for the techniques used. Since the stellar calibration described here can be performed without a color target in the field of view and covers all of MVIC's detectors, this calibration was used to provide the radiometric keyword values delivered by the New Horizons project to the Planetary Data System (PDS). These keyword values allow each observation to be converted from counts to physical units; a description of how these keyword values were generated is included. Finally, mitigation techniques adopted for the gain drift observed in the near-infrared detector and one of the panchromatic framing cameras are also discussed.
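The stellar technique reduces, per detector, to a ratio of measured to predicted count rates averaged over the calibration stars; a minimal sketch follows. The arrays are placeholders, not New Horizons values, and averaging the per-star ratios is an illustrative choice rather than the project's documented procedure.

```python
# Per-detector multiplicative calibration factor from stellar observations.
# Count rates below are placeholders, not MVIC data.
import numpy as np

measured_dn = np.array([10520.0, 8450.0, 15210.0])   # observed counts/s per star
predicted_dn = np.array([11000.0, 8800.0, 15900.0])  # from component-level calibration

factors = measured_dn / predicted_dn
inflight_factor = factors.mean()  # converts the preflight calibration to inflight
```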
Hubble Space Telescope: Wide field and planetary camera instrument handbook. Version 2.1
NASA Technical Reports Server (NTRS)
Griffiths, Richard (Editor)
1990-01-01
An overview is presented of the development and construction of the Wide Field and Planetary Camera (WF/PC). The WF/PC is a dual two-dimensional spectrophotometer with rudimentary polarimetric and transmission-grating capabilities. The instrument operates from 1150 to 11000 A with a resolution of 0.1 arcsec per pixel or 0.043 arcsec per pixel. Data products and standard calibration methods are briefly summarized.
SHOK—The First Russian Wide-Field Optical Camera in Space
NASA Astrophysics Data System (ADS)
Lipunov, V. M.; Gorbovskoy, E. S.; Kornilov, V. G.; Panasyuk, M. I.; Amelushkin, A. M.; Petrov, V. L.; Yashin, I. V.; Svertilov, S. I.; Vedenkin, N. N.
2018-02-01
Two fast, fixed, very wide-field SHOK cameras are installed onboard the Lomonosov spacecraft. The main goal of this experiment is the observation of GRB optical emission before, synchronously with, and after the gamma-ray emission. The field of view of each camera is placed within the gamma-ray burst detection area of the other devices located onboard the Lomonosov spacecraft. SHOK provides measurements of optical emission with a magnitude limit of ~9-10 mag on a single frame with an exposure of 0.2 seconds. The device is designed for continuous sky monitoring at optical wavelengths over a very wide field of view (1000 square degrees per camera), and for the detection and localization of fast time-varying (transient) optical sources on the celestial sphere, including provisional and synchronous time recording of optical emission from the gamma-ray burst error boxes detected by the BDRG device, implemented by a control signal (alert trigger) from the BDRG. The Lomonosov spacecraft carries two identical devices, SHOK1 and SHOK2. The core of each SHOK device is a fast 11-megapixel CCD. Each SHOK device is a monoblock consisting of an optical-emission observation node, an electronics node, elements of the mechanical construction, and the body.
ACT-Vision: active collaborative tracking for multiple PTZ cameras
NASA Astrophysics Data System (ADS)
Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet
2009-04-01
We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
Depth estimation using a lightfield camera
NASA Astrophysics Data System (ADS)
Roper, Carissa
The latest innovation in camera design has come in the form of the lightfield, or plenoptic, camera, which captures 4-D radiance data rather than just the 2-D scene image via microlens arrays. With the spatial and angular light ray data now recorded on the camera sensor, it is feasible to construct algorithms that can estimate depth in different portions of a given scene. There are limitations to the precision due to hardware structure and the sheer number of scene variations that can occur. In this thesis, the potential of digital image analysis and spatial filtering to extract depth information is tested on the commercially available plenoptic camera.
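One common way to turn plenoptic data into depth is depth-from-focus over a stack of computationally refocused images; the sketch below assumes such a stack has already been rendered (e.g. with a vendor SDK) and simply picks the sharpest refocus plane per pixel. It is a generic illustration, not the thesis's specific method:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def depth_from_focus(stack, window=7):
        """Pick, per pixel, the refocus plane with the highest local contrast.

        stack : (n_planes, H, W) array of images refocused at increasing depths.
        Returns an (H, W) integer map of best-focus plane indices.
        """
        sharpness = []
        for img in stack:
            gy, gx = np.gradient(img.astype(float))
            g2 = gx**2 + gy**2                          # gradient energy as focus measure
            sharpness.append(uniform_filter(g2, window))  # local average
        return np.argmax(np.stack(sharpness), axis=0)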
3D display for enhanced tele-operation and other applications
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Pezzaniti, J. Larry; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Bodenhamer, Andrew; Pettijohn, Bradley; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott
2010-04-01
In this paper, we report on the use of a 3D vision field upgrade kit for TALON robot consisting of a replacement flat panel stereoscopic display, and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. A replacement display, replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.
Optical Design of the Camera for Transiting Exoplanet Survey Satellite (TESS)
NASA Technical Reports Server (NTRS)
Chrisp, Michael; Clark, Kristin; Primeau, Brian; Dalpiaz, Michael; Lennon, Joseph
2015-01-01
The optical design of the wide field of view refractive camera, 34 degrees diagonal field, for the TESS payload is described. This fast f/1.4 cryogenic camera, operating at -75 C, has no vignetting for maximum light gathering within the size and weight constraints. Four of these cameras capture full frames of star images for photometric searches of planet crossings. The optical design evolution, from the initial Petzval design, took advantage of Forbes aspheres to develop a hybrid design form. This maximized the correction from the two aspherics resulting in a reduction of average spot size by sixty percent in the final design. An external long wavelength pass filter was replaced by an internal filter coating on a lens to save weight, and has been fabricated to meet the specifications. The stray light requirements were met by an extended lens hood baffle design, giving the necessary off-axis attenuation.
The multifocus plenoptic camera
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Lumsdaine, Andrew
2012-01-01
The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For the purpose of digital refocusing (one of the important applications) the depth of field needs to be large, but there are fundamental optical limitations to this. The solution to this problem is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a really wide range of digital refocusing can be achieved. This paper presents our theory and results from implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.
Research on three-dimensional reconstruction method based on binocular vision
NASA Astrophysics Data System (ADS)
Li, Jinlin; Wang, Zhihui; Wang, Minjun
2018-03-01
Binocular stereo vision is a central and challenging topic in computer vision, with broad application prospects in fields such as aerial mapping, vision navigation, motion analysis and industrial inspection. In this paper, research is carried out into binocular stereo camera calibration, image feature extraction and stereo matching. In the calibration module, the intrinsic parameters of each camera are obtained using Zhang Zhengyou's checkerboard method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are used respectively, and their performance is compared. Once feature point matching is completed, the correspondence between matched image points and 3D object points can be established using the calibrated camera parameters, yielding the 3D information.
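A minimal sketch of the matching-and-reconstruction back end in OpenCV, assuming rectified images and a disparity-to-depth matrix Q already produced by calibration (file names and SGBM parameters are illustrative; SURF itself requires opencv-contrib, so only the SGBM path is shown):

    import cv2
    import numpy as np

    # Rectified left/right pair (hypothetical file names).
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Semi-Global Block Matching, as in the paper; parameters are illustrative.
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0, numDisparities=128, blockSize=5,
        P1=8 * 5 * 5, P2=32 * 5 * 5, uniquenessRatio=10,
    )
    disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point scale

    # 4x4 disparity-to-depth matrix from stereo calibration (e.g. cv2.stereoRectify).
    Q = np.load("Q.npy")
    points_3d = cv2.reprojectImageTo3D(disparity, Q)  # (H, W, 3) in world units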
The TolTEC Camera for the LMT Telescope
NASA Astrophysics Data System (ADS)
Bryan, Sean
2018-01-01
TolTEC is a new camera being built for the 50-meter Large Millimeter-wave Telescope (LMT) on Sierra Negra in Puebla, Mexico. The instrument will discover and characterize distant galaxies by detecting the thermal emission of dust heated by starlight. The polarimetric capabilities of the camera will measure magnetic fields in star-forming regions in the Milky Way. The optical design of the camera uses mirrors, lenses, and dichroics to simultaneously couple a 4 arcminute diameter field of view onto three single-band focal planes at 150, 220, and 280 GHz. The 7000 polarization-selective detectors are single-band horn-coupled LEKID detectors fabricated at NIST. A rotating half wave plate operates at ambient temperature to modulate the polarized signal. In addition to the galactic and extragalactic surveys already planned, TolTEC installed at the LMT will provide open observing time to the community.
Television Cameras in Congress. Freedom of Information Center Report No. 483.
ERIC Educational Resources Information Center
Watt, Phyllis
While the United States Senate debates the merits of televising its proceedings, it might consider as a model the House of Representatives, which has televised floor activities since 1979 with no dramatic changes in those activities or in members' behavior. The House system consists of inconspicuously placed cameras and microphones operated by…
Geomorphologic mapping of the lunar crater Tycho and its impact melt deposits
NASA Astrophysics Data System (ADS)
Krüger, T.; van der Bogert, C. H.; Hiesinger, H.
2016-07-01
Using SELENE/Kaguya Terrain Camera and Lunar Reconnaissance Orbiter Camera (LROC) data, we produced a new, high-resolution (10 m/pixel), geomorphological and impact melt distribution map for the lunar crater Tycho. The distal ejecta blanket and crater rays were investigated using LROC wide-angle camera (WAC) data (100 m/pixel), while the fine-scale morphologies of individual units were documented using high resolution (∼0.5 m/pixel) LROC narrow-angle camera (NAC) frames. In particular, Tycho shows a large coherent melt sheet on the crater floor, melt pools and flows along the terraced walls, and melt pools on the continuous ejecta blanket. The crater floor of Tycho exhibits three distinct units, distinguishable by their elevation and hummocky surface morphology. The distribution of impact melt pools and ejecta, as well as topographic asymmetries, support the formation of Tycho as an oblique impact from the W-SW. The asymmetric ejecta blanket, significantly reduced melt emplacement uprange, and the depressed uprange crater rim at Tycho suggest an impact angle of ∼25-45°.
Exact optics - III. Schwarzschild's spectrograph camera revised
NASA Astrophysics Data System (ADS)
Willstrop, R. V.
2004-03-01
Karl Schwarzschild identified a system of two mirrors, each defined by conic sections, free of third-order spherical aberration, coma and astigmatism, and with a flat focal surface. He considered it impractical, because the field was too restricted. This system was rediscovered as a quadratic approximation to one of Lynden-Bell's `exact optics' designs which have wider fields. Thus the `exact optics' version has a moderate but useful field, with excellent definition, suitable for a spectrograph camera. The mirrors are strongly aspheric in both the Schwarzschild design and the exact optics version.
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2011-01-01
A selection of hands-on experiments from different fields of physics, which happen too fast for the eye or video cameras to properly observe and analyse the phenomena, is presented. They are recorded and analysed using modern high speed cameras. Two types of cameras were used: the first were rather inexpensive consumer products such as Casio…
Wang, Benquan; Toslak, Devrim; Alam, Minhaj Nur; Chan, R V Paul; Yao, Xincheng
2018-06-08
In conventional fundus photography, trans-pupillary illumination delivers illuminating light to the interior of the eye through the peripheral area of the pupil, and only the central part of the pupil can be used for collecting imaging light. Therefore, the field of view of conventional fundus cameras is limited, and pupil dilation is required for evaluating the retinal periphery, which is frequently affected by diabetic retinopathy (DR), retinopathy of prematurity (ROP), and other chorioretinal conditions. We report here a nonmydriatic wide-field fundus camera employing trans-pars-planar illumination, which delivers illuminating light through the pars plana, an area outside of the pupil. Trans-pars-planar illumination frees the entire pupil for imaging purposes only, so wide-field fundus photography can be readily achieved with less pupil dilation. For proof-of-concept testing, a prototype instrument built entirely from off-the-shelf components was demonstrated, achieving 90° fundus view coverage in single-shot fundus images without the need for pharmacologic pupil dilation.
Miniature wide field-of-view star trackers for spacecraft attitude sensing and navigation
NASA Technical Reports Server (NTRS)
Mccarty, William; Curtis, Eric; Hull, Anthony; Morgan, William
1993-01-01
Introducing a family of miniature, wide field-of-view star trackers for low cost, high performance spacecraft attitude determination and navigation applications. These devices, derivative of the WFOV Star Tracker Camera developed cooperatively by OCA Applied Optics and the Lawrence Livermore National Laboratory for the Brilliant Pebbles program, offer a suite of options addressing a wide range of spacecraft attitude measurement and control requirements. These sensors employ much wider fields than are customary (ranging between 20 and 60 degrees) to assure enough bright stars for quick and accurate attitude determinations without long integration intervals. The key benefits of this approach are light weight, low power, reduced data processing loads and high information carrier rates for wide ACS bandwidths. Devices described range from the proven OCA/LLNL WFOV Star Tracker Camera (a low-cost, space-qualified star-field imager utilizing the spacecraft's own computer for centroiding and position-finding), to a new autonomous subsystem design featuring dual-redundant cameras and completely self-contained star-field data processing with output quaternion solutions accurate to 100 micro-rad, 3 sigma, for stand-alone applications.
Multi-scale auroral observations in Apatity: winter 2010-2011
NASA Astrophysics Data System (ADS)
Kozelov, B. V.; Pilgaev, S. V.; Borovkov, L. P.; Yurov, V. E.
2012-03-01
Routine observations of the aurora are conducted in Apatity by a set of five cameras: (i) all-sky TV camera Watec WAT-902K (1/2"CCD) with Fujinon lens YV2.2 × 1.4A-SA2; (ii) two monochromatic cameras Guppy F-044B NIR (1/2"CCD) with Fujinon HF25HA-1B (1:1.4/25 mm) lens for 18° field of view and glass filter 558 nm; (iii) two color cameras Guppy F-044C NIR (1/2"CCD) with Fujinon DF6HA-1B (1:1.2/6 mm) lens for 67° field of view. The observational complex is aimed at investigating the spatial structure of the aurora, its scaling properties, and the vertical distribution in rayed forms. The cameras were installed on the main building of the Apatity division of the Polar Geophysical Institute and at the Apatity stratospheric range. The distance between these sites is nearly 4 km, so the identical monochromatic cameras can be used as a stereoscopic system. All cameras are accessible and operated remotely via the Internet. For the 2010-2011 winter season the equipment was upgraded with special blocks for GPS time triggering, temperature control and motorized pan-tilt rotation mounts. This paper presents the equipment, samples of observed events and the website with access to available data previews.
Multiplexed time-lapse photomicrography of cultured cells.
Heye, R R; Kiebler, E W; Arnzen, R J; Tolmach, L J
1982-01-01
A system of cinemicrography has been developed in which a single microscope and 16 mm camera are multiplexed to produce a time-lapse photographic record of many fields simultaneously. The field coordinates and focus are selected via a control console and entered into the memory of a dedicated microcomputer; they are then automatically recalled in sequence, thus permitting the photographing of additional fields in the interval between exposures of any given field. Sequential exposures of each field are isolated in separate sections of the film by means of a specially designed random-access camera that is also controlled by the microcomputer. The need to unscramble frames is thereby avoided, and the developed film can be directly analysed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Connor, J., Cradick, J.
In fiscal year 2012, it was desired to combine a visible spectrometer with a streak camera to form a diagnostic system for recording time-resolved spectra generated in light gas gun experiments. Acquiring a new spectrometer was an option, but it was possible to borrow an existing unit for a period of months, which would be sufficient to evaluate it both off-line and in gas gun shots. If it proved adequate for this application, it could be duplicated (with possible modifications); if not, such testing would help determine needed specifications for another model. This report describes the evaluation of the spectrometer (separately and combined with the NSTec LO streak camera) for this purpose. Spectral and temporal resolutions were of primary interest. The first was measured with a monochromatic laser input. The second was ascertained by the combination of the spectrometer's spatial resolution in the time-dispersive direction and the streak camera's intrinsic temporal resolution. System responsivity was also important, and this was investigated by measuring the response of the spectrometer/camera system to black body input (the gas gun experiments are expected to be similar to a 3000 K black body) as well as by measuring the throughput of the spectrometer separately over a range of visible light provided by a monochromator. The flat field (in wavelength) was also measured, and the final part of the evaluation was actual fielding on two gas gun shots. No firm specifications for spectral or temporal resolution were defined, but these were desired to be in the 1-2 nm and 1-2 ns ranges, respectively, if possible. As seen below, these values were met or nearly met, depending on wavelength. Other performance parameters were also not given threshold requirements, but the evaluations performed with laser, black body, and successful gas gun shots taken in aggregate indicate that the spectrometer is adequate for this purpose. Even so, some relatively minor opportunities for improvement were noticed, and these were documented for incorporation into any near-duplicate spectrometer that might be fabricated in the future.
Observations from Juno's Radiation Monitoring Investigation during Juno's Early Orbits
NASA Astrophysics Data System (ADS)
Becker, Heidi N.; Jorgensen, John L.; Adriani, Alberto; Mura, Alessandro; Connerney, John E. P.; Santos-Costa, Daniel; Bolton, Scott J.; Levin, Steven M.; Alexander, James W.; Adumitroaie, Virgil; Manor-Chapman, Emily A.; Daubar, Ingrid J.; Lee, Clifford; Benn, Mathias; Denver, Troelz; Sushkova, Julia; Cicchetti, Andrea; Noschese, Raffaella; Thorne, Richard M.
2017-04-01
Juno's Radiation Monitoring (RM) Investigation profiles Jupiter's >10-MeV electron environment throughout unexplored regions of the Jovian magnetosphere. RM's measurement approach involves active retrieval of the characteristic noise signatures from penetrating radiation in images obtained by Juno's heavily shielded star cameras and science instruments. Collaborative observation campaigns of "radiation image" collection and penetrating particle counts are conducted at targeted opportunities within the magnetosphere during each of Juno's perijove passes using the spacecraft Stellar Reference Unit, the Magnetic Field Investigation's Advanced Stellar Compass Imagers, and the JIRAM infrared imager. Simultaneous observations gathered from these very different instruments provide comparative spectral information due to substantial differences in instrument shielding. Juno's orbit provides a unique sampling of energetic particles within Jupiter's innermost radiation belts and polar regions. We present a survey of observations of the high energy radiation environment made by Juno's SRU and ASC star cameras and the JIRAM infrared imager during Juno's early perijove passes on August 27 and December 11, 2016; and February 2 and March 27, 2017. The JPL author's copyright for this publication is held by the California Institute of Technology. Government Sponsorship acknowledged.
Murine fundus fluorescein angiography: An alternative approach using a handheld camera.
Ehrenberg, Moshe; Ehrenberg, Scott; Schwob, Ouri; Benny, Ofra
2016-07-01
In today's modern pharmacologic approach to treating sight-threatening retinal vascular disorders, there is an increasing demand for a compact, mobile, lightweight and cost-effective fluorescein fundus camera to document the effects of antiangiogenic drugs on laser-induced choroidal neovascularization (CNV) in mice and other experimental animals. We have adapted the use of the Kowa Genesis Df Camera to perform Fundus Fluorescein Angiography (FFA) in mice. The 1 kg, 28 cm high camera has built-in barrier and exciter filters to allow digital FFA recording to a Compact Flash memory card. Furthermore, this handheld unit has a steady Indirect Lens Holder that firmly attaches to the main unit, that securely holds a 90 diopter lens in position, in order to facilitate appropriate focus and stability, for photographing the delicate central murine fundus. This easily portable fundus fluorescein camera can effectively record exceptional central retinal vascular detail in murine laser-induced CNV, while readily allowing the investigator to adjust the camera's position according to the variable head and eye movements that can randomly occur while the mouse is optimally anesthetized. This movable image recording device, with efficiencies of space, time, cost, energy and personnel, has enabled us to accurately document the alterations in the central choroidal and retinal vasculature following induction of CNV, implemented by argon-green laser photocoagulation and disruption of Bruch's Membrane, in the experimental murine model of exudative macular degeneration. Copyright © 2016 Elsevier Ltd. All rights reserved.
Low-cost mobile phone microscopy with a reversed mobile phone camera lens.
Switz, Neil A; D'Ambrosio, Michael V; Fletcher, Daniel A
2014-01-01
The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples.
Dense depth maps from correspondences derived from perceived motion
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2017-01-01
Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.
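As a hedged illustration of correspondences derived from flow (this uses generic OpenCV Farneback flow, not the authors' specific algorithm; the frame file names are made up):

    import cv2
    import numpy as np

    # Two consecutive frames of a sequence (hypothetical file names).
    prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

    # Dense optical flow field between the frames.
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)

    # Each pixel (x, y) in `prev` corresponds to (x + u, y + v) in `curr`,
    # giving a dense correspondence set without any feature detection.
    h, w = prev.shape
    ys, xs = np.mgrid[0:h, 0:w]
    corr_x = xs + flow[..., 0]   # x-coordinate of each pixel's match in curr
    corr_y = ys + flow[..., 1]   # y-coordinate of the match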
Optical Design of the LSST Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olivier, S S; Seppala, L; Gilmore, K
2008-07-16
The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror, modified Paul-Baker design, with an 8.4-meter primary mirror, a 3.4-m secondary, and a 5.0-m tertiary feeding a camera system that includes a set of broad-band filters and refractive corrector lenses to produce a flat focal plane with a field of view of 9.6 square degrees. Optical design of the camera lenses and filters is integrated with optical design of telescope mirrors to optimize performance, resulting in excellent image quality over the entire field from ultra-violet to near infra-red wavelengths. The LSST camera optics design consists of three refractive lenses with clear aperture diameters of 1.55 m, 1.10 m and 0.69 m and six interchangeable, broad-band filters with clear aperture diameters of 0.75 m. We describe the methodology for fabricating, coating, mounting and testing these lenses and filters, and we present the results of detailed tolerance analyses, demonstrating that the camera optics will perform to the specifications required to meet their performance goals.
Wide-Field-of-View, High-Resolution, Stereoscopic Imager
NASA Technical Reports Server (NTRS)
Prechtl, Eric F.; Sedwick, Raymond J.
2010-01-01
A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image: head-mounted displays are one likely implementation, and 3D projection technologies are another option under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
Detection of pointing errors with CMOS-based camera in intersatellite optical communications
NASA Astrophysics Data System (ADS)
Yu, Si-yuan; Ma, Jing; Tan, Li-ying
2005-01-01
For very high data rates, intersatellite optical communications hold a potential performance edge over microwave communications. Acquisition and tracking are critical because of the narrow transmit beam. In some systems a single array detector performs both spatial acquisition and tracking functions to detect pointing errors, so both a wide field of view and a high update rate are required. Past systems tended to employ CCD-based cameras with complex readout arrangements, but the additional complexity reduces the applicability of the array-based tracking concept. With the development of CMOS arrays, CMOS-based cameras can employ the single-array-detector concept. The area-of-interest feature of a CMOS-based camera allows a PAT system to read out only a portion of the array, and the maximum allowed frame rate increases as the size of the area of interest decreases, under certain conditions. A commercially available CMOS camera with 105 fps @ 640×480 is employed in our PAT simulation system, in which only a subset of the pixels is actually read out. Beam angles varying within the field of view are detected after passing through a Cassegrain telescope and a focusing optical system. Spot pixel values (8 bits per pixel) read out from the CMOS sensor are transmitted to a DSP subsystem via the IEEE 1394 bus, and pointing errors are computed with the centroid equation. Tests showed that: (1) 500 fps @ 100×100 is available in acquisition when the field of view is 1 mrad; (2) 3k fps @ 10×10 is available in tracking when the field of view is 0.1 mrad.
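The centroid equation referred to here is the standard intensity-weighted mean of the spot pixels; a minimal sketch (the ROI array and the boresight handling are illustrative, not taken from the paper):

    import numpy as np

    def spot_centroid(roi):
        """Intensity-weighted centroid of a spot image (the centroid equation).

        roi : 2-D array of pixel values from the CMOS area of interest.
        Returns (cx, cy) in pixel coordinates; subtracting the nominal
        boresight position and scaling by the plate scale yields the
        pointing error.
        """
        roi = roi.astype(float)
        total = roi.sum()
        ys, xs = np.indices(roi.shape)
        cx = (xs * roi).sum() / total
        cy = (ys * roi).sum() / total
        return cx, cy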
Hanada, Takashi; Katsuta, Shoichi; Yorozu, Atsunori; Maruyama, Koichi
2009-01-01
When using an HDR remote afterloading brachytherapy unit, results of treatment can be greatly influenced by both source position and treatment time. The purpose of this study is to obtain information on the source of the HDR remote afterloading unit, such as its position and time structure, with the use of a simple system consisting of a plastic scintillator block and a charge-coupled device (CCD) camera. The CCD camera was used for recording images of scintillation luminescence at a fixed rate of 30 frames per second in real time. The source position and time structure were obtained by analyzing the recorded images. For a preset source-step interval of 5 mm, the measured value of the source position was 5.0 ± 1.0 mm, with a pixel resolution of 0.07 mm in the recorded images. For a preset transit time of 30 s, the measured value was 30.0 ± 0.6 s, when the time resolution of the CCD camera was 1/30 s. This system enabled us to obtain the source dwell time and movement time. Therefore, parameters such as 192Ir source position, transit time, dwell time, and movement time at each dwell position can be determined quantitatively using this plastic scintillator-CCD camera system. PACS number: 87.53.Jw
Beam line shielding calculations for an Electron Accelerator Mo-99 production facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mocko, Michal
2016-05-03
The purpose of this study is to evaluate the photon and neutron fields in and around the latest beam line design for the Mo-99 production facility. The radiation doses to the beam line components (quadrupoles, dipoles, beam stops and the linear accelerator) are calculated in the present report. The beam line design assumes placement of two cameras, infrared (IR) and optical transition radiation (OTR), for continuous monitoring of the beam spot on target during irradiation. The cameras will be placed off the beam axis, offset in the vertical direction. We explored typical shielding arrangements for the cameras and report the resulting neutron and photon dose fields.
A simple demonstration when studying the equivalence principle
NASA Astrophysics Data System (ADS)
Mayer, Valery; Varaksina, Ekaterina
2016-06-01
The paper proposes a lecture experiment that can be demonstrated when studying the equivalence principle formulated by Albert Einstein. The demonstration consists of creating stroboscopic photographs of a ball moving along a parabola in Earth's gravitational field. In the first experiment, a camera is stationary relative to Earth's surface. In the second, the camera falls freely downwards with the ball, allowing students to see that the ball moves uniformly and rectilinearly relative to the frame of reference of the freely falling camera. The equivalence principle explains this result, as it is always possible to propose an inertial frame of reference for a small region of a gravitational field, where space-time effects of curvature are negligible.
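A short worked version of the kinematics may help; assume the ball leaves the origin with velocity (v_x, v_y) at t = 0 and the camera is released from rest at height y_0 at the same instant (a sketch of the standard argument, not notation from the paper):

    % Ground frame: projectile and freely falling camera
    \begin{align*}
      x_b(t) &= v_x t, &
      y_b(t) &= v_y t - \tfrac{1}{2} g t^2, &
      y_c(t) &= y_0 - \tfrac{1}{2} g t^2 .
    \end{align*}
    % Camera (freely falling) frame: the g t^2/2 terms cancel,
    \begin{align*}
      \tilde{x}(t) = x_b(t) = v_x t, \qquad
      \tilde{y}(t) = y_b(t) - y_c(t) = v_y t - y_0 ,
    \end{align*}
    % i.e. uniform rectilinear motion, exactly what the stroboscopic
    % photographs taken with the falling camera show.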
Concepts, laboratory, and telescope test results of the plenoptic camera as a wavefront sensor
NASA Astrophysics Data System (ADS)
Rodríguez-Ramos, L. F.; Montilla, I.; Fernández-Valdivia, J. J.; Trujillo-Sevilla, J. L.; Rodríguez-Ramos, J. M.
2012-07-01
The plenoptic camera has been proposed as an alternative wavefront sensor adequate for extended objects within the context of the design of the European Solar Telescope (EST), but it can also be used with point sources. Originating in the field of electronic photography, the plenoptic camera directly samples the light field function, the four-dimensional representation of all the light entering a camera. Image formation can then be seen as the result of the photography operator applied to this function, and many other features of the light field can be exploited to extract information about the scene, such as depth computation for 3D imaging or, as specifically addressed in this paper, wavefront sensing. The underlying concept of the plenoptic camera can be adapted to the case of a telescope by using a lenslet array of the same f-number placed at the focal plane, thus obtaining at the detector a set of pupil images corresponding to every sampled point of view. This approach generalizes the Shack-Hartmann, curvature and pyramid wavefront sensors in the sense that all of those can be considered particular cases of the plenoptic wavefront sensor, because the information needed as the starting point for those sensors can be derived from the plenoptic image. Laboratory results obtained with extended objects, phase plates and commercial interferometers, and even telescope observations using stars and the Moon as an extended object, are presented in the paper, clearly showing the capability of the plenoptic camera to behave as a wavefront sensor.
Bater, Christopher W; Coops, Nicholas C; Wulder, Michael A; Hilker, Thomas; Nielsen, Scott E; McDermid, Greg; Stenhouse, Gordon B
2011-09-01
Critical to habitat management is the understanding of not only the location of animal food resources, but also the timing of their availability. Grizzly bear (Ursus arctos) diets, for example, shift seasonally as different vegetation species enter key phenological phases. In this paper, we describe the use of a network of seven ground-based digital camera systems to monitor understorey and overstorey vegetation within species-specific regions of interest. Established across an elevation gradient in western Alberta, Canada, the cameras collected true-colour (RGB) images daily from 13 April 2009 to 27 October 2009. Fourth-order polynomials were fit to an RGB-derived index, which was then compared to field-based observations of phenological phases. Using linear regression to statistically relate the camera and field data, results indicated that 61% (r² = 0.61, df = 1, F = 14.3, p = 0.0043) of the variance observed in the field phenological phase data is captured by the cameras for the start of the growing season and 72% (r² = 0.72, df = 1, F = 23.09, p = 0.0009) of the variance in length of growing season. Based on the linear regression models, the mean absolute differences in residuals between predicted and observed start of growing season and length of growing season were 4 and 6 days, respectively. This work extends upon previous research by demonstrating that specific understorey and overstorey species can be targeted for phenological monitoring in a forested environment, using readily available digital camera technology and RGB-based vegetation indices.
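A sketch of the curve-fitting step under stated assumptions (the greenness index, its synthetic values, and the half-amplitude phenology thresholds are illustrative; the paper specifies only the fourth-order polynomial fit):

    import numpy as np

    # Hypothetical daily greenness index for one camera region of interest,
    # e.g. the green chromatic coordinate gcc = G / (R + G + B).
    doy = np.arange(103, 301)                      # day of year, 13 Apr..27 Oct
    rng = np.random.default_rng(0)
    gcc = (0.32 + 0.08 * np.exp(-((doy - 200) / 45.0) ** 2)
           + rng.normal(0, 0.005, doy.size))

    coeffs = np.polyfit(doy, gcc, deg=4)           # fourth-order fit, as in the paper
    fit = np.polyval(coeffs, doy)

    # One simple convention: season runs between the days where the fitted
    # curve crosses half of its seasonal amplitude.
    half = fit.min() + 0.5 * (fit.max() - fit.min())
    green = doy[fit >= half]
    start_of_season, end_of_season = green[0], green[-1]
    length_of_season = end_of_season - start_of_season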
Estimating tiger abundance from camera trap data: Field surveys and analytical issues
Karanth, K. Ullas; Nichols, James D.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas
2011-01-01
Automated photography of tigers Panthera tigris for purely illustrative purposes was pioneered by British forester Fred Champion (1927, 1933) in India in the early part of the Twentieth Century. However, it was McDougal (1977) in Nepal who first used camera traps, equipped with single-lens reflex cameras activated by pressure pads, to identify individual tigers and study their social and predatory behaviors. These attempts involved a small number of expensive, cumbersome camera traps, and were not, in any formal sense, directed at “sampling” tiger populations.
FieldSAFE: Dataset for Obstacle Detection in Agriculture.
Kragh, Mikkel Fly; Christiansen, Peter; Laursen, Morten Stigaard; Larsen, Morten; Steen, Kim Arild; Green, Ole; Karstoft, Henrik; Jørgensen, Rasmus Nyholm
2017-11-09
In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360° camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, Oleg P.; Semin, Ilya A.; Potapov, Victor N.
Gamma-ray imaging is the most important way to identify unknown gamma-ray emitting objects in decommissioning, security, and accident response. Over the past two decades, systems for producing gamma-ray images under these conditions have become more or less portable, and in recent years they have become hand-held devices. This is very important, especially in emergency situations and in measurements made for safety reasons. We describe the first integrated hand-held instrument for emergency and security applications. The device is based on coded-aperture image formation, a position-sensitive gamma-ray (X-ray) detector Medipix2 (detectors produced by X-ray Imaging Europe) and a tablet computer. The development was aimed at creating a very low weight system with high angular resolution. We present some sample gamma-ray images obtained with the camera. The main estimated parameters of the system are the following. The field of view of the video channel is ∼ 490 deg. The field of view of the gamma channel is ∼ 300 deg. The sensitivity of the system with a hexagonal mask for a Cs-137 source (Eg = 662 keV), in units of dose, is D ∼ 100 mR. This is less than an order of magnitude worse than for the heavy, non-hand-held systems (e.g., the Cartogam gamma camera by Canberra). The angular resolution of the gamma channel for Cs-137 sources (Eg = 662 keV) is about 1.20 deg.
Research on airborne infrared leakage detection of natural gas pipeline
NASA Astrophysics Data System (ADS)
Tan, Dongjie; Xu, Bin; Xu, Xu; Wang, Hongchao; Yu, Dongliang; Tian, Shengjie
2011-12-01
An airborne laser remote sensing technology is proposed for detecting natural gas pipeline leaks from a helicopter carrying a detector that can sense traces of methane on the ground with high spatial resolution. The principle of the airborne laser remote sensing system is based on tunable diode laser absorption spectroscopy (TDLAS). The system consists of an optical unit containing the laser, a camera, a helicopter mount, an electronics unit with a DGPS antenna, a notebook computer and a pilot monitor, and it is mounted on a helicopter. The principle and architecture of the airborne laser remote sensing system are presented. Field test experiments were carried out on the West-East Natural Gas Pipeline of China, and the results show that the airborne detection method is suitable for detecting pipeline gas leaks over plains, deserts and hills, but unsuitable for areas with large altitude variation.
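The TDLAS principle rests on Beer-Lambert absorption; in a commonly used form (the symbols here are generic, not taken from the paper):

    % Received intensity after the beam traverses a methane column:
    I(\nu) = I_0(\nu)\, \exp\!\bigl(-\alpha(\nu)\, C L\bigr),
    % where \alpha(\nu) is the absorption coefficient of the probed CH4 line
    % and C L the concentration-path product. Sweeping the diode laser
    % frequency \nu across the line and ratioing on-line to off-line
    % intensities yields C L independently of broadband scattering losses.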
On-ground and in-orbit characterisation plan for the PLATO CCD normal cameras
NASA Astrophysics Data System (ADS)
Gow, J. P. D.; Walton, D.; Smith, A.; Hailey, M.; Curry, P.; Kennedy, T.
2017-11-01
PLAnetary Transits and Oscillations (PLATO) is the third European Space Agency (ESA) medium-class mission in ESA's cosmic vision programme, due for launch in 2026. PLATO will carry out high-precision, uninterrupted photometric monitoring in the visible band of large samples of bright solar-type stars. The primary mission goal is to detect and characterise terrestrial exoplanets and their systems, with emphasis on planets orbiting in the habitable zone; this will be achieved using light curves to detect planetary transits. PLATO uses a novel multi-instrument concept consisting of 26 small wide-field cameras. Each camera is made up of a telescope optical unit and four Teledyne e2v CCD270s mounted on a focal plane array and connected to a set of Front End Electronics (FEE) which provide CCD control and readout. There are 2 fast cameras with a high read-out cadence (2.5 s) for magnitude ~ 4-8 stars, being developed by the German Aerospace Centre, and 24 normal (N) cameras with a cadence of 25 s to monitor stars with a magnitude greater than 8. The N-FEEs are being developed at University College London's Mullard Space Science Laboratory (MSSL) and will be characterised along with the associated CCDs. The CCDs and N-FEEs will undergo rigorous on-ground characterisation, and the performance of the CCDs will continue to be monitored in-orbit. This paper discusses the initial development of the experimental arrangement, test procedures and current status of the N-FEE. The parameters explored will include gain, quantum efficiency, pixel response non-uniformity, dark current and Charge Transfer Inefficiency (CTI). The current in-orbit characterisation plan is also discussed, which will enable the performance of the CCDs and their associated N-FEE to be monitored during the mission; this will include measurements of CTI giving an indication of the impact of radiation damage in the CCDs.
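One standard way to estimate the gain listed among these parameters is the photon-transfer method; the sketch below assumes a pair of equally illuminated flat-field frames and a shot-noise-limited signal (an illustration of the general technique, not a description of the MSSL test procedure):

    import numpy as np

    def photon_transfer_gain(flat_a, flat_b, bias_level=0.0):
        """Estimate conversion gain (e-/ADU) from two flat-field frames.

        Differencing two equally illuminated flats cancels pixel response
        non-uniformity, leaving shot + read noise; for a shot-noise-limited
        signal the gain is mean / variance.
        """
        a = flat_a.astype(float) - bias_level
        b = flat_b.astype(float) - bias_level
        mean_signal = 0.5 * (a.mean() + b.mean())
        # Variance of (a - b) is twice the per-frame noise variance.
        noise_var = np.var(a - b) / 2.0
        return mean_signal / noise_var     # e-/ADU, ignoring read noise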
Automated face detection for occurrence and occupancy estimation in chimpanzees.
Crunchant, Anne-Sophie; Egerer, Monika; Loos, Alexander; Burghardt, Tilo; Zuberbühler, Klaus; Corogenes, Katherine; Leinert, Vera; Kulik, Lars; Kühl, Hjalmar S
2017-03-01
Surveying endangered species is necessary to evaluate conservation effectiveness. Camera trapping and biometric computer vision are recent technological advances. They have impacted on the methods applicable to field surveys and these methods have gained significant momentum over the last decade. Yet, most researchers inspect footage manually and few studies have used automated semantic processing of video trap data from the field. The particular aim of this study is to evaluate methods that incorporate automated face detection technology as an aid to estimate site use of two chimpanzee communities based on camera trapping. As a comparative baseline we employ traditional manual inspection of footage. Our analysis focuses specifically on the basic parameter of occurrence where we assess the performance and practical value of chimpanzee face detection software. We found that the semi-automated data processing required only 2-4% of the time compared to the purely manual analysis. This is a non-negligible increase in efficiency that is critical when assessing the feasibility of camera trap occupancy surveys. Our evaluations suggest that our methodology estimates the proportion of sites used relatively reliably. Chimpanzees are mostly detected when they are present and when videos are filmed in high-resolution: the highest recall rate was 77%, for a false alarm rate of 2.8% for videos containing only chimpanzee frontal face views. Certainly, our study is only a first step for transferring face detection software from the lab into field application. Our results are promising and indicate that the current limitation of detecting chimpanzees in camera trap footage due to lack of suitable face views can be easily overcome on the level of field data collection, that is, by the combined placement of multiple high-resolution cameras facing reverse directions. This will enable to routinely conduct chimpanzee occupancy surveys based on camera trapping and semi-automated processing of footage. Using semi-automated ape face detection technology for processing camera trap footage requires only 2-4% of the time compared to manual analysis and allows to estimate site use by chimpanzees relatively reliably. © 2017 Wiley Periodicals, Inc.
Methods for multiple-telescope beam imaging and guiding in the near-infrared
NASA Astrophysics Data System (ADS)
Anugu, N.; Amorim, A.; Gordo, P.; Eisenhauer, F.; Pfuhl, O.; Haug, M.; Wieprecht, E.; Wiezorrek, E.; Lima, J.; Perrin, G.; Brandner, W.; Straubmeier, C.; Le Bouquin, J.-B.; Garcia, P. J. V.
2018-05-01
Atmospheric turbulence and precise measurement of the astrometric baseline vector between any two telescopes are two major challenges in implementing phase-referenced interferometric astrometry and imaging. They limit the performance of a fibre-fed interferometer by degrading the instrument sensitivity and the precision of astrometric measurements and by introducing image reconstruction errors due to inaccurate phases. A multiple-beam acquisition and guiding camera was built to meet these challenges for a recently commissioned four-beam combiner instrument, GRAVITY, at the European Southern Observatory Very Large Telescope Interferometer. For each telescope beam, it measures (a) field tip-tilts by imaging stars in the sky, (b) telescope pupil shifts by imaging pupil reference laser beacons installed on each telescope using a 2 × 2 lenslet and (c) higher-order aberrations using a 9 × 9 Shack-Hartmann. The telescope pupils are imaged to provide visual monitoring while observing. These measurements enable active field and pupil guiding by actuating a train of tip-tilt mirrors placed in the pupil and field planes, respectively. The Shack-Hartmann measured quasi-static aberrations are used to focus the auxiliary telescopes and allow the possibility of correcting the non-common path errors between the adaptive optics systems of the unit telescopes and GRAVITY. The guiding stabilizes the light injection into single-mode fibres, increasing sensitivity and reducing the astrometric and image reconstruction errors. The beam guiding enables us to achieve an astrometric error of less than 50 μas. Here, we report on the data reduction methods and laboratory tests of the multiple-beam acquisition and guiding camera and its performance on-sky.
An overview of instrumentation for the Large Binocular Telescope
NASA Astrophysics Data System (ADS)
Wagner, R. Mark
2006-06-01
An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction limited (FOV: 0'.5 × 0'.5) imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.
An overview of instrumentation for the Large Binocular Telescope
NASA Astrophysics Data System (ADS)
Wagner, R. Mark
2004-09-01
An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') UB/VRI optimized mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction limited (FOV: 0.5' × 0.5') imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench beam combiner with visible and near-infrared imagers utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC/NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.
An overview of instrumentation for the Large Binocular Telescope
NASA Astrophysics Data System (ADS)
Wagner, R. Mark
2008-07-01
An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction limited (FOV: 0.5' × 0.5') imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.
NASA Astrophysics Data System (ADS)
Carrasco, E.; Sánchez-Blanco, E.; García-Vargas, M. L.; Gil de Paz, A.; Páez, G.; Gallego, J.; Sánchez, F. M.; Vílchez, J. M.
2012-09-01
MEGARA is the next optical Integral-Field Unit (IFU) and Multi-Object Spectrograph (MOS) for Gran Telescopio Canarias. The instrument offers two IFUs plus a Multi-Object Spectroscopy (MOS) mode: a large compact bundle covering 12.5 arcsec x 11.3 arcsec on sky with 100 μm fiber-core; a small compact bundle, of 8.5 arcsec x 6.7 arcsec with 70 μm fiber-core; and a fiber MOS positioner that allows placing up to 100 mini-bundles, of 7 fibers each with 100 μm fiber-core, within a 3.5 arcmin x 3.5 arcmin field of view around the two IFUs. The fibers, organized in bundles, end in the pseudo-slit plate, which will be placed at the entrance focal plane of the MEGARA spectrograph. The large IFU and MOS modes will provide intermediate to high spectral resolutions, R=6800-17000. The small IFU mode will provide R=8000-20000. All these resolutions are possible thanks to a spectrograph design based on the use of volume phase holographic gratings in combination with prisms to keep the collimator and camera angle fixed. The MEGARA optics comprise a total of 53 large optical elements per spectrograph: the field lens, the collimator and camera lenses, plus the complete set of pupil elements including holograms, windows and prisms. INAOE, a partner of the GTC and of the MEGARA consortium, is responsible for the optics manufacturing and tests. INAOE will carry out this project working in alliance with CIO. This paper summarizes the status of the MEGARA spectrograph optics at the Preliminary Design Review, held in March 2012.
VizieR Online Data Catalog: BzJK observations around radio galaxies (Galametz+, 2009)
NASA Astrophysics Data System (ADS)
Galametz, A.; De Breuck, C.; Vernet, J.; Stern, D.; Rettura, A.; Marmo, C.; Omont, A.; Allen, M.; Seymour, N.
2010-02-01
We imaged the two targets using the Bessel B-band filter of the Large Format Camera (LFC) on the Palomar 5m Hale Telescope. We imaged the radio galaxy fields using the z-band filter of Palomar/LFC. In February 2005, we observed 7C 1751+6809 for 60-min under photometric conditions. In August 2005, we observed 7C 1756+6520 for 135-min but in non-photometric conditions. The tables provide the B, z, J and Ks magnitudes and coordinates of the pBzK* galaxies (red passively evolving candidates selected by BzK=(z-K)-(B-z)<-0.2 and (z-K)>2.2) for both fields. The B and z bands were obtained using the Large Format Camera (LFC) on the Palomar 5m Hale Telescope, and the J and Ks bands using Wide-field Infrared Camera (WIRCAM) of the Canada-France-Hawaii Telescope (CFHT). (2 data files).
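The colour cut quoted above is straightforward to apply in code; a small sketch with invented magnitudes (array values are illustrative, not catalog data):

    import numpy as np

    # Hypothetical AB magnitudes for three sources.
    B = np.array([25.1, 24.3, 26.0])
    z = np.array([23.0, 22.1, 23.8])
    K = np.array([20.2, 19.5, 21.9])

    bzk = (z - K) - (B - z)
    # Red, passively evolving candidates per the selection quoted above.
    passive = (bzk < -0.2) & ((z - K) > 2.2)
    print(passive)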
The Last Meter: Blind Visual Guidance to a Target.
Manduchi, Roberto; Coughlan, James M
2014-01-01
Smartphone apps can use object recognition software to provide information to blind or low vision users about objects in the visual environment. A crucial challenge for these users is aiming the camera properly to take a well-framed picture of the desired target object. We investigate the effects of two fundamental constraints of object recognition - frame rate and camera field of view - on a blind person's ability to use an object recognition smartphone app. The app was used by 18 blind participants to find visual targets beyond arm's reach and approach them to within 30 cm. While we expected that a faster frame rate or wider camera field of view should always improve search performance, our experimental results show that in many cases increasing the field of view does not help, and may even hurt, performance. These results have important implications for the design of object recognition systems for blind users.
Dworak, Volker; Selbeck, Joern; Dammer, Karl-Heinz; Hoffmann, Matthias; Zarezadeh, Ali Akbar; Bobda, Christophe
2013-01-24
The application of (smart) cameras for process control, mapping, and advanced imaging in agriculture has become an element of precision farming that facilitates the conservation of fertilizer, pesticides, and machine time. This technique additionally reduces the amount of energy required in terms of fuel. Although research activities have increased in this field, high camera prices still limit adoption across all fields of agriculture. Smart, low-cost cameras adapted for agricultural applications can overcome this drawback. The normalized difference vegetation index (NDVI), computed for each image pixel, is an effective algorithm for discriminating plant information from the soil background, enabled by the large difference in reflectance between the near-infrared (NIR) and red optical frequency bands. Two aligned charge coupled device (CCD) chips for the red and NIR channels are typically used, but they are expensive because of the precise optical alignment required. Therefore, much attention has been given to the development of alternative camera designs. In this study, the advantage of a smart one-chip camera design with NDVI image performance is demonstrated in terms of low cost and simplified design. The required assembly and pixel modifications are described, and new algorithms for establishing an enhanced NDVI image quality for data processing are discussed. PMID:23348037
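The NDVI referenced above is a simple per-pixel ratio; a minimal sketch, assuming two co-registered band images (the function name and vegetation threshold are illustrative, not from the paper):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Per-pixel normalized difference vegetation index.

    nir, red: 2-D arrays of reflectance (or raw intensity) in the
    near-infrared and red bands; eps guards against division by zero.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Vegetation pixels can then be separated from soil with a threshold,
# e.g. mask = ndvi(nir_img, red_img) > 0.3  (the cutoff is scene-dependent).
```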
Help for the Visually Impaired
NASA Technical Reports Server (NTRS)
1995-01-01
The Low Vision Enhancement System (LVES) is a video headset that offers people with low vision a view of their surroundings equivalent to the image on a five-foot television screen four feet from the viewer. It will not make the blind see, but for many people with low vision it eases everyday activities such as reading, watching TV and shopping. LVES was developed over almost a decade of cooperation between Stennis Space Center, the Wilmer Eye Institute of the Johns Hopkins Medical Institutions, the Department of Veterans Affairs, and Visionics Corporation. With the aid of Stennis scientists, Wilmer researchers used NASA technology for computer processing of satellite images and head-mounted vision enhancement systems originally intended for the space station. The unit consists of a head-mounted video display, three video cameras, and a control unit for the cameras. The cameras feed images to the video display in the headset.
Optical Indoor Positioning System Based on TFT Technology.
Gőzse, István
2015-12-24
A novel indoor positioning system is presented in the paper. Like camera-based solutions, it relies on visual detection, but it conceptually differs from the classical approaches. First, the objects are marked by LEDs, and second, a special sensing unit is applied, instead of a camera, to track the motion of the markers. This sensing unit realizes a modified pinhole camera model in which the light-sensing area is fixed and consists of a small number of sensing elements (photodiodes), while it is the hole that can be moved. The markers are tracked by controlling the motion of the hole such that the light of the LEDs always hits the photodiodes. The proposed concept has several advantages: apart from its low computational demands, it is insensitive to disturbing ambient light. Moreover, as every component of the system can be realized with simple and inexpensive elements, the overall cost of the system can be kept low.
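A tracking loop of this kind can be sketched as a quadrant-detector controller; the following toy step assumes a 2x2 photodiode cluster behind the movable hole and a simple proportional gain (both are assumptions for illustration, not the paper's control law):

```python
def hole_step(photodiode_readings, gain=0.5):
    """One control step for a movable-hole sensing unit: nudge the hole
    so the LED spot stays centered on a 2x2 photodiode cluster.

    photodiode_readings: (top_left, top_right, bottom_left, bottom_right)
    Returns the (dx, dy) correction to apply to the hole position.
    """
    tl, tr, bl, br = photodiode_readings
    total = tl + tr + bl + br or 1.0          # avoid division by zero
    dx = gain * ((tr + br) - (tl + bl)) / total   # spot right -> move hole right
    dy = gain * ((tl + tr) - (bl + br)) / total   # spot up -> move hole up
    return dx, dy
```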
On-Tree Mango Fruit Size Estimation Using RGB-D Images
Wang, Zhenglin; Verma, Brijesh
2017-01-01
In-field mango fruit sizing is useful for estimation of fruit maturation and size distribution, informing the decision to harvest, harvest resourcing (e.g., tray insert sizes), and marketing. In-field machine vision imaging has been used for fruit count, but assessment of fruit size from images also requires estimation of the camera-to-fruit distance. Low-cost examples of three technologies for assessing camera-to-fruit distance were evaluated: an RGB-D (depth) camera, a stereo vision camera and a Time of Flight (ToF) laser rangefinder. The RGB-D camera was recommended on cost and performance, although it functioned poorly in direct sunlight. The RGB-D camera was calibrated, and depth information matched to the RGB image. To detect fruit, cascade detection with a histogram of oriented gradients (HOG) feature was used; then Otsu's method, followed by color thresholding, was applied in the CIE L*a*b* color space to remove background objects (leaves, branches etc.). A one-dimensional (1D) filter was developed to remove the fruit pedicels, and an ellipse fitting method employed to identify well-separated fruit. Finally, fruit lineal dimensions were calculated using the RGB-D depth information, fruit image size and the thin lens formula. Root Mean Square Errors (RMSE) of 4.9 and 4.3 mm were achieved for estimated fruit length and width, respectively, relative to manual measurement, for which repeated human measures were characterized by a standard deviation of 1.2 mm. In conclusion, the RGB-D method for rapid in-field mango fruit size estimation is practical in terms of cost and ease of use, but cannot be used in direct intense sunshine. We believe this work represents the first practical implementation of machine vision fruit sizing in the field, with practicality gauged in terms of cost and simplicity of operation. PMID:29182534
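The final sizing step described above follows the thin-lens relation; a minimal sketch assuming the fruit's pixel extent, the sensor pixel pitch, and the per-fruit depth are already in hand (parameter names are illustrative):

```python
def fruit_dimension_mm(extent_px: float, pixel_pitch_mm: float,
                       depth_mm: float, focal_length_mm: float) -> float:
    """Estimate a fruit's lineal dimension via the thin-lens formula.

    extent_px       : fruit extent in the image, in pixels (e.g. fitted ellipse axis)
    pixel_pitch_mm  : physical size of one sensor pixel
    depth_mm        : camera-to-fruit distance from the RGB-D depth channel
    focal_length_mm : lens focal length

    For an object at distance d, magnification m = f / (d - f),
    so object size = image size / m.
    """
    image_size_mm = extent_px * pixel_pitch_mm
    magnification = focal_length_mm / (depth_mm - focal_length_mm)
    return image_size_mm / magnification

# Example: a 120 px axis, 0.003 mm pixels, fruit 1.5 m away, 8 mm lens
# -> roughly 67 mm, in the range of a typical mango width.
```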
ChemCam Mast Unit Being Prepared for Laser Firing
2010-12-23
Researchers are preparing the mast unit of the Chemistry and Camera (ChemCam) instrument, which will fly on NASA's Mars Science Laboratory mission, for a laser firing test.
In-flight Video Captured by External Tank Camera System
NASA Technical Reports Server (NTRS)
2005-01-01
In this July 26, 2005 video, Earth slowly fades into the background as the STS-114 Space Shuttle Discovery climbs into space until the External Tank (ET) separates from the orbiter. An External Tank (ET) Camera System featuring a Sony XC-999 model camera provided never-before-seen footage of the launch and tank separation. The camera was installed in the ET LO2 Feedline Fairing. From this position, the camera had a 40° field of view with a 3.5 mm lens. The field of view showed some of the Bipod area, a portion of the LH2 tank and Intertank flange area, and some of the bottom of the shuttle orbiter. Contained in an electronic box, the battery pack and transmitter were mounted on top of the Solid Rocket Booster (SRB) crossbeam inside the ET. The battery pack included 20 Nickel-Metal Hydride batteries (similar to cordless phone battery packs) totaling 28 volts DC and could supply about 70 minutes of video. Located 95 degrees apart on the exterior of the Intertank, opposite the orbiter side, were two blade S-Band antennas about 2 1/2 inches long that transmitted a 10-watt signal to the ground stations. The camera turned on approximately 10 minutes prior to launch and operated for 15 minutes following liftoff. The complete camera system weighs about 32 pounds. Marshall Space Flight Center (MSFC), Johnson Space Center (JSC), Goddard Space Flight Center (GSFC), and Kennedy Space Center (KSC) participated in the design, development, and testing of the ET camera system.
Uncooled radiometric camera performance
NASA Astrophysics Data System (ADS)
Meyer, Bill; Hoelter, T.
1998-07-01
Thermal imaging equipment utilizing microbolometer detectors operating at room temperature has found widespread acceptance in both military and commercial applications. Uncooled camera products are becoming effective solutions to applications currently using traditional, photonic infrared sensors. The reduced power consumption and decreased mechanical complexity offered by uncooled cameras have enabled highly reliable, low-cost, hand-held instruments. Initially these instruments displayed only relative temperature differences, which limited their usefulness in applications such as thermography. Radiometrically calibrated microbolometer instruments are now available. The ExplorIR Thermography camera leverages the technology developed for Raytheon Systems Company's first production microbolometer imaging camera, the Sentinel. The ExplorIR camera has a demonstrated temperature measurement accuracy of 4 degrees Celsius or 4% of the measured value (whichever is greater) over scene temperature ranges of minus 20 degrees Celsius to 300 degrees Celsius (minus 20 degrees Celsius to 900 degrees Celsius for extended range models) and camera environmental temperatures of minus 10 degrees Celsius to 40 degrees Celsius. Direct temperature measurement with high resolution video imaging creates some unique challenges when using uncooled detectors. A temperature controlled, field-of-view limiting aperture (cold shield) is not typically included in the small volume dewars used for uncooled detector packages. The lack of a field-of-view shield allows a significant amount of extraneous radiation from the dewar walls and lens body to affect the sensor operation. In addition, the transmission of the Germanium lens elements is a function of ambient temperature. The ExplorIR camera design compensates for these environmental effects while maintaining the accuracy and dynamic range required by today's predictive maintenance and condition monitoring markets.
Inventory of terrestrial mammals in the Rincon Mountains using camera traps
Don E. Swann; Nic Perkins
2013-01-01
The Sky Island region of the southwestern United States and northwestern Mexico is well-known for its diversity of mammals, including endemic species and species representing several different biogeographic provinces. Camera trap studies have provided important insight into mammalian distribution and diversity in the Sky Islands in recent years, but few studies have...
Rugged Video System For Inspecting Animal Burrows
NASA Technical Reports Server (NTRS)
Triandafils, Dick; Maples, Art; Breininger, Dave
1992-01-01
Video system designed for examining interiors of burrows of gopher tortoises, 5 in. (13 cm) in diameter or greater, to depth of 18 ft. (about 5.5 m), includes video camera, video cassette recorder (VCR), television monitor, control unit, and power supply, all carried in backpack. Polyvinyl chloride (PVC) poles used to maneuver camera into (and out of) burrows, stiff enough to push camera into burrow, but flexible enough to bend around curves. Adult tortoises and other burrow inhabitants observable, young tortoises and such small animals as mice obscured by sand or debris.
Performance evaluation and clinical applications of 3D plenoptic cameras
NASA Astrophysics Data System (ADS)
Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel
2015-06-01
The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, assesses plenoptic imaging in a clinically relevant context, and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, precision and accuracy results in an ideal and simulated surgical setting. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.
Video model deformation system for the National Transonic Facility
NASA Technical Reports Server (NTRS)
Burner, A. W.; Snow, W. L.; Goad, W. K.
1983-01-01
A photogrammetric closed circuit television system to measure model deformation at the National Transonic Facility is described. The photogrammetric approach was chosen because of its inherent rapid data recording of the entire object field. Video cameras are used to acquire data instead of film cameras due to the inaccessibility of cameras which must be housed within the cryogenic, high pressure plenum of this facility. A rudimentary theory section is followed by a description of the video-based system and control measures required to protect cameras from the hostile environment. Preliminary results obtained with the same camera placement as planned for NTF are presented and plans for facility testing with a specially designed test wing are discussed.
A digital gigapixel large-format tile-scan camera.
Ben-Ezra, M
2011-01-01
Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications in cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size, and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.
HERCULES/MSI: a multispectral imager with geolocation for STS-70
NASA Astrophysics Data System (ADS)
Simi, Christopher G.; Kindsfather, Randy; Pickard, Henry; Howard, William, III; Norton, Mark C.; Dixon, Roberta
1995-11-01
A multispectral intensified CCD imager combined with a ring-laser-gyroscope-based inertial measurement unit was flown on the Space Shuttle Discovery from July 13-22, 1995 (Space Transport System Flight No. 70, STS-70). The camera includes a six-position filter wheel, a third-generation image intensifier, and a CCD camera. The camera is integrated with a laser gyroscope system that determines the ground position of the imagery to an accuracy of better than three nautical miles. The camera has two modes of operation: a panchromatic mode for high-magnification imaging [ground sample distance (GSD) of 4 m], or a multispectral mode consisting of six different user-selectable spectral ranges at reduced magnification (12 m GSD). This paper discusses the system hardware and technical trade-offs involved with camera optimization, and presents imagery observed during the shuttle mission.
VizieR Online Data Catalog: >20yrs of HST obs. of Cepheids in SNIa host gal. (Hoffmann+, 2016)
NASA Astrophysics Data System (ADS)
Hoffmann, S. L.; Macri, L. M.; Riess, A. G.; Yuan, W.; Casertano, S.; Foley, R. J.; Filippenko, A. V.; Tucker, B. E.; Chornock, R.; Silverman, J. M.; Welch, D. L.; Goobar, A.; Amanullah, R.
2017-01-01
HST observations of Cepheid variables (both archival and newly obtained) span more than two decades (1994-2016; see table 1). The earliest Cepheid observations we analyzed were obtained with the Wide Field and Planetary Camera 2 (WFPC2) as part of the initial efforts to measure H0 with HST (Freedman+ 2001ApJ...553...47F; Sandage+ 2006ApJ...653..843S) and were later used by Freedman+ (2012ApJ...758...24F) to reach beyond the LMC for the Carnegie Hubble Project. We also re-analyzed observations obtained in previous phases of our project (Riess+ 2009, J/ApJS/183/109; 2011, J/ApJ/730/119) with the Advanced Camera for Surveys (ACS) Wide Field Channel (WFC) and/or the Wide Field Camera 3 (WFC3) Ultraviolet and Visible Channel (UVIS). Finally, we obtained new observations of nine SN Ia hosts using WFC3. We obtained the majority of our optical images with these modern cameras, 113 and 132 unique epochs with ACS and WFC3, respectively, while WFPC2 contributes a smaller fraction with 67 epochs. (6 data files).
Space infrared telescope facility wide field and diffraction limited array camera (IRAC)
NASA Technical Reports Server (NTRS)
Fazio, Giovanni G.
1988-01-01
The wide-field and diffraction-limited array camera (IRAC) is capable of two-dimensional photometry in either a wide-field or diffraction-limited mode over the wavelength range from 2 to 30 microns with a possible extension to 120 microns. A low-doped indium antimonide detector was developed for 1.8 to 5.0 microns, detectors were tested and optimized for the entire 1.8 to 30 micron range, beamsplitters were developed and tested for the 1.8 to 30 micron range, and tradeoff studies of the camera's optical system were performed. Data are presented on the performance of InSb, Si:In, Si:Ga, and Si:Sb array detectors bump-bonded to a multiplexed CMOS readout chip of the source-follower type at SIRTF operating backgrounds (equal to or less than 1 x 10 to the 8th ph/sq cm/sec) and temperatures (4 to 12 K). Some results at higher temperatures are also presented for comparison to SIRTF temperature results. Data are also presented on the performance of IRAC beamsplitters at room temperature at both 0 and 45 deg angle of incidence and on the performance of the all-reflecting optical system baselined for the camera.
Computational photography with plenoptic camera and light field capture: tutorial.
Lam, Edmund Y
2015-11-01
Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording mediums, with digital sensors replacing photographic films in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is called the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concept of plenoptic function and light field from the perspective of geometric optics. This is followed by a discussion on early attempts and recent advances in the construction of the plenoptic camera. We will then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last, but not least, we will consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.
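As a concrete illustration of the refocusing idea described in the tutorial, here is a minimal shift-and-sum sketch over a 4-D light field; the array layout and the integer-shift simplification are assumptions for illustration, not the tutorial's own code:

```python
import numpy as np

def refocus(lightfield: np.ndarray, alpha: float) -> np.ndarray:
    """Shift-and-sum synthetic refocusing of a 4-D light field.

    lightfield : array of shape (U, V, S, T) -- sub-aperture images indexed
                 by angular coordinates (u, v) and spatial coordinates (s, t).
    alpha      : ratio of the virtual focal plane depth to the captured one;
                 alpha = 1 reproduces the original focus.
    """
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    shift_scale = 1.0 - 1.0 / alpha
    for u in range(U):
        for v in range(V):
            du = (u - U / 2.0) * shift_scale
            dv = (v - V / 2.0) * shift_scale
            # Integer shifts keep the sketch simple; real pipelines interpolate.
            out += np.roll(lightfield[u, v],
                           (int(round(du)), int(round(dv))), axis=(0, 1))
    return out / (U * V)
```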
Using Wide-Field Meteor Cameras to Actively Engage Students in Science
NASA Astrophysics Data System (ADS)
Kuehn, D. M.; Scales, J. N.
2012-08-01
Astronomy has always afforded teachers an excellent topic for developing students' interest in science. New technology makes it possible to inexpensively outfit local school districts with sensitive, wide-field video cameras that can detect and track brighter meteors and other objects. While the data-collection and analysis process can be mostly automated by software, substantial human involvement is still necessary for rejecting spurious detections, performing dynamics and orbital calculations, and the rare recovery and analysis of fallen meteorites. The continuous monitoring allowed by dedicated wide-field surveillance cameras can provide students with a better understanding of the behavior of the night sky, including meteors and meteor showers, stellar motion, the motion of the Sun, Moon, and planets, phases of the Moon, meteorological phenomena, etc. Additionally, some students intrigued by the possibility of UFOs and "alien visitors" may find that actual monitoring data can help them develop methods for identifying "unknown" objects. We currently have two ultra-low-light-level surveillance cameras coupled to fish-eye lenses that are actively obtaining data. We have developed curricula suitable for middle or high school students in astronomy and earth science courses and are in the process of testing and revising our materials.
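A minimal sketch of the kind of automated detection step such software performs, using median-background frame differencing; the thresholds and the pixel-count criterion are illustrative assumptions:

```python
import numpy as np

def detect_transients(frames, k_sigma=5.0, min_pixels=10):
    """Flag frames containing bright moving objects (meteor candidates)
    by differencing each frame against a running median sky background.
    Spurious detections (clouds, aircraft, noise) still need the human
    vetting step described above."""
    stack = np.stack(frames).astype(np.float64)   # (N, H, W)
    background = np.median(stack, axis=0)
    noise = stack.std(axis=0).mean() + 1e-9       # crude global noise scale
    hits = []
    for i, frame in enumerate(stack):
        diff = frame - background
        if (diff > k_sigma * noise).sum() > min_pixels:
            hits.append(i)
    return hits
```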
Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.
Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio
2009-01-01
3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, some experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and are reported in this paper. Two main aspects are treated. The first is the calibration of the distance measurements of the SR-4000 camera, covering the camera warm-up period, the evaluation of distance measurement errors, and a study of the influence of camera orientation with respect to the observed object on the distance measurements. The second is the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high-contrast targets.
Analysis of Camera Arrays Applicable to the Internet of Things.
Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing
2016-03-22
The Internet of Things is built on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is very important for stereo display and comfortable viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used and analyzed in various applications and research work, there are few direct comparisons between them. Therefore, we make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold shooting distance for converged cameras is 7 m. In addition, we design a camera array that can be used both as a parallel camera array and as a converged camera array, and take images and videos with it to verify the threshold.
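The difference between the two geometries can be illustrated with pinhole projections: a parallel rig yields purely horizontal disparity, while toed-in ("converged") cameras introduce vertical parallax off the symmetry plane. A minimal sketch, with sign conventions and parameters chosen for illustration rather than taken from the paper:

```python
import numpy as np

def parallel_disparity(Z, baseline, focal):
    """Horizontal disparity of a parallel stereo pair for a point at depth Z.
    Parallel rigs produce no vertical parallax; disparity falls off as 1/Z."""
    return focal * baseline / Z

def converged_vertical_parallax(point, baseline, focal, conv_angle):
    """Vertical parallax introduced by toed-in cameras.

    point is (X, Y, Z) in the rig midpoint frame; conv_angle is the inward
    rotation of each camera about the vertical axis (radians). The rotation
    shears off-axis points differently in the two views."""
    X, Y, Z = point

    def project(cx, angle):
        # Rotate the world point into a camera at x=cx yawed by `angle`.
        xc = (X - cx) * np.cos(angle) - Z * np.sin(angle)
        zc = (X - cx) * np.sin(angle) + Z * np.cos(angle)
        return focal * xc / zc, focal * Y / zc

    (_, y_left) = project(-baseline / 2.0, +conv_angle)
    (_, y_right) = project(+baseline / 2.0, -conv_angle)
    return y_left - y_right   # zero only on the rig's symmetry plane
```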
Real-time vehicle matching for multi-camera tunnel surveillance
NASA Astrophysics Data System (ADS)
Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried
2011-03-01
Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras, each observing dozens of vehicles, computational efficiency is essential for real-time performance. In this paper, we propose a low-complexity, yet highly accurate method for vehicle matching using vehicle signatures composed of Radon-transform-like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm by the camera software itself and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
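A minimal sketch of signature extraction and matching in the spirit described above, using row and column projection profiles as a stand-in for the paper's Radon-transform-like profiles (the bin count and normalization are illustrative choices):

```python
import numpy as np

def signature(img: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Projection-profile signature of a grayscale vehicle image.

    Row and column sums (computable by a simple scan-line pass) are
    resampled to a fixed length and normalized so that signatures from
    different cameras and crop sizes are comparable."""
    rows = img.sum(axis=1).astype(np.float64)
    cols = img.sum(axis=0).astype(np.float64)

    def resample(p):
        x = np.linspace(0.0, 1.0, n_bins)
        xp = np.linspace(0.0, 1.0, p.size)
        p = np.interp(x, xp, p)
        return (p - p.mean()) / (p.std() + 1e-9)

    return np.concatenate([resample(rows), resample(cols)])

def match_score(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """Normalized correlation between two signatures; higher = better match."""
    return float(np.dot(sig_a, sig_b)) / sig_a.size
```

Only the compact signature vector, not the image, needs to cross the data link, which is the source of the bandwidth saving the abstract mentions.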
NASA Astrophysics Data System (ADS)
Griffiths, Andrew; Coates, Andrew; Muller, Jan-Peter; Jaumann, Ralf; Josset, Jean-Luc; Paar, Gerhard; Barnes, David
2010-05-01
The ExoMars mission has evolved into a joint European-US mission to deliver a trace gas orbiter and a pair of rovers to Mars in 2016 and 2018 respectively. The European rover will carry the Pasteur exobiology payload including the 1.56 kg Panoramic Camera. PanCam will provide multispectral stereo images from Wide-Angle Cameras (WAC) with a 34 deg horizontal field-of-view (580 microrad/pixel) and colour monoscopic "zoom" images from a High Resolution Camera (HRC) with a 5 deg horizontal field-of-view (83 microrad/pixel). The stereo Wide-Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage [1]. Integrated with the WACs and HRC into the PanCam optical bench (which helps the instrument meet its planetary protection requirements) is the PanCam interface unit (PIU), which provides image storage, a SpaceWire interface to the rover and DC-DC power conversion. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission [2] as well as providing multispectral geological imaging, colour and stereo panoramic images and solar images for water vapour abundance and dust optical depth measurements. The High Resolution Camera (HRC) can be used for high resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls. Additionally, HRC will be used to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. In short, PanCam provides the overview and context for the ExoMars experiment locations, required to enable the exobiology aims of the mission. In addition to these baseline capabilities, further enhancements are possible to increase PanCam's effectiveness for astrobiology and planetary exploration: 1. Rover Inspection Mirror (RIM); 2. Organics Detection by Fluorescence Excitation (ODFE) LEDs [3-6]; 3. UVIS broadband UV Flux and Opacity Determination (UVFOD) photodiode. This paper will discuss the scientific objectives and resource impacts of these enhancements. References: 1. Griffiths, A.D., Coates, A.J., Josset, J.-L., Paar, G., Hofmann, B., Pullan, D., Ruffer, P., Sims, M.R., Pillinger, C.T., The Beagle 2 stereo camera system, Planet. Space Sci. 53, 1466-1488, 2005. 2. Paar, G., Oberst, J., Barnes, D.P., Griffiths, A.D., Jaumann, R., Coates, A.J., Muller, J.P., Gao, Y., Li, R., 2007, Requirements and Solutions for ExoMars Rover Panoramic Camera 3d Vision Processing, abstract submitted to EGU meeting, Vienna, 2007. 3. Storrie-Lombardi, M.C., Hug, W.F., McDonald, G.D., Tsapin, A.I., and Nealson, K.H. 2001. Hollow cathode ion lasers for deep ultraviolet Raman spectroscopy and fluorescence imaging. Rev. Sci. Ins., 72 (12), 4452-4459. 4. Nealson, K.H., Tsapin, A., and Storrie-Lombardi, M. 2002. Searching for life in the universe: unconventional methods for an unconventional problem. International Microbiology, 5, 223-230. 5. Mormile, M.R. and Storrie-Lombardi, M.C. 2005. The use of ultraviolet excitation of native fluorescence for identifying biomarkers in halite crystals. Astrobiology and Planetary Missions (R. B. Hoover, G. V. Levin and A. Y. Rozanov, Eds.), Proc. SPIE, 5906, 246-253. 6. Storrie-Lombardi, M.C. 2005. Post-Bayesian strategies to optimize astrobiology instrument suites: lessons from Antarctica and the Pilbara. Astrobiology and Planetary Missions (R. B. Hoover, G. V. Levin and A. Y. Rozanov, Eds.), Proc. SPIE, 5906, 288-301.
Application of selected methods of remote sensing for detecting carbonaceous water pollution
NASA Technical Reports Server (NTRS)
Davis, E. M.; Fosbury, W. J.
1973-01-01
A reach of the Houston Ship Channel was investigated during three separate overflights correlated with ground truth sampling on the Channel. Samples were analyzed for such conventional parameters as biochemical oxygen demand, chemical oxygen demand, total organic carbon, total inorganic carbon, turbidity, chlorophyll, pH, temperature, dissolved oxygen, and light penetration. Infrared analyses conducted on each sample included reflectance ATR analysis, carbon tetrachloride extraction of organics and subsequent scanning, and KBr evaporate analysis of CCl4 extract concentrate. Imagery which was correlated with field and laboratory data developed from ground truth sampling included that obtained from aerial KA62 hardware, RC-8 metric camera systems, and the RS-14 infrared scanner. The images were subjected to analysis by three film density gradient interpretation units. Data were then analyzed for correlations between imagery interpretation as derived from the three instruments and laboratory infrared signatures and other pertinent field and laboratory analyses.
Spectral imaging spreads into new industrial and on-field applications
NASA Astrophysics Data System (ADS)
Bouyé, Clémentine; Robin, Thierry; d'Humières, Benoît
2018-02-01
Numerous recent innovative developments have led to a sharp reduction in the cost and size of hyperspectral and multispectral cameras. The resulting products - compact, reliable, low-cost, easy-to-use - meet end-user requirements in major fields: agriculture, food and beverages, pharmaceutics, machine vision, health. Large-scale adoption of this technology in industrial and on-field applications is getting closer. Indeed, the spectral imaging market is at a turning point: a high growth rate of 20% is expected over the next 5 years, and the number of cameras sold will increase from 3,600 in 2017 to more than 9,000 in 2022.
UKIRT's Wide Field Camera and the Detection of 10 M_Jupiter Objects
NASA Astrophysics Data System (ADS)
WFCAM Team; UKIDSS Team
2004-06-01
In mid-2004 a near-infrared wide-field camera will be commissioned on UKIRT. About 40% of all UKIRT time will go into sky surveys, and one of these, the Large Area Survey using YJHK filters, will extend the field brown dwarf population to temperatures and masses significantly lower than those of the T dwarf population discovered by the Sloan and 2MASS surveys. The LAS should find objects as cool as 450 K and as low in mass as 10 M_Jupiter at 10 pc. These planetary-mass objects will possibly require a new spectral type designation.
NASA Technical Reports Server (NTRS)
Nabors, Sammy
2015-01-01
NASA offers companies an optical system that provides a unique panoramic perspective with a single camera. NASA's Marshall Space Flight Center has developed a technology that combines a panoramic refracting optic (PRO) lens with a unique detection system to acquire a true 360-degree field of view. Although current imaging systems can acquire panoramic images, they must use up to five cameras to obtain the full field of view. MSFC's technology obtains its panoramic images from one vantage point.
Low power multi-camera system and algorithms for automated threat detection
NASA Astrophysics Data System (ADS)
Huber, David J.; Khosla, Deepak; Chen, Yang; Van Buer, Darrel J.; Martin, Kevin
2013-05-01
A key to any robust automated surveillance system is continuous, wide field-of-view sensor coverage and high-accuracy target detection algorithms. Newer systems typically employ an array of multiple fixed cameras that provide individual data streams, each of which is managed by its own processor. This array can continuously capture the entire field of view, but collecting all the data and running the back-end detection algorithm consumes additional power and increases the size, weight, and power (SWaP) of the package. This is often unacceptable, as many potential surveillance applications have strict system SWaP requirements. This paper describes a wide field-of-view video system that employs multiple fixed cameras and exhibits low SWaP without compromising the target detection rate. We cycle through the sensors, fetch a fixed number of frames, and process them through a modified target detection algorithm. During this time, the other sensors remain powered down, which reduces the required hardware and power consumption of the system. We show that the resulting gaps in coverage and irregular frame rate do not affect the detection accuracy of the underlying algorithms. This reduces the power of an N-camera system by up to approximately N-fold compared to baseline normal operation. This work was applied to Phase 2 of the DARPA Cognitive Technology Threat Warning System (CT2WS) program and used during field testing.
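The sensor-cycling scheme reads naturally as a round-robin generator; the sketch below assumes hypothetical camera objects with power_on()/power_off()/grab_frame() methods and is an illustration of the idea, not the CT2WS implementation:

```python
import itertools

def duty_cycled_capture(cameras, frames_per_burst=8, detector=None):
    """Round-robin polling: power one camera at a time, grab a short burst,
    run detection, then power it down, so an N-camera array draws roughly
    the power of a single active sensor.

    cameras  : objects assumed to expose power_on(), power_off(), grab_frame()
    detector : callable consuming a list of frames and returning detections
    """
    for cam in itertools.cycle(cameras):
        cam.power_on()
        frames = [cam.grab_frame() for _ in range(frames_per_burst)]
        cam.power_off()                      # all other sensors stay dark
        if detector is not None:
            yield cam, detector(frames)      # per-burst detection results
```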
Signal-to-noise ratio for the wide field-planetary camera of the Space Telescope
NASA Technical Reports Server (NTRS)
Zissa, D. E.
1984-01-01
Signal-to-noise ratios for the Wide Field Camera and Planetary Camera of the Space Telescope were calculated as a function of integration time. Models of the optical systems and CCD detector arrays were used with a 27th visual magnitude point source and a 25th visual magnitude per square arc-second extended source. A 23rd visual magnitude per square arc-second background was assumed. The models predicted signal-to-noise ratios of 10 within 4 hours for the point source centered on a single pixel. Signal-to-noise ratios approaching 10 are estimated for approximately 0.25 x 0.25 arc-second areas within the extended source after 10 hours of integration.
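Calculations of this kind follow the standard CCD signal-to-noise relation; a minimal sketch under assumed source, sky, dark-current, and read-noise rates (all parameter names are illustrative; the paper's detailed optical and detector models are not reproduced here):

```python
import numpy as np

def ccd_snr(source_rate, background_rate, dark_rate, read_noise, n_pix, t):
    """Point-source CCD signal-to-noise ratio after integration time t (s).

    source_rate     : source electrons/s collected in the aperture
    background_rate : sky electrons/s/pixel
    dark_rate       : dark-current electrons/s/pixel
    read_noise      : RMS read noise, electrons/pixel
    n_pix           : pixels in the photometric aperture
    """
    S = source_rate * t
    noise = np.sqrt(S + n_pix * (background_rate * t + dark_rate * t
                                 + read_noise**2))
    return S / noise

def time_to_snr(target_snr, **rates):
    """Coarse scan over integration times until the target SNR is reached."""
    for t in np.logspace(0, 6, 400):      # 1 s .. ~11.6 days
        if ccd_snr(t=t, **rates) >= target_snr:
            return t
    return None

# Usage: time_to_snr(10, source_rate=0.05, background_rate=0.01,
#                    dark_rate=0.005, read_noise=13, n_pix=1)
```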
VizieR Online Data Catalog: PHAT X. UV-IR photometry of M31 stars (Williams+, 2014)
NASA Astrophysics Data System (ADS)
Williams, B. F.; Lang, D.; Dalcanton, J. J.; Dolphin, A. E.; Weisz, D. R.; Bell, E. F.; Bianchi, L.; Byler, N.; Gilbert, K. M.; Girardi, L.; Gordon, K.; Gregersen, D.; Johnson, L. C.; Kalirai, J.; Lauer, T. R.; Monachesi, A.; Rosenfield, P.; Seth, A.; Skillman, E.
2015-01-01
The data for the Panchromatic Hubble Andromeda Treasury (PHAT) survey were obtained from 2010 July 12 to 2013 October 12 using the Advanced Camera for Surveys (ACS) Wide Field Channel (WFC), the Wide Field Camera 3 (WFC3) IR (infrared) channel, and the WFC3 UVIS (ultraviolet-optical) channel. The observing strategy is described in detail in Dalcanton et al. (2012ApJS..200...18D). A list of the target names, observing dates, coordinates, orientations, instruments, exposure times, and filters is given in Table 1. Using the ACS and WFC3 cameras aboard HST, we have photometered 414 contiguous WFC3/IR footprints covering 0.5 deg^2 of the M31 star-forming disk. (4 data files).
Super-resolution in a defocused plenoptic camera: a wave-optics-based approach.
Sahin, Erdem; Katkovnik, Vladimir; Gotchev, Atanas
2016-03-01
Plenoptic cameras enable the capture of a light field with a single device. However, with traditional light field rendering procedures, they can provide only low-resolution two-dimensional images. Super-resolution is considered to overcome this drawback. In this study, we present a super-resolution method for the defocused plenoptic camera (Plenoptic 1.0), where the imaging system is modeled using wave optics principles and utilizing low-resolution depth information of the scene. We are particularly interested in super-resolution of in-focus and near in-focus scene regions, which constitute the most challenging cases. The simulation results show that the employed wave-optics model makes super-resolution possible for such regions as long as sufficiently accurate depth information is available.
Jarc, Anthony M; Curet, Myriam J
2017-03-01
Effective visualization of the operative field is vital to surgical safety and education. However, additional metrics for visualization are needed to complement other common measures of surgeon proficiency, such as time or errors. Unlike other surgical modalities, robot-assisted minimally invasive surgery (RAMIS) enables data-driven feedback to trainees through measurement of camera adjustments. The purpose of this study was to validate and quantify the importance of novel camera metrics during RAMIS. New (n = 18), intermediate (n = 8), and experienced (n = 13) surgeons completed 25 virtual reality simulation exercises on the da Vinci Surgical System. Three camera metrics were computed for all exercises and compared to conventional efficiency measures. Both camera metrics and efficiency metrics showed construct validity (p < 0.05) across most exercises (camera movement frequency 23/25, camera movement duration 22/25, camera movement interval 19/25, overall score 24/25, completion time 25/25). Camera metrics differentiated new and experienced surgeons across all tasks as effectively as efficiency metrics did. Finally, camera metrics correlated significantly (p < 0.05) with completion time (camera movement frequency 21/25, camera movement duration 21/25, camera movement interval 20/25) and overall score (camera movement frequency 20/25, camera movement duration 19/25, camera movement interval 20/25) for most exercises. We demonstrate construct validity of novel camera metrics and correlation between camera metrics and efficiency metrics across many simulation exercises. We believe camera metrics could be used to improve RAMIS proficiency-based curricula.
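The three camera metrics lend themselves to a simple event-log computation; the sketch below assumes camera movements are available as (start, end) time intervals, which is an assumption about the data format rather than the paper's published definitions:

```python
def camera_metrics(move_events, task_duration_s):
    """Per-exercise camera metrics from a camera-movement log.

    move_events     : list of (start_s, end_s) camera-movement intervals
    task_duration_s : total exercise time in seconds
    """
    if not move_events:
        return {"movement_frequency": 0.0,
                "mean_movement_duration": 0.0,
                "mean_movement_interval": task_duration_s}
    durations = [end - start for start, end in move_events]
    gaps = [move_events[i + 1][0] - move_events[i][1]
            for i in range(len(move_events) - 1)]
    return {
        "movement_frequency": len(move_events) / task_duration_s,  # moves/s
        "mean_movement_duration": sum(durations) / len(durations),
        "mean_movement_interval": (sum(gaps) / len(gaps)
                                   if gaps else task_duration_s),
    }
```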
TIFR Near Infrared Imaging Camera-II on the 3.6 m Devasthal Optical Telescope
NASA Astrophysics Data System (ADS)
Baug, T.; Ojha, D. K.; Ghosh, S. K.; Sharma, S.; Pandey, A. K.; Kumar, Brijesh; Ghosh, Arpan; Ninan, J. P.; Naik, M. B.; D’Costa, S. L. A.; Poojary, S. S.; Sandimani, P. R.; Shah, H.; Krishna Reddy, B.; Pandey, S. B.; Chand, H.
Tata Institute of Fundamental Research (TIFR) Near Infrared Imaging Camera-II (TIRCAM2) is a closed-cycle helium cryo-cooled imaging camera equipped with a Raytheon 512×512-pixel InSb Aladdin III Quadrant focal plane array (FPA) sensitive to photons in the 1-5 μm wavelength band. In this paper, we present the performance of the camera on the newly installed 3.6 m Devasthal Optical Telescope (DOT) based on the calibration observations carried out during 2017 May 11-14 and 2017 October 7-31. After the preliminary characterization, the camera has been released to the Indian and Belgian astronomical community for science observations since 2017 May. The camera offers a field-of-view (FoV) of ~86.5″ × 86.5″ on the DOT with a pixel scale of 0.169″. The seeing at the telescope site in the near-infrared (NIR) bands is typically sub-arcsecond, with the best seeing of ~0.45″ realized in the NIR K-band on 2017 October 16. The camera is found to be capable of deep observations in the J, H and K bands comparable to other 4 m class telescopes available worldwide. Another highlight of this camera is the observational capability for sources up to Wide-field Infrared Survey Explorer (WISE) W1-band (3.4 μm) magnitudes of 9.2 in the narrow L-band (nbL; λcen ~ 3.59 μm). Hence, the camera could be a good complementary instrument to observe the bright nbL-band sources that are saturated in the Spitzer Infrared Array Camera (IRAC) ([3.6] ≲ 7.92 mag) and the WISE W1-band ([3.4] ≲ 8.1 mag). Sources with strong polycyclic aromatic hydrocarbon (PAH) emission at 3.3 μm are also detected. Details of the observations and estimated parameters are presented in this paper.
NASA Astrophysics Data System (ADS)
Ogawa, Kazunori; Shirai, Kei; Sawada, Hirotaka; Arakawa, Masahiko; Honda, Rie; Wada, Koji; Ishibashi, Ko; Iijima, Yu-ichi; Sakatani, Naoya; Nakazawa, Satoru; Hayakawa, Hajime
2017-07-01
An artificial impact experiment is scheduled for 2018-2019, in which an impactor will collide with asteroid 162173 Ryugu (1999 JU3) during the asteroid rendezvous phase of the Hayabusa2 spacecraft. The Small Carry-on Impactor (SCI) will shoot a 2-kg projectile at 2 km/s to create a crater 1-10 m in diameter, with an expected subsequent ejecta curtain on a 100-m scale for an ideal sandy surface. A miniaturized deployable camera (DCAM3) unit will separate from the spacecraft at about 1 km from the impact point and simultaneously conduct optical observations of the experiment. We designed and developed a camera system (DCAM3-D) in the DCAM3, specialized for scientific observations of the impact phenomenon, in order to clarify the subsurface structure, construct theories of impact applicable in a microgravity environment, and identify the impact point on the asteroid. The DCAM3-D system consists of a miniaturized camera with a wide angle and high focusing performance, high-speed radio communication devices, and control units with large data storage on both the DCAM3 unit and the spacecraft. These components were successfully developed under severe constraints of size, mass and power, and the whole DCAM3-D system has passed all tests verifying functions, performance, and environmental tolerance. Results indicated sufficient potential to conduct the scientific observations during the SCI impact experiment. An operation plan was carefully considered along with the configuration and time schedule of the impact experiment, and pre-programmed into the control unit before launch. In this paper, we describe details of the system design concept, specifications, and the operating plan of the DCAM3-D system, focusing on the feasibility of scientific observations.
NASA Astrophysics Data System (ADS)
Payne, L.; Haas, J. P.; Linard, D.; White, L.
1997-12-01
The Laboratory for Astronomy and Solar Physics at Goddard Space Flight Center uses a variety of imaging sensors for its instrumentation programs. This paper describes the detector system for SERTS. The SERTS rocket telescope uses an open-faceplate, single-plate MCP tube as the primary detector for EUV spectra from the Sun. The optical output of this detector is fiber-optically coupled to a cooled, large-format CCD. This CCD is operated using a software-controlled camera controller based upon a design used for the SOHO/CDS mission. This camera is a general-purpose design, with a topology that supports multiple types of imaging devices. Multiport devices (up to 4 ports) and multiphase clocks are supported, as well as variable-speed operation. Clock speeds from 100 kHz to 1 MHz have been used, and the topology is currently being extended to support 10 MHz operation. The form factor for the camera system is based on the popular VME bus. Because the tube is an open-faceplate design, the detector system has an assortment of vacuum doors and plumbing to allow operation in vacuum but provide for safe storage at normal atmosphere. Vac-ion pumps (3) are used to maintain working vacuum at all times. Marshall Space Flight Center provided the SERTS program with HVPS units for both the vac-ion pumps and the MCP tube. The MCP tube HVPS is a direct derivative of the design used for the SXI mission for NOAA. Auxiliary equipment includes a frame buffer that works either as a multi-frame storage unit or as a photon-counting accumulation unit. This unit also performs interface buffering so that the camera may appear as a piece of GPIB instrumentation.
Mars Exploration Rover engineering cameras
Maki, J.N.; Bell, J.F.; Herkenhoff, K. E.; Squyres, S. W.; Kiely, A.; Klimesh, M.; Schwochert, M.; Litwin, T.; Willson, R.; Johnson, Aaron H.; Maimone, M.; Baumgartner, E.; Collins, A.; Wadsworth, M.; Elliot, S.T.; Dingizian, A.; Brown, D.; Hagerott, E.C.; Scherr, L.; Deen, R.; Alexander, D.; Lorre, J.
2003-01-01
NASA's Mars Exploration Rover (MER) Mission will place a total of 20 cameras (10 per rover) onto the surface of Mars in early 2004. Fourteen of the 20 cameras are designated as engineering cameras and will support the operation of the vehicles on the Martian surface. Images returned from the engineering cameras will also be of significant importance to the scientific community for investigative studies of rock and soil morphology. The Navigation cameras (Navcams, two per rover) are a mast-mounted stereo pair each with a 45° square field of view (FOV) and an angular resolution of 0.82 milliradians per pixel (mrad/pixel). The Hazard Avoidance cameras (Hazcams, four per rover) are a body-mounted, front- and rear-facing set of stereo pairs, each with a 124° square FOV and an angular resolution of 2.1 mrad/pixel. The Descent camera (one per rover), mounted to the lander, has a 45° square FOV and will return images with spatial resolutions of ~4 m/pixel. All of the engineering cameras utilize broadband visible filters and 1024 x 1024 pixel detectors. Copyright 2003 by the American Geophysical Union.
Surveillance of a 2D Plane Area with 3D Deployed Cameras
Fu, Yi-Ge; Zhou, Jie; Deng, Lei
2014-01-01
As the use of camera networks has expanded, camera placement to satisfy quality assurance parameters (such as a good coverage ratio, acceptable resolution constraints, and a cost as low as possible) has become an important problem. The discrete camera deployment problem is NP-hard, and many heuristic methods have been proposed to solve it, most of which make very simple assumptions. In this paper, we propose a probability-inspired binary Particle Swarm Optimization (PI-BPSO) algorithm to solve a homogeneous camera network placement problem. We model the problem under some more realistic assumptions: (1) deploy the cameras in 3D space while the surveillance area is restricted to a 2D ground plane; (2) deploy the minimal number of cameras to get maximum visual coverage under additional constraints, such as the field of view (FOV) of the cameras and a minimum resolution. We can simultaneously optimize the number and the configuration of the cameras through the introduction of a regularization term in the cost function. The simulation results showed the effectiveness of the proposed PI-BPSO algorithm. PMID:24469353
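The regularization idea can be made concrete with a toy fitness function for a binary placement vector; the visibility masks and the weight lam are assumptions for illustration, not the paper's exact cost:

```python
import numpy as np

def coverage_fitness(selected: np.ndarray, cam_masks: np.ndarray,
                     lam: float = 0.1) -> float:
    """Fitness for a binary camera-placement chromosome.

    selected  : boolean vector, one entry per candidate camera pose
    cam_masks : (n_cameras, H, W) boolean array -- the ground-plane cells
                each candidate covers after FOV and minimum-resolution checks
    lam       : regularization weight trading coverage against camera count,
                so number and configuration are optimized together
    """
    if not selected.any():
        return 0.0
    covered = cam_masks[selected].any(axis=0)   # union of selected footprints
    coverage_ratio = covered.mean()
    return coverage_ratio - lam * selected.sum() / len(selected)
```

A binary PSO (or any other heuristic) can then evaluate candidate selection vectors with this function, penalizing solutions that buy small coverage gains with many extra cameras.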
Optical design of portable nonmydriatic fundus camera
NASA Astrophysics Data System (ADS)
Chen, Weilin; Chang, Jun; Lv, Fengxian; He, Yifan; Liu, Xin; Wang, Dajiang
2016-03-01
Fundus cameras are widely used in screening and diagnosis of retinal disease, and are simple, widely deployed medical equipment. Early fundus cameras expanded the pupil with a mydriatic to increase the amount of incoming light, which made patients feel vertigo and experience blurred vision. Nonmydriatic operation is the trend in fundus camera design. A desktop fundus camera is not easy to carry and is only suitable for use in the hospital, whereas a portable nonmydriatic retinal camera is convenient for patient self-examination or for medical staff visiting a patient at home. This paper presents a portable nonmydriatic fundus camera with a field of view (FOV) of 40°. Two kinds of light source are used: 590 nm light for imaging, and 808 nm light for observing the fundus at high resolving power. Ring lights and a hollow mirror are employed to suppress stray light from the cornea center. The focus of the camera is adjusted by repositioning the CCD along the optical axis. The diopter range covered is between −20 m⁻¹ and +20 m⁻¹.
Fuzzy logic control for camera tracking system
NASA Technical Reports Server (NTRS)
Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant
1992-01-01
A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
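The pan/tilt command generation described above can be illustrated with a toy fuzzy controller for one axis; the membership shapes, rule set, and output singletons below are hypothetical, not the flight system's:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pan_rate(err_x):
    """Map a normalized horizontal target offset (-1..1, image center = 0)
    to a pan rate via three rules: Left -> pan left, Center -> hold,
    Right -> pan right. Defuzzified as a weighted average of singleton
    outputs at -1, 0, +1."""
    mu = {
        -1.0: tri(err_x, -2.0, -1.0, 0.0),   # target left of center
         0.0: tri(err_x, -0.5,  0.0, 0.5),   # target near center
         1.0: tri(err_x,  0.0,  1.0, 2.0),   # target right of center
    }
    num = sum(rate * m for rate, m in mu.items())
    den = sum(mu.values()) or 1.0
    return num / den

# fuzzy_pan_rate(-1.0) -> -1.0 (full pan left); fuzzy_pan_rate(0.0) -> 0.0
```

The same structure, duplicated for the vertical error, yields the tilt command; a fuzzy chip evaluates exactly this kind of rule table in hardware.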
SFDT-1 Camera Pointing and Sun-Exposure Analysis and Flight Performance
NASA Technical Reports Server (NTRS)
White, Joseph; Dutta, Soumyo; Striepe, Scott
2015-01-01
The Supersonic Flight Dynamics Test (SFDT) vehicle was developed to advance and test technologies of NASA's Low Density Supersonic Decelerator (LDSD) Technology Demonstration Mission. The first flight test (SFDT-1) occurred on June 28, 2014. To maximize the usefulness of the camera data, analysis was performed to optimize parachute visibility in the camera field of view during deployment and inflation, and to determine the probability of sun-exposure issues with the cameras given the vehicle heading and launch time. This paper documents the analysis, its results, and comparison with flight video from SFDT-1.
ORAC-DR: One Pipeline for Multiple Telescopes
NASA Astrophysics Data System (ADS)
Cavanagh, B.; Hirst, P.; Jenness, T.; Economou, F.; Currie, M. J.; Todd, S.; Ryder, S. D.
ORAC-DR, a flexible and extensible data reduction pipeline, has been successfully used for real-time data reduction from UFTI and IRCAM (infrared cameras), CGS4 (near-infrared spectrometer), Michelle (mid-infrared imager and echelle spectrometer), at UKIRT; and SCUBA (sub-millimeter bolometer array) at JCMT. We have now added the infrared imaging spectrometers IRIS2 at the Anglo-Australian Telescope and UIST at UKIRT to the list of officially supported instruments. We also present initial integral field unit support for UIST, along with unofficial support for the imager and multi-object spectrograph GMOS at Gemini. This paper briefly describes features of the pipeline along with details of adopting ORAC-DR for other instruments on telescopes around the world.
Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model
NASA Astrophysics Data System (ADS)
Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.
2015-03-01
The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 sec for each estimation of pose), which can be improved by implementation in C++. Error analysis produced 3-mm distance error and 2.5 degrees of orientation error on average. The sources of these errors are 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
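At the core of pose recovery of this kind is minimizing the reprojection error between model points and matched image features; a minimal pinhole-model residual sketch (the constrained bundle adjustment itself, and any lens-distortion terms, are omitted):

```python
import numpy as np

def reprojection_residual(pose_R, pose_t, points_3d, points_2d, focal):
    """Residuals minimized in bundle-adjustment-style pose recovery.

    pose_R    : (3, 3) rotation, world -> camera
    pose_t    : (3,)   translation, world -> camera
    points_3d : (N, 3) model points matched in the virtual surgical field
    points_2d : (N, 2) corresponding feature locations in the video frame
    focal     : focal length in pixel units (intrinsics assumed calibrated)
    """
    P = (pose_R @ points_3d.T).T + pose_t      # transform into camera frame
    proj = focal * P[:, :2] / P[:, 2:3]        # pinhole perspective division
    return (proj - points_2d).ravel()          # stacked x/y residuals
```

A nonlinear least-squares solver (e.g. Levenberg-Marquardt over the six pose parameters) drives these residuals toward zero to recover position and orientation.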
NASA Astrophysics Data System (ADS)
Bennett, K. E.; Cherry, J. E.; Hiemstra, C. A.; Bolton, W. R.
2013-12-01
Interior sub-Arctic Alaskan snow cover is rapidly changing and requires further study for correct parameterization in physically based models. This project undertook field studies during the 2013 snow melt season to capture snow depth, snow temperature profiles, and snow cover extent to compare with observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor at four different sites underlain by discontinuous permafrost. The 2013 melt season, which turned out to be the latest snow melt period on record, was monitored using manual field measurements (SWE, snow depth data collection), iButtons to record temperature of the snow pack, GoPro cameras to capture time lapse of the snow melt, and low level orthoimagery collected at ~1500 m using a Navion L17a plane mounted with a Nikon D3s camera. Sites were selected across a range of landscape conditions, including a north facing black spruce hill slope, a south facing birch forest, an open tundra site, and a high alpine meadow. Initial results from the adjacent north and south facing sites indicate a highly sensitive system where snow cover melts over just a few days, illustrating the importance of high resolution temporal data capture at these locations. Field observations, iButtons and GoPro cameras show that the MODIS data captures the melt conditions at the south and the north site with accuracy (2.5% and 6.5% snow cover fraction present on date of melt, respectively), but MODIS data for the north site is less variable around the melt period, owing to open conditions and sparse tree cover. However, due to the rapid melt rate trajectory, shifting the melt date estimate by a day results in a doubling of the snow cover fraction estimate observed by MODIS. This information can assist in approximating uncertainty associated with remote sensing data that is being used to populate hydrologic and snow models (the Sacramento Soil Moisture Accounting model, coupled with SNOW-17, and the Variable Infiltration Capacity hydrologic model) and provide greater understanding of error and resultant model sensitivities associated with regional observations of snow cover across the sub-Arctic boreal landscape.
NASA Astrophysics Data System (ADS)
Ono, Yoshiaki; Ouchi, Masami; Shimasaku, Kazuhiro; Dunlop, James; Farrah, Duncan; McLure, Ross; Okamura, Sadanori
2010-12-01
We investigate the stellar populations of Lyα emitters (LAEs) at z = 5.7 and 6.6 in a 0.65 deg^2 sky area of the Subaru/XMM-Newton Deep Survey (SXDS) Field, using deep images taken with the Subaru/Suprime-Cam, United Kingdom Infrared Telescope/Wide Field Infrared Camera, and Spitzer/Infrared Array Camera (IRAC). We produce stacked multiband images at each redshift from 165 (z = 5.7) and 91 (z = 6.6) IRAC-undetected objects to derive typical spectral energy distributions (SEDs) of z ~ 6-7 LAEs for the first time. The stacked LAEs have UV continua as blue as those of the Hubble Space Telescope (HST)/Wide Field Camera 3 (WFC3) z-dropout galaxies of similar M_UV, with a spectral slope β ~ -3, but at the same time they have red UV-to-optical colors with detection in the 3.6 μm band. Using SED fitting we find that the stacked LAEs have low stellar masses of ~(3-10) × 10^7 M_sun, very young ages of ~1-3 Myr, negligible dust extinction, and strong nebular emission from the ionized interstellar medium, although the z = 6.6 object is fitted similarly well with high-mass models without nebular emission; inclusion of nebular emission reproduces the red UV-to-optical colors while keeping the UV colors sufficiently blue. We infer that typical LAEs at z ~ 6-7 are building blocks of galaxies seen at lower redshifts. We find a tentative decrease in the Lyα escape fraction from z = 5.7 to 6.6, which may imply an increase in the intergalactic medium neutral fraction. From the minimum contribution of nebular emission required to fit the observed SEDs, we place an upper limit on the escape fraction of ionizing photons of f_esc^ion ~ 0.6 at z = 5.7 and ~0.9 at z = 6.6. We also compare the stellar populations of our LAEs with those of stacked HST/WFC3 z-dropout galaxies. Based on data collected at the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.
3-D Velocimetry of Strombolian Explosions
NASA Astrophysics Data System (ADS)
Taddeucci, J.; Gaudin, D.; Orr, T. R.; Scarlato, P.; Houghton, B. F.; Del Bello, E.
2014-12-01
Using two synchronized high-speed cameras we were able to reconstruct the three-dimensional displacement and velocity field of bomb-sized pyroclasts in Strombolian explosions at Stromboli Volcano. Relatively low-intensity Strombolian-style activity offers a rare opportunity to observe volcanic processes that remain hidden from view during more violent explosive activity. Such processes include the ejection and emplacement of bomb-sized clasts along pure or drag-modified ballistic trajectories, in-flight bomb collision, and gas liberation dynamics. High-speed imaging of Strombolian activity has already opened new windows for the study of the abovementioned processes, but to date has only utilized two-dimensional analysis with limited motion detection and ability to record motion towards or away from the observer. To overcome this limitation, we deployed two synchronized high-speed video cameras at Stromboli. The two cameras, located sixty meters apart, filmed Strombolian explosions at 500 and 1000 frames per second and with different resolutions. Frames from the two cameras were pre-processed and combined into a single video showing frames alternating from one to the other camera. Bomb-sized pyroclasts were then manually identified and tracked in the combined video, together with fixed reference points located as close as possible to the vent. The results from manual tracking were fed to a custom software routine that, knowing the relative position of the vent and cameras, and the field of view of the latter, provided the position of each bomb relative to the reference points. By tracking tens of bombs over five to ten frames at different intervals during one explosion, we were able to reconstruct the three-dimensional evolution of the displacement and velocity fields of bomb-sized pyroclasts during individual Strombolian explosions. Shifting jet directivity and dispersal angle clearly appear from the three-dimensional analysis.
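The geometric core of the method above -- recovering a 3-D pyroclast position from tracked pixel positions in two synchronized views -- can be sketched as a two-ray triangulation. The sketch below is illustrative, not the authors' routine: it assumes the camera centers and the viewing direction toward each tracked bomb (derived from pixel coordinates and the cameras' fields of view) are already known, and returns the midpoint of the shortest segment joining the two rays.

```python
import numpy as np

def triangulate_rays(c1, d1, c2, d2):
    """Midpoint of the shortest segment joining two viewing rays.

    c1, c2 : 3-vectors, camera positions (e.g. vent-relative, in metres).
    d1, d2 : direction vectors from each camera toward the tracked bomb.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = c2 - c1
    a, c, e = d1 @ d2, d1 @ b, d2 @ b
    denom = 1.0 - a * a                 # rays must not be parallel
    t1 = (c - a * e) / denom            # closest-point parameter on ray 1
    t2 = (a * c - e) / denom            # closest-point parameter on ray 2
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# Two cameras 60 m apart viewing a point near the vent: exact recovery
cam1, cam2 = np.array([0.0, 0.0, 0.0]), np.array([60.0, 0.0, 0.0])
bomb = np.array([30.0, 120.0, 40.0])
print(triangulate_rays(cam1, bomb - cam1, cam2, bomb - cam2))  # -> [30. 120. 40.]
```

Differencing such positions across frames 2 ms apart (at 500 fps) then yields the 3-D velocity field.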
United States Naval Academy Polar Science Program's Visual Arctic Observing Buoys; The IceGoat
NASA Astrophysics Data System (ADS)
Woods, J. E.; Clemente-Colon, P.; Nghiem, S. V.; Rigor, I.; Valentic, T. A.
2012-12-01
The U.S. Naval Academy Oceanography Department currently has a curriculum-based Polar Science Program (USNA PSP). Within the PSP there is an Arctic Buoy Program (ABP) student research component that includes the design, build, testing, and deployment of Arctic buoys. Establishing an active field-research program in polar science will greatly enhance Midshipman education and research, as well as introduce future Naval Officers to the Arctic environment. The Oceanography Department has engaged the USNA Ocean Engineering, Systems Engineering, Aerospace Engineering, and Computer Science Departments and developed a USNA Visual Arctic Observing Buoy, IceGoat1, which was designed, built, and deployed by midshipmen. The experience gained through polar field studies and the data derived from these buoys will be used to enhance course materials and laboratories and will also be used directly in Midshipman independent research projects. The USNA PSP successfully deployed IceGoat1 during the BROMEX 2012 field campaign out of Barrow, AK in March 2012. This buoy reports near real-time observations of air temperature, sea temperature, atmospheric pressure, position, and images from two mounted webcams. The importance of this unique type of buoy being inserted into the U.S. Interagency Arctic Buoy Program and the International Arctic Buoy Programme (USIABP/IABP) array is cross-validating satellite observations of sea ice cover in the Arctic with the buoys' webcams. We also propose to develop multiple sensor packages for the IceGoat, including a more robust weather suite and a passive acoustic hydrophone. Remote cameras on buoys have provided crucial qualitative information that complements the quantitative measurements of geophysical parameters. For example, the mechanical anemometers on the IABP Polar Arctic Weather Station at the North Pole Environmental Observatory (NPEO) have at times reported zero wind speeds, and inspection of the images from the NPEO cameras showed frosting on the camera during these same periods, indicating that the anemometer had temporarily frozen up. Later, when the camera lens cleared, the anemometers resumed providing reasonable wind speeds. The cameras have also provided confirmation of the onset of melt and freeze, and indications of cloudy and clear skies. USNA PSP will monitor meteorological and oceanographic parameters of the Arctic environment remotely via its own buoys. Web cameras will provide near real-time visual observations of the buoys' current positions, allowing for instant validation of other remote sensors and modeled data. Each buoy will be developed with, at a minimum, a meteorological sensor package in accordance with IABP protocol (2 m air temperature, SLP). Platforms will also be developed with new sensor packages possibly including wind speed, ice temperature, sea ice thickness, underwater acoustics, and new communications suites (Iridium, radio). The uniqueness of the IceGoat is that it is based on the new AXIB buoy designed by LBI, Inc., which has a proven record of surviving the harsh marginal ice zone environment. IceGoat1 will be deployed in the High Arctic during the USCGC HEALY cruise in late August 2012.
Cheetah: A high frame rate, high resolution SWIR image camera
NASA Astrophysics Data System (ADS)
Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob
2008-10-01
A high-resolution, high-frame-rate InGaAs-based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640 x 512 pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfer the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link™ interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
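The quoted frame rate and memory size imply a finite recording window, which is easy to estimate. A back-of-the-envelope check, assuming 2 bytes stored per pixel (the bit depth is not stated in the abstract):

```python
# Rough data-rate and buffer-depth estimate for the figures quoted above.
width, height, fps = 640, 512, 1700
bytes_per_pixel = 2                             # assumption; not from the abstract

rate = width * height * bytes_per_pixel * fps   # bytes per second
memory = 16 * 1024**3                           # 16 GB on-board buffer
print(f"data rate    ~ {rate / 1e9:.2f} GB/s")  # ~1.11 GB/s
print(f"buffer depth ~ {memory / rate:.0f} s")  # ~15 s of full-frame recording
```

This is why the on-board buffer matters: sustained full-rate output far exceeds what a Gigabit Ethernet link (about 0.125 GB/s) can carry.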
Light field reconstruction robust to signal dependent noise
NASA Astrophysics Data System (ADS)
Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai
2014-11-01
Capturing four-dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from a low signal-to-noise ratio. Although multiplexing can help raise the acquisition quality, noise is still a big issue, especially for fast acquisition. To address this problem, this paper proposes a noise-robust light field reconstruction method. First, a scene-dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We build a prototype by hacking an off-the-shelf camera for data capturing and prove the concept. The effectiveness of this method is validated with experiments on the real captured data.
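The paper's exact noise model and solver are not spelled out in the abstract; the sketch below shows the general shape of such an approach under an assumed affine (Poisson-Gaussian) signal-dependent variance, solved by iteratively reweighted, Tikhonov-regularized least squares. All symbols are illustrative.

```python
import numpy as np

def reconstruct_lf(A, y, alpha=0.01, sigma2=1.0, lam=1e-2, iters=10):
    """Toy multiplexed light-field reconstruction under signal-dependent noise.

    Assumed model: y = A x + n with Var(n_i) = alpha*(A x)_i + sigma2,
    where A is the coded-aperture multiplexing matrix, y the measurements,
    and x the stacked light-field samples.
    """
    x = np.linalg.lstsq(A, y, rcond=None)[0]              # noise-blind start
    for _ in range(iters):
        var = np.maximum(alpha * (A @ x) + sigma2, 1e-9)  # per-pixel variance
        Aw = A / var[:, None]                             # 1/variance row weights
        x = np.linalg.solve(A.T @ Aw + lam * np.eye(A.shape[1]), Aw.T @ y)
    return x

# Synthetic sanity check
rng = np.random.default_rng(0)
A = rng.uniform(0, 1, (120, 40))
x_true = rng.uniform(0, 10, 40)
mean = A @ x_true
y = mean + rng.normal(0, np.sqrt(0.01 * mean + 1.0))
print(np.linalg.norm(reconstruct_lf(A, y) - x_true) / np.linalg.norm(x_true))
```

Down-weighting bright (high-variance) measurements is what distinguishes this from the plain least-squares demultiplexing that is sensitive to the noise the abstract describes.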
A New Technique for Precision Photometry Using Alt/Az Telescopes
NASA Astrophysics Data System (ADS)
Kirkaptrick, Colin; Stacey, Piper; Swift, Jonathan
2018-06-01
We present and test a new method for flat field calibration of images obtained on telescopes with altitude-azimuth (Alt-Az) mounts. Telescopes using Alt-Az mounts typically employ a field “de-rotator” to account for changing parallactic angles of targets observed across the sky, or for long exposures of a single target. This “de-rotation” results in a changing orientation of the telescope optics with respect to the camera. This, in turn, can result in a flat field that is a function of camera orientation due to, for example, vignetting. In order to account for these changes we develop and test a new flat field technique using the observations of known transiting exoplanets.
NASA Astrophysics Data System (ADS)
Peltoniemi, Mikko; Aurela, Mika; Böttcher, Kristin; Kolari, Pasi; Loehr, John; Karhu, Jouni; Kubin, Eero; Linkosalmi, Maiju; Melih Tanis, Cemal; Nadir Arslan, Ali
2017-04-01
Ecosystems' potential to provide services, e.g., to sequester carbon, is largely driven by the phenological cycle of vegetation. The timing of phenological events is required for understanding and predicting the influence of climate change on ecosystems and for supporting various analyses of ecosystem functioning. We established a network of cameras for automated monitoring of the phenological activity of vegetation in boreal ecosystems of Finland. Cameras were mounted at 14 sites, each site having 1-3 cameras. In this study, we used cameras at 11 of these sites to investigate how well networked cameras detect the phenological development of birches (Betula spp.) along a latitudinal gradient. Birches are an interesting focal species for the analyses as they are common throughout Finland. In our cameras they often appear in small quantities among the dominant species in the images. Here, we tested whether small scattered birch image elements allow reliable extraction of color indices and changes therein. We compared automatically derived phenological dates from these birch image elements to visually determined dates from the same image time series, and to independent observations recorded in the phenological monitoring network from the same region. Automatically extracted season start dates based on the change of the green color fraction in the spring corresponded well with the visually interpreted start of season and field-observed budburst dates. During the declining season, the red color fraction turned out to be superior to green-based indices in predicting leaf yellowing and fall. The latitudinal gradients derived using automated phenological date extraction corresponded well with gradients based on phenological field observations from the same region. We conclude that even small and scattered birch image elements allow reliable extraction of key phenological dates for birch species. Devising cameras for species-specific analyses of phenological timing will be useful for explaining variation in time series of satellite-based indices, and it will also benefit models describing ecosystem functioning at the species or plant functional type level. With the contribution of the LIFE+ financial instrument of the European Union (LIFE12 ENV/FI/000409 Monimet, http://monimet.fmi.fi)
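The abstract does not give the exact index definitions, but in phenocam work the "green color fraction" is conventionally the green chromatic coordinate GCC = G/(R+G+B), with a red analogue for autumn. A minimal sketch of extracting it from the birch image elements (file paths and ROI coordinates are hypothetical):

```python
import numpy as np
from PIL import Image

def color_fraction(path, roi, channel=1):
    """Chromatic coordinate of one channel over a region of interest.

    channel=1 gives GCC = G/(R+G+B); channel=0 gives the red fraction.
    roi = (row0, row1, col0, col1) outlining the scattered birch elements,
    digitized by hand once per camera (an assumption of this sketch).
    """
    img = np.asarray(Image.open(path), dtype=float)
    r0, r1, c0, c1 = roi
    patch = img[r0:r1, c0:c1, :3]
    means = patch.reshape(-1, 3).mean(axis=0)
    return means[channel] / means.sum()

# Over an image time series, the spring rise of the green fraction marks
# start of season; the red fraction predicts yellowing and leaf fall:
# gcc = [color_fraction(f, (100, 180, 220, 300)) for f in image_files]
```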
NASA Astrophysics Data System (ADS)
Jarvis, S.; Hargrave, G. K.
2006-01-01
Experimental data obtained using a new multiple-camera digital particle image velocimetry (PIV) technique are presented for the interaction between a propagating flame and the turbulent recirculating velocity field generated during flame-solid obstacle interaction. The interaction between the gas movement and the obstacle creates turbulence by vortex shedding and local wake recirculations. The presence of turbulence in a flammable gas mixture can wrinkle a flame front, increasing the flame surface area and enhancing the burning rate. To investigate propagating flame/turbulence interaction, a novel multiple-camera digital PIV technique was used to provide high spatial and temporal characterization of the phenomenon for the turbulent flow field in the wake of three sequential obstacles. The technique allowed the quantification of the local flame speed and local flow velocity. Due to the accelerating nature of the explosion flow field, the wake flows develop 'transient' turbulent fields. Multiple-camera PIV provides data to define the spatial and temporal variation of both the velocity field ahead of the propagating flame and the flame front to aid the understanding of flame-vortex interaction. Experimentally obtained values for flame displacement speed and flame stretch are presented for increasing vortex complexity.
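At the heart of any digital PIV implementation, including multiple-camera variants, is cross-correlation of interrogation windows between consecutive frames; the displacement of the correlation peak gives the local velocity once divided by the inter-frame time. A minimal FFT-based sketch of that standard step (not the authors' specific code):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer pixel shift between two interrogation windows via FFT
    cross-correlation, the standard digital-PIV building block."""
    a, b = win_a - win_a.mean(), win_b - win_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map circularly wrapped peak indices to signed shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, a.shape))

# Synthetic check: a particle field shifted by (3, -2) pixels
rng = np.random.default_rng(1)
frame = rng.random((64, 64))
print(piv_displacement(frame, np.roll(frame, (3, -2), axis=(0, 1))))  # (3, -2)
```

Production PIV adds sub-pixel peak fitting and window deformation, but the correlation step above is the common core.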
Marias Pass, Contact Zone of Two Martian Rock Units
2015-12-17
This view from the Mast Camera (Mastcam) on NASA's Curiosity Mars rover shows the "Marias Pass" area, where a lower and older geological unit of mudstone -- the pale zone in the center of the image -- lies in contact with an overlying geological unit of sandstone. Just before Curiosity reached Marias Pass, the rover's laser-firing Chemistry and Camera (ChemCam) instrument examined a rock found to be rich in silica, a mineral-forming chemical. This scene combines several images taken on May 22, 2015, during the 992nd Martian day, or sol, of Curiosity's work on Mars. The scene is presented with a color adjustment that approximates white balancing, to resemble how the rocks and sand would appear under daytime lighting conditions on Earth. http://photojournal.jpl.nasa.gov/catalog/?IDNumber=pia20174
Non-uniform refractive index field measurement based on light field imaging technique
NASA Astrophysics Data System (ADS)
Du, Xiaokun; Zhang, Yumin; Zhou, Mengjie; Xu, Dong
2018-02-01
In this paper, a method for measuring a non-uniform refractive index field based on the light field imaging technique is proposed. First, a light field camera is used to collect four-dimensional light field data, and the light field data are then decoded according to the light field imaging principle to obtain image sequences with different acquisition angles of the refractive index field. Subsequently, the PIV (particle image velocimetry) technique is used to extract the ray offset of each image. Finally, the distribution of the non-uniform refractive index field can be calculated by inverting the deflection of the light rays. Compared with traditional optical methods, which require multiple optical detectors at multiple angles to synchronously collect data, the method proposed in this paper needs only a light field camera and a single shot. The effectiveness of the method has been verified by an experiment that quantitatively measures the distribution of the refractive index field above the flame of an alcohol lamp.
2002-07-10
KENNEDY SPACE CENTER, FLA. -- Scott Minnick, with United Space Alliance, places a fiber-optic camera inside the flow line on Endeavour. Minnick wears a special viewing apparatus that sees where the camera is going. The inspection is the result of small cracks being discovered on the LH2 Main Propulsion System (MPS) flow liners in other orbiters. Endeavour is next scheduled to fly on mission STS-113.
Plenoptic camera image simulation for reconstruction algorithm verification
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim
2014-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to lead to a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
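The backward-tracing strategy described above can be miniaturized as follows. This sketch assumes the hard part -- propagating each pixel's ray bundle back through lenslet and main lens -- is already done, and only shows the scene-integration step with the paper's piecewise-planar object approximation; all names are illustrative.

```python
import numpy as np

def integrate_pixel(ray_origins, ray_dirs, planes):
    """Average scene color over one pixel's backward-traced ray bundle.

    ray_origins, ray_dirs : (N, 3) arrays for the bundle leaving the
        entrance pupil (computing these needs the real lens prescription).
    planes : list of (z_depth, texture) with texture(x, y) -> RGB, the
        planar approximation of the 3-D scene. Nearest opaque plane wins;
        averaging over the bundle reproduces defocus blur.
    """
    colors = []
    for o, d in zip(ray_origins, ray_dirs):
        for z, tex in sorted(planes, key=lambda p: p[0]):
            t = (z - o[2]) / d[2]
            if t > 0:
                colors.append(tex(o[0] + t * d[0], o[1] + t * d[1]))
                break                     # nearer plane occludes the rest
    return np.mean(colors, axis=0)

# A red plane at z=100 mm hides a green one at z=200 mm
planes = [(100.0, lambda x, y: np.array([1.0, 0.0, 0.0])),
          (200.0, lambda x, y: np.array([0.0, 1.0, 0.0]))]
rng = np.random.default_rng(2)
origins = np.c_[rng.uniform(-1, 1, (16, 2)), np.zeros(16)]  # pupil at z=0
dirs = np.tile([0.0, 0.0, 1.0], (16, 1))
print(integrate_pixel(origins, dirs, planes))               # -> [1. 0. 0.]
```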
Advances in Heavy Ion Beam Probe Technology and Operation on MST
NASA Astrophysics Data System (ADS)
Demers, D. R.; Connor, K. A.; Schoch, P. M.; Radke, R. J.; Anderson, J. K.; Craig, D.; den Hartog, D. J.
2003-10-01
A technique to map the magnetic field of a plasma via spectral imaging is being developed with the Heavy Ion Beam Probe on the Madison Symmetric Torus. The technique will utilize two-dimensional images of the ion beam in the plasma, acquired by two CCD cameras, to generate a three-dimensional reconstruction of the beam trajectory. This trajectory, and the known beam ion mass, energy, and charge state, will be used to determine the magnetic field of the plasma. A suitable emission line has not yet been observed since radiation from the MST plasma is both broadband and intense. An effort to raise the emission intensity from the ion beam by increasing beam focus and current has been undertaken. Simulations of the accelerator ion optics and beam characteristics led to a technique, confirmed by experiment, that achieves a narrower beam and a marked increase in ion current near the plasma surface. The improvements arising from these simulations will be discussed. Realization of the magnetic field mapping technique is contingent upon accurate reconstruction of the beam trajectory from the camera images. Simulations of two-camera CCD images, including the interior of MST, its various landmarks, and beam trajectories, have been developed. These simulations accept user input such as camera locations, resolution via pixellization, and noise. The quality of the images simulated with these and other variables will help guide the selection of viewing port pairs, image size, and camera specifications. The results of these simulations will be presented.
Image-based dynamic deformation monitoring of civil engineering structures from long ranges
NASA Astrophysics Data System (ADS)
Ehrhart, Matthias; Lienhart, Werner
2015-02-01
In this paper, we report on the vibration and displacement monitoring of civil engineering structures using a state-of-the-art image assisted total station (IATS) and passive target markings. By utilizing the telescope camera of the total station, it is possible to capture video streams in real time at 10 fps with an angular resolution of approximately 2″/px. Due to the high angular resolution resulting from the 30x optical magnification of the telescope, large distances to the monitored object are possible. The laser distance measurement unit integrated in the total station allows the camera's focus position to be set precisely and relates the angular quantities gained from image processing to units of length. To accurately measure the vibrations and displacements of civil engineering structures, we use circular target markings rigidly attached to the object. The computation of the targets' centers is performed by a least-squares adjustment of an ellipse according to the Gauß-Helmert model, from which the parameters of the ellipse and their standard deviations are derived. In laboratory experiments, we show that movements can be detected with an accuracy of better than 0.2 mm for single frames and distances up to 30 m. For static applications, where many video frames can be averaged, accuracies of better than 0.05 mm are possible. In a field test on a full-scale footbridge, we compare the vibrations measured by the IATS to reference values derived from accelerometer measurements.
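The target-center computation lends itself to a compact illustration. The paper uses a rigorous Gauss-Helmert adjustment that also yields standard deviations; the sketch below substitutes the simpler direct algebraic conic fit to show the principle of recovering an ellipse center from edge pixels.

```python
import numpy as np

def ellipse_center(x, y):
    """Center of the least-squares conic ax^2+bxy+cy^2+dx+ey+f=0 through
    edge points (x, y); the coefficient vector is the right singular
    vector of the design matrix with smallest singular value."""
    D = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.svd(D, full_matrices=False)[2][-1]
    den = 4*a*c - b*b
    return (b*e - 2*c*d) / den, (b*d - 2*a*e) / den

# Noisy samples of an ellipse centered at (3, -2)
rng = np.random.default_rng(3)
t = np.linspace(0, 2*np.pi, 200)
x = 3 + 5*np.cos(t) + rng.normal(0, 0.02, t.size)
y = -2 + 2*np.sin(t) + rng.normal(0, 0.02, t.size)
print(ellipse_center(x, y))   # ~ (3.0, -2.0)
```

Locating the center to a small fraction of a pixel at 2″/px is what makes the sub-0.05 mm static accuracies quoted above plausible.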
Fusion of light-field and photogrammetric surface form data
NASA Astrophysics Data System (ADS)
Sims-Waterhouse, Danny; Piano, Samanta; Leach, Richard K.
2017-08-01
Photogrammetry based systems are able to produce 3D reconstructions of an object given a set of images taken from different orientations. In this paper, we implement a light-field camera within a photogrammetry system in order to capture additional depth information, as well as the photogrammetric point cloud. Compared to a traditional camera that only captures the intensity of the incident light, a light-field camera also provides angular information for each pixel. In principle, this additional information allows 2D images to be reconstructed at a given focal plane, and hence a depth map can be computed. Through the fusion of light-field and photogrammetric data, we show that it is possible to improve the measurement uncertainty of a millimetre scale 3D object, compared to that from the individual systems. By imaging a series of test artefacts from various positions, individual point clouds were produced from depth-map information and triangulation of corresponding features between images. Using both measurements, data fusion methods were implemented in order to provide a single point cloud with reduced measurement uncertainty.
NASA Astrophysics Data System (ADS)
Kishimoto, A.; Kataoka, J.; Nishiyama, T.; Fujita, T.; Takeuchi, K.; Okochi, H.; Ogata, H.; Kuroshima, H.; Ohsuka, S.; Nakamura, S.; Hirayanagi, M.; Adachi, S.; Uchiyama, T.; Suzuki, H.
2014-11-01
After the nuclear disaster in Fukushima, radiation decontamination has become particularly urgent. To help identify radiation hotspots and ensure effective decontamination operations, we have developed a novel Compton camera based on Ce-doped Gd3Al2Ga3O12 scintillators and multi-pixel photon counter (MPPC) arrays. Even though its sensitivity is several times better than that of other cameras being tested in Fukushima, we introduce a depth-of-interaction (DOI) method to further improve the angular resolution. For gamma rays, the DOI information, in addition to the 2-D position, is obtained by measuring the pulse-height ratio of the MPPC arrays coupled to both ends of the scintillator. We present the detailed performance and results of various field tests conducted in Fukushima with the prototype 2-D and DOI Compton cameras. Moreover, we demonstrate stereo measurement of gamma rays that enables measurement not only of direction but also of approximate distance to radioactive hotspots.
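The DOI extraction from the two-ended readout can be illustrated with a simple light-sharing model. The attenuation coefficient and crystal length below are placeholders, not values from the paper; in practice the relation is calibrated.

```python
import numpy as np

def interaction_depth(ph_top, ph_bottom, length_mm=10.0, mu=0.12):
    """Depth of interaction from the pulse-height ratio of the two MPPC ends.

    Illustrative model: light reaching each end falls off exponentially
    with distance to the interaction point (effective coefficient mu, /mm):
        ph_top/ph_bottom = exp(-mu*z) / exp(-mu*(L - z))
    =>  z = L/2 - ln(ph_top/ph_bottom) / (2*mu)
    """
    return length_mm / 2.0 - np.log(ph_top / ph_bottom) / (2.0 * mu)

print(interaction_depth(100.0, 100.0))  # equal pulse heights -> 5.0 mm (mid-depth)
print(interaction_depth(130.0, 80.0))   # brighter top end -> shallower depth
```

Resolving z this way effectively turns each scintillator bar into a stack of virtual voxels, which is what sharpens the Compton cone and hence the angular resolution.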
A poloidal section neutron camera for MAST upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sangaroon, S.; Weiszflog, M.; Cecconello, M.
2014-08-21
The Mega Ampere Spherical Tokamak Upgrade (MAST Upgrade) is intended as a demonstration of the physics viability of the Spherical Tokamak (ST) concept and as a platform for contributing to ITER/DEMO physics. Concerning physics exploitation, MAST Upgrade plasma scenarios can contribute to ITER Tokamak physics, particularly in the field of fast particle behavior and current drive studies. At present, MAST is equipped with a prototype neutron camera (NC). On the basis of the experience and results from previous experimental campaigns using the NC, the conceptual design of a neutron camera upgrade (NC Upgrade) is being developed. As part of the MAST Upgrade, the NC Upgrade is considered a high-priority diagnostic since it would allow studies in the field of fast ions and current drive with good temporal and spatial resolution. In this paper, we explore an optional design with the camera array viewing the poloidal section of the plasma from different directions.
Standoff aircraft IR characterization with ABB dual-band hyper spectral imager
NASA Astrophysics Data System (ADS)
Prel, Florent; Moreau, Louis; Lantagne, Stéphane; Bullis, Ritchie D.; Roy, Claude; Vallières, Christian; Levesque, Luc
2012-09-01
Remote sensing infrared characterization of rapidly evolving events generally involves the combination of a spectro-radiometer and infrared camera(s) as separate instruments. Time synchronization, spatial co-registration, consistent radiometric calibration, and managing several systems are important challenges to overcome; they complicate the target infrared characterization data processing and increase the sources of error affecting the final radiometric accuracy. MR-i is a dual-band hyperspectral imaging spectro-radiometer that combines two 256 x 256 pixel infrared cameras and an infrared spectro-radiometer into one single instrument. This field instrument generates spectral datacubes in the MWIR and LWIR. It is designed to acquire the spectral signatures of rapidly evolving events. The design is modular. The spectrometer has two output ports configured with two simultaneously operated cameras to either widen the spectral coverage or increase the dynamic range of the measured amplitudes. Various telescope options are available for the input port. Recent platform developments and field-trial measurement performance will be presented for a system configuration dedicated to the characterization of airborne targets.
Wilkes, Thomas C; McGonigle, Andrew J S; Pering, Tom D; Taggart, Angus J; White, Benjamin S; Bryant, Robert G; Willmott, Jon R
2016-10-06
Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements.
Pirie, Chris G; Pizzirani, Stefano
2011-12-01
To describe a digital single-lens reflex (dSLR) camera adaptor for posterior segment photography. A total of 30 normal canine and feline animals were imaged using a dSLR adaptor that mounts between a dSLR camera body and lens. Posterior segment viewing and imaging were performed with the aid of an indirect lens ranging from 28-90 D. Coaxial illumination for viewing was provided by a single white light-emitting diode (LED) within the adaptor, while illumination during exposure was provided by the pop-up flash or an accessory flash. Corneal and/or lens reflections were reduced using a pair of linear polarizers with their azimuths perpendicular to one another. Quality high-resolution, reflection-free digital images of the retina were obtained. Subjective image evaluation demonstrated the same amount of detail as a conventional fundus camera. A wide range of magnifications (1.2-4X) and/or fields of view (31-95 degrees, horizontal) was obtained by altering the indirect lens utilized. The described adaptor may provide an alternative to existing fundus camera systems. Quality images were obtained and the adaptor proved to be versatile, portable, and low in cost.
Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory
Vega, Julio; Perdices, Eduardo; Cañas, José M.
2013-01-01
Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. PMID:23337333
Web Camera Use of Mothers and Fathers When Viewing Their Hospitalized Neonate.
Rhoads, Sarah J; Green, Angela; Gauss, C Heath; Mitchell, Anita; Pate, Barbara
2015-12-01
Mothers and fathers of neonates hospitalized in a neonatal intensive care unit (NICU) differ in their experiences related to NICU visitation. To describe the frequency and length of maternal and paternal viewing of their hospitalized neonates via a Web camera. A total of 219 mothers and 101 fathers, including 40 mother-father dyads, used the Web camera that allows 24/7 NICU viewing from September 1, 2010, to December 31, 2012. We conducted a review of the Web camera's Web site log-on records in this nonexperimental, descriptive study. Mothers and fathers had a significant difference in the mean number of log-ons to the Web camera system (P = .0293). Fathers virtually visited the NICU less often than mothers, but there was no statistically significant difference between mothers and fathers in the mean total number of minutes viewing the neonate (P = .0834) or in the maximum number of minutes of viewing in one session (P = .6924). Patterns of visitation over time were not measured. Web camera technology could be a potential intervention to aid fathers in visiting their neonates. Both parents should be offered virtual visits using the Web camera and oriented regarding how to use it. These findings are important to consider when installing Web cameras in a NICU. Future research should continue to explore Web camera use in NICUs.
Broadband Achromatic Telecentric Lens
NASA Technical Reports Server (NTRS)
Mouroulis, Pantazis
2007-01-01
A new type of lens design features broadband achromatic performance as well as telecentricity, using a minimum number of spherical elements. With appropriate modifications, the lens design form can be tailored to cover the range of response of the focal-plane array, from Si (400-1,000 nm) to InGaAs (400-1,700 or 2,100 nm) or InSb/HgCdTe reaching to 2,500 nm. For reference, lenses typically are achromatized over the visible wavelength range of 480-650 nm. In remote sensing applications, there is a need for broadband achromatic telescopes, normally satisfied with mirror-based systems. However, mirror systems are not always feasible due to size or geometry restrictions. They also require expensive aspheric surfaces. Non-obscured mirror systems can be difficult to align and have a limited (essentially one-dimensional) field of view. Centrally obscured types have a two-dimensional but very limited field in addition to the obscuration. Telecentricity is a highly desirable property for matching typical spectrometer types, as well as for reducing the variation of the angle of incidence and cross-talk on the detector for simple camera types. This rotationally symmetric telescope with no obscuration and using spherical surfaces and selected glass types fills a need in the range of short focal lengths. It can be used as a compact front unit for a matched spectrometer, as an ultra-broadband camera objective lens, or as the optics of an integrated camera/spectrometer in which the wavelength information is obtained by the use of strip or linear variable filters on the focal plane array. This kind of camera and spectrometer system can find applications in remote sensing, as well as in-situ applications for geological mapping and characterization of minerals, ecological studies, and target detection and identification through spectral signatures. Commercially, the lens can be used in quality-control applications via spectral analysis. The lens design is based on the rear landscape lens with the aperture stop in front of all elements. This allows sufficient room for telecentricity in addition to making the stop easily accessible. The crucial design features are the use of a doublet with an ultra-low dispersion glass (fluorite or S-FPL53), and the use of a strong negative element, which enables a flat field and telecentricity in conjunction with the last (field lens) element. The field lens also can be designed to be in contact with the array, a feature that is desirable in some applications. The lens has a 20-degree field of view for a 50-mm focal length, and is corrected over the wavelength range of 450-2,300 nm. Transverse color, which is the most pernicious aberration for spectroscopic work, is controlled at the level of 1 μm or below at 0.7 field and 5 μm at full field. The maximum chief ray angle is less than 1.7 degrees, providing good telecentricity. An additional feature of this lens is that it is made exclusively with glasses that provide good transmission up to 2,300 nm and even some transmission to 2,500 nm; thus, the lens can be used in applications that cover the entire solar-reflected spectrum. Alternative realizations are possible that provide enhanced resolution and even less transverse color over a narrower wavelength range.
New generation of meteorology cameras
NASA Astrophysics Data System (ADS)
Janout, Petr; Blažek, Martin; Páta, Petr
2017-12-01
A new generation of the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) camera includes new features such as monitoring of rain and storm clouds during daytime observation. The development of this new generation of weather monitoring cameras responds to the demand for monitoring sudden weather changes. The new WILLIAM cameras are ready to process acquired image data immediately, issue warnings of sudden torrential rain, and send them to the user's cell phone and email. Actual weather conditions are determined from image data, and the results of image processing are complemented by data from sensors of temperature, humidity, and atmospheric pressure. In this paper, we present the architecture and image data processing algorithms of this monitoring camera, as well as a spatially variant model of the imaging system's aberrations based on Zernike polynomials.
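As a concrete illustration of the aberration model mentioned last, the sketch below evaluates Zernike terms on the unit pupil; a spatially variant model lets the coefficients themselves vary with field position. The coefficients used here are arbitrary -- the actual WILLIAM values come from calibration.

```python
import math
import numpy as np

def zernike(n, m, rho, theta):
    """Zernike polynomial Z_n^m on the unit disk (unnormalized)."""
    R = np.zeros_like(rho)
    for k in range((n - abs(m)) // 2 + 1):
        R += ((-1)**k * math.factorial(n - k)
              / (math.factorial(k)
                 * math.factorial((n + abs(m)) // 2 - k)
                 * math.factorial((n - abs(m)) // 2 - k))) * rho**(n - 2*k)
    return R * (np.cos(m * theta) if m >= 0 else np.sin(-m * theta))

# A wavefront built from defocus plus coma, as one pupil-plane sample
rho, theta = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 2*np.pi, 64))
wavefront = 0.5 * zernike(2, 0, rho, theta) + 0.1 * zernike(3, 1, rho, theta)
print(wavefront.shape)  # (64, 64) phase map; the PSF follows via Fourier optics
```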
Real-time tracking and fast retrieval of persons in multiple surveillance cameras of a shopping mall
NASA Astrophysics Data System (ADS)
Bouma, Henri; Baan, Jan; Landsmeer, Sander; Kruszynski, Chris; van Antwerpen, Gert; Dijk, Judith
2013-05-01
The capability to track individuals in CCTV cameras is important for surveillance applications at large areas such as train stations, airports, and shopping centers. However, it is laborious to track and trace people over multiple cameras. In this paper, we present a system for real-time tracking and fast interactive retrieval of persons in video streams from multiple static surveillance cameras. This system is demonstrated in a shopping mall, where the cameras are positioned without overlapping fields of view and have different lighting conditions. The results show that the system allows an operator to find the origin or destination of a person more efficiently. Misses are reduced by 37%, which is a significant improvement.
Magnetic field effect on spoke behaviour
NASA Astrophysics Data System (ADS)
Hnilica, Jaroslav; Slapanska, Marta; Klein, Peter; Vasina, Petr
2016-09-01
Investigations of the non-reactive high power impulse magnetron sputtering (HiPIMS) discharge using high-speed camera imaging, optical emission spectroscopy, and electrical probes have shown that the plasma is not homogeneously distributed over the target surface, but is concentrated in regions of higher local plasma density, called spokes, rotating above the erosion racetrack. The effect of the magnetic field on spoke behaviour was studied by high-speed camera imaging in a HiPIMS discharge using a 3 inch titanium target. The camera employed enabled us to record two successive images in the same pulse with a time delay of 3 μs between them, which allowed us to determine the number of spokes, the spoke rotation velocity, and the spoke rotation frequency. The experimental conditions covered the pressure range from 0.15 to 5 Pa, discharge currents up to 350 A, and magnetic fields of 37, 72 and 91 mT. Increasing the magnetic field changed the number of spokes observed at the same pressure and the same discharge current. Moreover, the investigation revealed different characteristic spoke shapes depending on the magnetic field strength - both diffusive and triangular shapes were observed for the same target material. The spoke rotation velocity was independent of the magnetic field strength. This research has been financially supported by the Czech Science Foundation within the project 15-00863S.
Cloud photogrammetry with dense stereo for fisheye cameras
NASA Astrophysics Data System (ADS)
Beekmans, Christoph; Schneider, Johannes; Läbe, Thomas; Lennefer, Martin; Stachniss, Cyrill; Simmer, Clemens
2016-11-01
We present a novel approach for dense 3-D cloud reconstruction above an area of 10 × 10 km2 using two hemispheric sky imagers with fisheye lenses in a stereo setup. We examine an epipolar rectification model designed for fisheye cameras, which allows the use of efficient out-of-the-box dense matching algorithms designed for classical pinhole-type cameras to search for correspondence information at every pixel. The resulting dense point cloud allows us to recover a detailed and more complete cloud morphology compared to previous approaches that employed sparse feature-based stereo or assumed geometric constraints on the cloud field. Our approach is very efficient and can be fully automated. From the obtained 3-D shapes, cloud dynamics, size, motion, type, and spacing can be derived and used, for example, for radiation closure under cloudy conditions. Fisheye lenses follow a different projection function than classical pinhole-type cameras and provide a large field of view with a single image. However, the computation of dense 3-D information is more complicated and standard implementations for dense 3-D stereo reconstruction cannot be easily applied. Together with an appropriate camera calibration, which includes internal camera geometry and the global position and orientation of the stereo camera pair, we use the correspondence information from the stereo matching for dense 3-D stereo reconstruction of clouds located around the cameras. We implement and evaluate the proposed approach using real-world data and present two case studies. In the first case, we validate the quality and accuracy of the method by comparing the stereo reconstruction of a stratocumulus layer with reflectivity observations measured by a cloud radar and the cloud-base height estimated from a lidar ceilometer. The second case analyzes a rapid cumulus evolution in the presence of strong wind shear.
Dither and drizzle strategies for Wide Field Camera 3
NASA Astrophysics Data System (ADS)
Mutchler, Max
2010-07-01
Hubble's 20th anniversary observation of Herbig-Haro object HH 901 in the Carina Nebula is used to illustrate observing strategies and corresponding data reduction methods for the new Wide Field Camera 3 (WFC3), which was installed during Servicing Mission 4 in May 2009. The key issues for obtaining optimal results with offline Multidrizzle processing of WFC3 data sets are presented. These pragmatic instructions in "cookbook" format are designed to help new WFC3 users quickly obtain good results with similar data sets.
Onboard data processing and compression for a four-sensor suite: the SERENA experiment.
NASA Astrophysics Data System (ADS)
Mura, A.; Orsini, S.; Di Lellis, A.; Lazzarotto, F.; Barabash, S.; Livi, S.; Torkar, K.; Milillo, A.; De Angelis, E.
2013-09-01
SERENA (Search for Exospheric Refilling and Emitted Natural Abundances) is an instrument package that will fly on board the BepiColombo Mercury Planetary Orbiter (MPO). The SERENA instrument includes four units: ELENA (Emitted Low Energy Neutral Atoms), a neutral particle analyzer/imager to detect ion sputtering and backscattering from Mercury's surface; STROFIO (Start from a Rotating FIeld mass spectrometer), a mass spectrometer to identify atomic masses released from the surface; and MIPA (Miniature Ion Precipitation Analyzer) and PICAM (Planetary Ion Camera), two ion spectrometers to monitor the precipitating solar wind and measure the plasma environment around Mercury. The System Control Unit architecture is such that all four sensors are connected to a high-resolution FPGA, which communicates with a dedicated high-performance data processing unit. The unpredictability of the data rate, due to the peculiarities of these investigations, leads to several possible scenarios for data compression and handling. In this study, we first discuss the predicted data volume resulting from the optimized operation strategy, and then report on the instrument data processing and compression.
Space imaging infrared optical guidance for autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Akiyama, Akira; Kobayashi, Nobuaki; Mutoh, Eiichiro; Kumagai, Hideo; Yamada, Hirofumi; Ishii, Hiromitsu
2008-08-01
We have developed a space imaging infrared optical guidance system for an autonomous ground vehicle based on an uncooled infrared camera and a focusing technique that detects objects to be avoided and sets the drive path. For this purpose, we built a servomotor drive system to control the focus function of the infrared camera lens. To determine the best focus position, we use autofocus image processing based on the Daubechies wavelet transform with 4 terms. The determined best focus position is then converted to the distance of the object. We built an aluminum-frame ground vehicle, 900 mm long and 800 mm wide, to carry the autofocus infrared unit. The vehicle mounts an Ackermann front steering system and a rear motor drive system. To confirm the guidance ability of the system, we conducted experiments on the ability of the infrared autofocus unit to detect an actual car on the road and the roadside wall. As a result, the autofocus image processing based on the Daubechies wavelet transform detects the best-focus image clearly and gives the depth of the object from the infrared camera unit.
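The focus metric itself is easy to sketch. The 4-coefficient Daubechies filter the authors name is 'db2' in PyWavelets naming; since the exact metric is not specified in the abstract, detail-coefficient energy is used here as a plausible stand-in, and grab_frame is a hypothetical capture function.

```python
import numpy as np
import pywt  # PyWavelets

def focus_metric(gray):
    """Sharpness score: energy of the detail sub-bands of a 2-D Daubechies
    transform. In-focus images have more high-frequency content."""
    _, (ch, cv, cd) = pywt.dwt2(np.asarray(gray, dtype=float), "db2")
    return float((ch**2 + cv**2 + cd**2).sum())

# Sweep the servo through lens focus positions; the metric peaks at best
# focus, and that servo position is then mapped to object distance.
# best = max(positions, key=lambda p: focus_metric(grab_frame(p)))
```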
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Arthur; van Beuzekom, Martin; Bouwens, Bram; ...
2017-11-07
Here, we demonstrate a coincidence velocity map imaging apparatus equipped with a novel time-stamping fast optical camera, Tpx3Cam, whose high sensitivity and nanosecond timing resolution allow for simultaneous position and time-of-flight detection. This single detector design is simple, flexible, and capable of highly differential measurements. We show detailed characterization of the camera and its application in strong field ionization experiments.
High-performance camera module for fast quality inspection in industrial printing applications
NASA Astrophysics Data System (ADS)
Fürtler, Johannes; Bodenstorfer, Ernst; Mayer, Konrad J.; Brodersen, Jörg; Heiss, Dorothea; Penz, Harald; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert
2007-02-01
Today, printed products that must meet the highest quality standards, e.g., banknotes, stamps, or vouchers, are automatically checked by optical inspection systems. Typically, the examination of fine details of the print or security features demands images taken from various perspectives, with different spectral sensitivities (visible, infrared, ultraviolet), and with high resolution. Consequently, the inspection system is equipped with several cameras and has to cope with an enormous data rate to be processed in real time. Hence, it is desirable to move image processing tasks into the camera to reduce the amount of data that has to be transferred to the (central) image processing system. The idea is to transfer relevant information only, i.e., features of the image instead of the raw image data from the sensor. These features are then further processed. In this paper, a color line-scan camera for line rates up to 100 kHz is presented. The camera is based on a commercial CMOS (complementary metal oxide semiconductor) area image sensor and a field programmable gate array (FPGA). It implements extraction of image features that are well suited to detecting print flaws like blotches of ink, color smears, splashes, spots, and scratches. The camera design and several image processing methods implemented on the FPGA are described, including flat field correction, compensation of geometric distortions, color transformation, as well as decimation and neighborhood operations.
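Of the on-camera steps listed, flat field correction is the most self-contained to illustrate. The textbook per-pixel formulation is below; the paper's FPGA version would be a fixed-point pipeline whose details the abstract does not give.

```python
import numpy as np

def flat_field_correct(raw, dark, flat, out_max=255.0):
    """Classic flat-field correction for a line-scan frame.

    raw  : acquired frame
    dark : response with illumination blocked (fixed-pattern offset)
    flat : response to a uniform white reference
    Pixels are rescaled so a uniform scene yields a uniform output.
    """
    gain = (flat - dark).mean() / np.maximum(flat - dark, 1e-6)
    return np.clip((raw - dark) * gain, 0.0, out_max)

# At a 100 kHz line rate the gain and offset arrays are precomputed once,
# so the per-line cost is one subtract and one multiply per pixel.
```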
Efficient large-scale graph data optimization for intelligent video surveillance
NASA Astrophysics Data System (ADS)
Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming
2017-08-01
Society is rapidly adopting a wide variety of cameras in many locations and applications: site traffic monitoring, parking lot surveillance, automotive systems, and smart spaces. These cameras provide data every day that must be analyzed in an effective way. Recent advances in sensor manufacturing, communications, and computing are stimulating the development of new applications that transform the traditional vision system into a pervasive smart camera network. The analysis of visual cues in multi-camera networks enables a wide range of applications, from smart home and office automation to large-area surveillance and traffic monitoring. Dense camera networks, in which most cameras have large overlapping fields of view, are well researched; here we focus on sparse camera networks. A sparse camera network covers a large area with as few cameras as possible, with most cameras not overlapping each other's field of view. This task is challenging due to the lack of knowledge of the network topology, the changes in target appearance and motion across different views, and the difficulty of understanding complex events in the network. In this paper, we present a comprehensive survey of recent research results addressing topology learning, object appearance modeling, and global activity understanding in sparse camera networks. In addition, some current open research issues are discussed.
Mars Science Laboratory Engineering Cameras
NASA Technical Reports Server (NTRS)
Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.
2012-01-01
NASA's Mars Science Laboratory (MSL) rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near-IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.
Mars Exploration Rover Navigation Camera in-flight calibration
NASA Astrophysics Data System (ADS)
Soderblom, Jason M.; Bell, James F.; Johnson, Jeffrey R.; Joseph, Jonathan; Wolff, Michael J.
2008-06-01
The Navigation Camera (Navcam) instruments on the Mars Exploration Rover (MER) spacecraft provide support for both tactical operations as well as scientific observations where color information is not necessary: large-scale morphology, atmospheric monitoring including cloud observations and dust devil movies, and context imaging for both the thermal emission spectrometer and the in situ instruments on the Instrument Deployment Device. The Navcams are a panchromatic stereoscopic imaging system built using identical charge-coupled device (CCD) detectors and nearly identical electronics boards as the other cameras on the MER spacecraft. Previous calibration efforts were primarily focused on providing a detailed geometric calibration in line with the principal function of the Navcams, to provide data for the MER navigation team. This paper provides a detailed description of a new Navcam calibration pipeline developed to provide an absolute radiometric calibration that we estimate to have an absolute accuracy of 10% and a relative precision of 2.5%. Our calibration pipeline includes steps to model and remove the bias offset, the dark current charge that accumulates in both the active and readout regions of the CCD, and the shutter smear. It also corrects pixel-to-pixel responsivity variations using flat-field images, and converts from raw instrument-corrected digital number values per second to units of radiance (W m^-2 nm^-1 sr^-1), or to radiance factor (I/F). We also describe here the initial results of two applications where radiance-calibrated Navcam data provide unique information for surface photometric and atmospheric aerosol studies.
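The pipeline steps enumerated above compose in a fixed order, which a short sketch makes explicit. The responsivity constant and solar irradiance below are placeholders, not MER calibration values, and the shutter-smear step is omitted for brevity.

```python
import numpy as np

def dn_to_radiance(raw_dn, bias, dark, flat, t_exp_s, resp):
    """Bias/dark removal, flat-fielding, exposure normalization, then
    scaling by a responsivity constant resp (radiance per DN/s)."""
    dn = (raw_dn - bias - dark) / np.maximum(flat, 1e-6)
    return resp * dn / t_exp_s        # W m^-2 nm^-1 sr^-1

def radiance_to_iof(L, solar_irradiance_1au, d_au):
    """Radiance factor I/F = pi * L * d^2 / F_sun, with d the Sun-Mars
    distance in AU and F_sun the band solar irradiance at 1 AU."""
    return np.pi * L * d_au**2 / solar_irradiance_1au
```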
ERIC Educational Resources Information Center
Reynolds, Ronald F.
1984-01-01
Describes the basic components of a space telescope that will be launched during a 1986 space shuttle mission. These components include a wide field/planetary camera, faint object spectroscope, high-resolution spectrograph, high-speed photometer, faint object camera, and fine guidance sensors. Data to be collected from these instruments are…
4. INTERIOR VIEW OF CLUB HOUSE REFRIGERATION UNIT, SHOWING COOLING COILS AND CORK-LINED ROOM. CAMERA IS BETWEEN SEVEN AND EIGHT FEET ABOVE FLOOR LEVEL, FACING SOUTHEAST. - Swan Falls Village, Clubhouse 011, Snake River, Kuna, Ada County, ID
The development of large-aperture test system of infrared camera and visible CCD camera
NASA Astrophysics Data System (ADS)
Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying
2015-10-01
Infrared camera and CCD camera dual-band imaging systems are widely used in much equipment and many applications. If such a system is tested using a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber, and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position with changing environmental temperature, and the image quality of the wide-field collimator and the test accuracy are thereby improved. Its performance matches that of foreign counterparts at much lower cost. It will have a good market.
Automated Astrometric Analysis of Satellite Observations using Wide-field Imaging
NASA Astrophysics Data System (ADS)
Skuljan, J.; Kay, J.
2016-09-01
An observational trial was conducted in the South Island of New Zealand from 24 to 28 February 2015, as a collaborative effort between the United Kingdom and New Zealand in the area of space situational awareness. The aim of the trial was to observe a number of satellites in low Earth orbit using wide-field imaging from two separate locations, in order to determine the space trajectory and compare the measurements with predictions based on the standard two-line elements. This activity was an initial step in building a space situational awareness capability at the Defence Technology Agency of the New Zealand Defence Force. New Zealand has an important strategic position as the last land mass that many satellites selected for deorbiting pass over before entering the Earth's atmosphere above the dedicated disposal area in the South Pacific. A preliminary analysis of the trial data has demonstrated that relatively inexpensive equipment can be used to successfully detect satellites at moderate altitudes. A total of 60 satellite passes were observed over the five nights of observation and about 2600 images were collected. A combination of cooled CCD and standard DSLR cameras was used, with a selection of lenses between 17 mm and 50 mm in focal length, covering a relatively wide field of view of 25 to 60 degrees. The CCD cameras were equipped with custom-made GPS modules to record the time of exposure with a high accuracy of one millisecond or better. Specialised software has been developed for automated astrometric analysis of the trial data. The astrometric solution is obtained as a two-dimensional least-squares polynomial fit to the measured pixel positions of a large number of stars (typically 1000) detected across the image. The star identification is fully automated and works well for all camera-lens combinations used in the trial. A moderate polynomial degree of 3 to 5 is selected to take into account any image distortions introduced by the lens. A typical RMS error of the least-squares fit is about 0.1 pixels, which corresponds to about 4 to 10 seconds of arc in the sky, depending on the pixel scale (field of view). This gives a typical uncertainty between 10 and 25 metres in measuring the position of a satellite at a characteristic range of 500 kilometres. The results of this trial have confirmed that wide-field measurements based on standard photographic equipment and using automated astrometric analysis techniques can be used to improve the current orbital models of satellites in low Earth orbit.
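The plate-solution step described above -- a degree 3-5 bivariate polynomial fit from pixel to sky coordinates -- can be reproduced in a few lines. This is a generic sketch of that standard technique, not the trial's actual software:

```python
import numpy as np

def fit_plate_solution(px, py, xi, eta, degree=3):
    """Least-squares 2-D polynomial mapping measured star pixel positions
    (px, py) to tangent-plane coordinates (xi, eta); returns a predictor.
    Basis: all monomials x^i y^j with i + j <= degree."""
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack([px**i * py**j for i, j in terms])
    cx = np.linalg.lstsq(A, xi, rcond=None)[0]
    cy = np.linalg.lstsq(A, eta, rcond=None)[0]

    def predict(qx, qy):
        B = np.column_stack([np.atleast_1d(qx)**i * np.atleast_1d(qy)**j
                             for i, j in terms])
        return B @ cx, B @ cy

    return predict
```

With roughly 1000 matched stars per frame the system is heavily overdetermined, which is how the 0.1 pixel RMS residuals quoted above are achievable despite wide-field lens distortion.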
VizieR Online Data Catalog: Observation of six NSVS eclipsing binaries (Dimitrov+, 2015)
NASA Astrophysics Data System (ADS)
Dimitrov, D. P.; Kjurkchieva, D. P.
2017-11-01
We managed to separate a sample of about 40 ultrashort-period candidates from the Northern Sky Variability Survey (NSVS, Wozniak et al. 2004AJ....127.2436W) appropriate for follow-up observations at Rozhen observatory (δ>-10°). Follow-up CCD photometry of the targets in the VRI bands was carried out with the three telescopes of the Rozhen National Astronomical Observatory. The 2-m RCC telescope is equipped with a VersArray CCD camera (1340x1300 pixels, 20 μm/pixel, field of 5.35x5.25 arcmin2). The 60-cm Cassegrain telescope is equipped with a FLI PL09000 CCD camera (3056x3056 pixels, 12 μm/pixel, field of 17.1x17.1 arcmin2). The 50/70 cm Schmidt telescope has a field of view (FoV) of around 1° and is equipped with a FLI PL 16803 CCD camera, 4096x4096 pixels, 9 μm/pixel size. (4 data files).
Key, Douglas J
2014-07-01
This study incorporates concurrent thermal camera imaging as a means both of safely extending the length of each treatment session within skin surface temperature tolerances and of demonstrating not only the homogeneous nature of skin surface heating but also the distribution of that heating pattern as a reflection of the localization of subcutaneous fat. Five subjects were selected because of a desire to reduce abdomen and flank fullness. Full treatment field thermal camera imaging was captured at 15-minute intervals, specifically at 15, 30, and 45 minutes into active treatment, with the purpose of monitoring skin temperature and avoiding any patterns of skin temperature excess. Peak areas of heating corresponded anatomically to the patients' areas of greatest fat excess, i.e., visible "pinchable" fat. Preliminary observations of high-resolution thermal camera imaging used concurrently with focused field RF therapy show peak skin heating patterns overlying the areas of greatest fat excess.
Background and imaging simulations for the hard X-ray camera of the MIRAX mission
NASA Astrophysics Data System (ADS)
Castro, M.; Braga, J.; Penacchioni, A.; D'Amico, F.; Sacahui, R.
2016-07-01
We report the results of detailed Monte Carlo simulations of the performance expected, both at balloon altitudes and at the probable satellite orbit, of a hard X-ray coded-aperture camera being developed for the Monitor e Imageador de RAios X (MIRAX) mission. Based on a thorough mass model of the instrument and detailed specifications of the spectra and angular dependence of the various relevant radiation fields in both the stratospheric and orbital environments, we have used the well-known GEANT4 package to simulate the instrumental background of the camera. We also show simulated images of source fields to be observed and calculate the detailed sensitivity of the instrument in both situations. The results reported here should be especially useful to researchers in this field, since we provide information, not easily found in the literature, on how to prepare input files and calculate crucial instrumental parameters when performing GEANT4 simulations for high-energy astrophysics space experiments.
Wide field NEO survey 1.0-m telescope with 10 2k×4k mosaic CCD camera
NASA Astrophysics Data System (ADS)
Isobe, Syuzo; Asami, Atsuo; Asher, David J.; Hashimoto, Toshiyasu; Nakano, Shi-ichi; Nishiyama, Kota; Ohshima, Yoshiaki; Terazono, Junya; Umehara, Hiroaki; Yoshikawa, Makoto
2002-12-01
We developed a new 1.0-m telescope with a 3-degree flat focal plane to which a mosaic CCD camera with ten 2k×4k chips is fixed. The system was set up in February 2002 and is now undergoing final fine adjustments. Since the telescope has a focal length of 3 m, a field of 7.5 square degrees is covered in one image. In good seeing conditions (1.5 arcseconds) at the site in Bisei town, Okayama prefecture, Japan, we can expect to detect stars down to 20th magnitude with an exposure time of 60 seconds. Allowing for the 46-second read-out time of the CCD camera, one image is taken every two minutes, and about 2,100 square degrees of sky can be covered in one clear night. This system is very effective for survey work, especially for Near-Earth-Asteroid detection.
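A quick back-of-envelope check of the cadence and nightly coverage quoted above; the usable dark time of roughly eight hours is an assumption, while the other figures come from the abstract.

    exposure_s, readout_s = 60.0, 46.0   # from the abstract
    field_sq_deg = 7.5                   # sky area per image
    night_s = 8 * 3600                   # assumed usable dark time per clear night
    cycle_s = exposure_s + readout_s     # ~106 s, i.e. roughly two minutes per image
    images = int(night_s // cycle_s)     # ~271 images
    print(images * field_sq_deg)         # ~2030 square degrees, matching the quoted ~2,100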
NectarCAM, a camera for the medium sized telescopes of the Cherenkov telescope array
NASA Astrophysics Data System (ADS)
Glicenstein, J.-F.; Shayduk, M.
2017-01-01
NectarCAM is a camera proposed for the medium-sized telescopes of the Cherenkov Telescope Array (CTA), which cover the core energy range of 100 GeV to 30 TeV. It has a modular design and is based on the NECTAr chip, at the heart of which are a GHz-sampling switched capacitor array and a 12-bit analog-to-digital converter. The camera will be equipped with 265 seven-photomultiplier modules, covering a field of view of 8 degrees. Each module includes photomultiplier bases, a high-voltage supply, a pre-amplifier, trigger, readout, and an Ethernet transceiver. The recorded events last between a few nanoseconds and tens of nanoseconds. The expected performance of the camera is discussed. Prototypes of NectarCAM components have been built to validate the design. Preliminary results from a 19-module mini-camera are presented, as well as future plans for building and testing a full-size camera.
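A rough consistency check on the camera geometry given above, assuming the photomultiplier pixels tile the field approximately uniformly (a simplification; the real focal plane packing is hexagonal).

    import math

    modules, pmts_per_module = 265, 7
    pixels = modules * pmts_per_module        # 1855 photomultiplier pixels
    fov_deg = 8.0
    pitch_deg = fov_deg / math.sqrt(pixels)   # ~0.19 degrees between pixel centres
    print(pixels, round(pitch_deg, 2))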
Evaluation of modified portable digital camera for screening of diabetic retinopathy.
Chalam, Kakarla V; Brar, Vikram S; Keshavamurthy, Ravi
2009-01-01
To describe a portable wide-field noncontact digital camera for posterior segment photography. The digital camera has a compound lens consisting of two optical elements (a 90-dpt and a 20-dpt lens) attached to a 7.2-megapixel camera. White light-emitting diodes are used to illuminate the fundus and reduce source reflection. The camera is set to candlelight mode, the optical zoom is standardized to ×2.4, and the focus is manually set to 3.0 m. The new technique provides quality wide-angle digital images of the retina (60 degrees) in patients with dilated pupils, at a fraction of the cost of established digital fundus photography. The modified digital camera is a useful alternative technique for acquiring fundus images and provides a tool for screening posterior segment conditions, including diabetic retinopathy, in a variety of clinical settings.
Evaluation of the MSFC facsimile camera system as a tool for extraterrestrial geologic exploration
NASA Technical Reports Server (NTRS)
Wolfe, E. W.; Alderman, J. D.
1971-01-01
The utility of the Marshall Space Flight Center (MSFC) facsimile camera system for extraterrestrial geologic exploration was investigated during the spring of 1971 near Merriam Crater in northern Arizona. Although the system with its present hard-wired recorder operates erratically, the imagery showed that the camera could be developed into a prime imaging tool for automated missions. Its utility would be enhanced by the development of computer techniques that use digital camera output to construct topographic maps, and it needs increased resolution for examining near-field details. A supplementary imaging system may be necessary for hand-specimen examination at low magnification.
Extended spectrum SWIR camera with user-accessible Dewar
NASA Astrophysics Data System (ADS)
Benapfl, Brendan; Miller, John Lester; Vemuri, Hari; Grein, Christoph; Sivananthan, Siva
2017-02-01
Episensors has developed a series of extended short-wavelength infrared (eSWIR) cameras based on high-Cd-concentration Hg1-xCdxTe absorbers. The cameras have a bandpass extending to a 3-micron cutoff wavelength, opening new applications relative to traditional InGaAs-based cameras. Applications and uses are discussed and examples given. A liquid-nitrogen pour-filled version was developed first, followed by a compact Stirling-cooled version with detectors operating at 200 K. Each camera has unique sensitivity and performance characteristics. The cameras' size, weight, and power specifications are presented, along with images captured with bandpass filters and eSWIR sources to demonstrate spectral response beyond 1.7 microns. The soft-seal Dewars of the cameras are designed for accessibility and can be opened and modified in a standard laboratory environment. This modular approach allows user flexibility for swapping internal components such as cold filters and cold stops. The core electronics of the Stirling-cooled camera are based on a single commercial field-programmable gate array (FPGA) that also performs on-board non-uniformity corrections and bad-pixel replacement, and directly drives any standard HDMI display.
United States Homeland Security and National Biometric Identification
2002-04-09
security number. Biometrics is the use of unique individual traits such as fingerprints, iris eye patterns, voice recognition, and facial recognition to...technology to control access onto their military bases using a Defense Manpower Management Command developed software application. FACIAL Facial recognition systems...installed facial recognition systems in conjunction with a series of 200 cameras to fight street crime and identify terrorists. The cameras, which are
NASA Astrophysics Data System (ADS)
Benni, P.
2017-06-01
(Abstract only) GPX is designed to search high-density star fields that other surveys, such as WASP, HATNet, XO, and KELT, would find challenging due to blending of transit-like events. Using readily available amateur equipment, a survey telescope (Celestron RASA, 279 mm f/2.2, based in Acton, Massachusetts) was configured first with an SBIG ST-8300M camera and later upgraded to an FLI ML16200 camera, and was tested under different sampling scenarios with multiple image fields to obtain a 9- to 11-minute cadence per field. The resultant image resolution of GPX is about 2 arcsec/pixel, compared to 13.7 to 23 arcsec/pixel for the aforementioned surveys and the future TESS space telescope exoplanet survey.
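The quoted pixel scale can be checked from the stated optics. The focal length follows from the 279 mm aperture at f/2.2; the 5.4 µm pixel pitch of the ST-8300M sensor is an assumption taken from public specifications.

    focal_mm = 279 * 2.2                      # ~614 mm focal length
    pixel_um = 5.4                            # assumed ST-8300M pixel pitch
    scale = 206.265 * pixel_um / focal_mm     # plate scale in arcsec per pixel
    print(round(scale, 2))                    # ~1.8, consistent with "about 2 arcsec/pixel"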
50 CFR 216.155 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2010 CFR
2010-10-01
... place 3 autonomous digital video cameras overlooking chosen haul-out sites located varying distances from the missile launch site. Each video camera will be set to record a focal subgroup within the... presence and activity will be conducted and recorded in a field logbook or recorded on digital video for...
1996-01-01
used to locate and characterize a magnetic dipole source, and this finding accelerated the development of superconducting tensor gradiometers for... superconducting magnetic field gradiometer, two-color infrared camera, synthetic aperture radar, and a visible spectrum camera. The combination of these...Pieter Hoekstra, Blackhawk GeoSciences... Prediction for UXO Shape and Orientation Effects on Magnetic
Digital Video Cameras for Brainstorming and Outlining: The Process and Potential
ERIC Educational Resources Information Center
Unger, John A.; Scullion, Vicki A.
2013-01-01
This "Voices from the Field" paper presents methods and participant-exemplar data for integrating digital video cameras into the writing process across postsecondary literacy contexts. The methods and participant data are part of an ongoing action-based research project systematically designed to bring research and theory into practice…
Communities, Cameras, and Conservation
ERIC Educational Resources Information Center
Patterson, Barbara
2012-01-01
Communities, Cameras, and Conservation (CCC) is the most exciting and valuable program the author has seen in her 30 years of teaching field science courses. In this citizen science project, students and community volunteers collect data on mountain lions ("Puma concolor") at four natural areas and public parks along the Front Range of Colorado.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goggin, L; Kilby, W; Noll, M
2015-06-15
Purpose: A technique using a scintillator-mirror-camera system to measure MLC leakage was developed to provide an efficient alternative to film dosimetry while maintaining high spatial resolution. This work describes the technique together with measurement uncertainties. Methods: Leakage measurements were made for the InCise™ MLC using the Logos XRV-2020A device. For each measurement approximately 170 leakage and background images were acquired using optimized camera settings. The average background was subtracted from each leakage frame before filtering the integrated leakage image to replace anomalous pixels. Pixel-value-to-dose conversion was performed using a calibration image. Mean leakage was calculated within an ROI corresponding to the primary beam, and maximum leakage was determined by binning the image into overlapping 1 mm x 1 mm ROIs. 48 measurements were performed using 3 cameras and multiple MLC-linac combinations in varying beam orientations, with each compared to film dosimetry. Optical and environmental influences were also investigated. Results: Measurement time with the XRV-2020A was 8 minutes vs. 50 minutes using radiochromic film, and results were available immediately. Camera radiation exposure degraded measurement accuracy. With a relatively undamaged camera, mean leakage agreed with film measurement to ≤0.02% in 92% of cases and ≤0.03% in 100% of cases (for maximum leakage the values were 88% and 96%), relative to the reference open-field dose. The estimated camera lifetime over which this agreement is maintained is at least 150 measurements, and it can be monitored using reference field exposures. A dependency on camera temperature was identified, and a reduction in sensitivity with distance from the image center due to optical distortion was characterized. Conclusion: With periodic monitoring of the degree of camera radiation damage, the XRV-2020A system can be used to measure MLC leakage. This represents a significant time saving compared to the traditional film-based approach without any substantial reduction in accuracy.
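A minimal numpy/scipy sketch of the processing chain the abstract describes: background subtraction, anomalous-pixel filtering, dose calibration, mean leakage within the primary-beam ROI, and maximum leakage over overlapping 1 mm bins. All names are illustrative; this is not the vendor's code.

    import numpy as np
    from scipy.ndimage import median_filter, uniform_filter

    def leakage_stats(frames, background, cal_factor, roi_mask, bin_px):
        img = np.sum(frames - background, axis=0)   # integrated leakage image
        img = median_filter(img, size=3)            # replace anomalous pixels
        dose = img * cal_factor                     # pixel value -> dose
        mean_leak = dose[roi_mask].mean()           # mean over the primary-beam ROI
        # A uniform filter returns the mean of every bin_px x bin_px
        # neighbourhood; its maximum is the peak leakage over overlapping bins.
        max_leak = uniform_filter(dose, size=bin_px).max()
        return mean_leak, max_leak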
SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darne, C; Robertson, D; Alsanea, F
2016-06-15
Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately in near real-time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm³) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect the optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras capturing the top and right views. Fixed-focal-length objective lenses for these cameras were selected for their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for image corrections arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 ms integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. The master-slave camera configuration enables the slaves to initiate image acquisition instantly (within 2 µs) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.
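A naive data-rate check on the spooling figures quoted above, assuming uncompressed 16-bit frames at the truncated 1100 × 1100 field of view.

    frame_bytes = 1100 * 1100 * 2        # ~2.4 MB per 16-bit frame
    rate = frame_bytes * 75              # ~180 MB/s per camera at 75 fps
    ram = 128 * 1024**3                  # spooling memory
    print(ram / (3 * rate))              # ~250 s ceiling for three cameras

The stated 2 min per camera sits comfortably under this naive ceiling, leaving headroom for the operating system and acquisition buffers.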
An overview of instrumentation for the Large Binocular Telescope
NASA Astrophysics Data System (ADS)
Wagner, R. Mark
2010-07-01
An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus, incorporating multiple slit masks for multi-object spectroscopy over a 6' field at spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks, as well as diffraction-limited (FOV: 0.5' × 0.5') imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra-high-resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal-investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support. Over the past two years the LBC and the first LUCIFER instrument have been brought into routine scientific operation, and MODS1 commissioning is set to begin in the fall of 2010.
NASA Astrophysics Data System (ADS)
Lynam, Jeff R.
2001-09-01
A more highly integrated electro-optical sensor suite using Laser Illuminated Viewing and Ranging (LIVAR) techniques is being developed under the Army Advanced Concept Technology-II (ACT-II) program for enhanced man-portable target surveillance and identification. The ManPortable LIVAR system currently in development employs a wide array of sensor technologies that provide the foot-bound soldier and UGV significant advantages and capabilities in lightweight, fieldable target location, ranging, and imaging. The unit incorporates a wide field-of-view (5° × 3°) uncooled LWIR passive sensor for primary target location. Laser range finding and active illumination are performed with a triggered, flash-lamp-pumped, eyesafe micro-laser operating in the 1.5-micron region, used in conjunction with a range-gated, electron-bombarded CCD digital camera to image the target objective in a narrower, 0.3°, field of view. Target range is acquired using the integrated LRF, and a target position is calculated using data from other onboard devices providing GPS coordinates, tilt, bank, and corrected magnetic azimuth. Range-gate timing and coordinated receiver-optics focus control allow target imaging operations to be optimized. The onboard control electronics provide power-efficient system operation for extended field use from the internal rechargeable battery packs. Image data storage, transmission, and processing capabilities are also being incorporated to provide the best all-around support for the electronic battlefield in this type of system. The paper will describe flash laser illumination technology, EBCCD camera technology with a flash laser detection system, and image resolution improvement through frame averaging.
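The range-gating principle at the heart of LIVAR-style imaging reduces to a timing calculation: the camera gate opens one round-trip light time after the laser flash, so only photons returning from the selected range are integrated. A minimal illustration follows; it is not the system's actual firmware.

    C = 299_792_458.0                    # speed of light, m/s

    def gate_delay_us(target_range_m):
        """Round-trip delay to the target, in microseconds."""
        return 2 * target_range_m / C * 1e6

    print(gate_delay_us(3000))           # ~20 us for a target at 3 km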
Earth Observations taken by the Expedition 14 crew
2006-11-07
ISS014-E-07480 (11 Nov. 2006) --- Dyess Air Force Base is featured in this image photographed by an Expedition 14 crewmember on the International Space Station. Dyess Air Force Base, located near the central Texas city of Abilene, is the home of the 7th Bomb Wing and the 317th Airlift Group of the United States Air Force. The base also conducts all initial Air Force combat crew training for the B-1B Lancer aircraft. The main runway is approximately 5 kilometers in length to accommodate the large bombers and cargo aircraft at the base -- many of which are parked in parallel rows on the base tarmac. Lieutenant Colonel William E. Dyess, for whom the base is named, was a highly decorated pilot, squadron commander, and prisoner of war during World War II. The nearby town of Tye, Texas was established by the Texas and Pacific Railway in 1881 and expanded considerably following the reactivation of a former airfield as Dyess Air Force Base in 1956. Airfields and airports are useful sites for astronauts to hone their long-lens photographic technique for acquiring high-resolution images. The sharp contrast of highly reflective linear features, such as runways, with darker agricultural fields and undisturbed land allows fine focusing of the cameras. This on-the-job training is key to obtaining high-resolution imagery of Earth, as well as acquiring inspection photographs of space shuttle thermal protection tiles during continuing missions to the International Space Station.
Coded-aperture Compton camera for gamma-ray imaging
NASA Astrophysics Data System (ADS)
Farber, Aaron M.
This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.
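For the coded-aperture half of such a hybrid design, the textbook reconstruction is correlation decoding of the recorded shadowgram with the mask's decoding array; a minimal sketch is given below, though the dissertation's own reconstruction methods may differ.

    import numpy as np
    from scipy.signal import correlate2d

    def decode(shadowgram, mask):
        # For URA-style masks the decoding array G is chosen so that the
        # correlation of mask and G approximates a delta function.
        G = 2.0 * mask - 1.0
        return correlate2d(shadowgram, G, mode='same')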
High Scalability Video ISR Exploitation
2012-10-01
Surveillance, ARGUS) on the National Image Interpretability Rating Scale (NIIRS) at level 6. Ultra-high quality cameras like the Digital Cinema 4K (DC-4K), which recognizes objects smaller than people, will be available...purchase ultra-high quality cameras like the Digital Cinema 4K (DC-4K) for use in the field. However, even if such a UAV sensor with a DC-4K was flown
Development of a Compact & Easy-to-Use 3-D Camera for High Speed Turbulent Flow Fields
2013-12-05
resolved. Also, in the case of a single camera system, the use of an aperture greatly reduces the amount of collected light. The combination of these...a study on wall-bounded turbulence [Sheng_2006]. Nevertheless, these techniques are limited to small measurement volumes, while maintaining a high...It has also been adapted to kHz rates using high-speed cameras for aeroacoustic studies (see Violato et al. [17, 18]. Tomo-PIV, however, has some
Model deformation measurements at a cryogenic wind tunnel using photogrammetry
NASA Technical Reports Server (NTRS)
Burner, A. W.; Snow, W. L.; Goad, W. K.
1985-01-01
A photogrammetric closed circuit television system to measure model deformation at the National Transonic Facility (NTF) is described. The photogrammetric approach was chosen because of its inherent rapid data recording of the entire object field. Video cameras are used to acquire data instead of film cameras due to the inaccessibility of cameras which must be housed within the cryogenic, high pressure plenum of this facility. Data reduction procedures and the results of tunnel tests at the NTF are presented.
Model Deformation Measurements at a Cryogenic Wind Tunnel Using Photogrammetry
NASA Technical Reports Server (NTRS)
Burner, A. W.; Snow, W. L.; Goad, W. K.
1982-01-01
A photogrammetric closed circuit television system to measure model deformation at the National Transonic Facility (NTF) is described. The photogrammetric approach was chosen because of its inherent rapid data recording of the entire object field. Video cameras are used to acquire data instead of film cameras due to the inaccessibility of cameras which must be housed within the cryogenic, high pressure plenum of this facility. Data reduction procedures and the results of tunnel tests at the NTF are presented.
2014-05-07
View of the High Definition Earth Viewing (HDEV) flight assembly installed on the exterior of the Columbus European Laboratory module. The image was released by an astronaut on Twitter. The High Definition Earth Viewing (HDEV) experiment places four commercially available HD cameras on the exterior of the space station and uses them to stream live video of Earth for viewing online. The cameras are enclosed in a temperature-specific housing and are exposed to the harsh radiation of space. Analysis of the effect of space on the video quality over the time HDEV is operational may help engineers decide which cameras are the best types to use on future missions. High school students helped design some of the cameras' components through the High Schools United with NASA to Create Hardware (HUNCH) program, and student teams operate the experiment.
Wilkes, Thomas C.; McGonigle, Andrew J. S.; Pering, Tom D.; Taggart, Angus J.; White, Benjamin S.; Bryant, Robert G.; Willmott, Jon R.
2016-01-01
Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements. PMID:27782054
Design of video interface conversion system based on FPGA
NASA Astrophysics Data System (ADS)
Zhao, Heng; Wang, Xiang-jun
2014-11-01
This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller serves as the information-interaction control unit between the FPGA and the PC. The system is able to encode and decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data-stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing-generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by the video processing unit through the Camera Link interface. The processed video signals are then fed to the system output board and displayed on the monitor. Experiments show that the system achieves high-quality video conversion with a minimal board size.
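One concrete stage of the pipeline above is the colour-space conversion. Below is a minimal sketch of a BT.601 YCbCr-to-RGB transform such a module would implement, written in floating point for clarity (the FPGA version would use fixed-point arithmetic); the exact standard used by this system is an assumption.

    def ycbcr_to_rgb(y, cb, cr):
        """Convert one full-range BT.601 YCbCr sample (0-255) to RGB."""
        r = y + 1.402 * (cr - 128)
        g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
        b = y + 1.772 * (cb - 128)
        return tuple(max(0, min(255, round(v))) for v in (r, g, b))

    print(ycbcr_to_rgb(128, 128, 128))   # mid-grey maps to (128, 128, 128)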
Optical Indoor Positioning System Based on TFT Technology
Gőzse, István
2015-01-01
A novel indoor positioning system is presented in the paper. Like camera-based solutions, it is based on visual detection, but it differs conceptually from the classical approaches. First, the objects are marked by LEDs, and second, a special sensing unit is applied, instead of a camera, to track the motion of the markers. This sensing unit realizes a modified pinhole camera model in which the light-sensing area is fixed and consists of a small number of sensing elements (photodiodes), while it is the hole that can be moved. The markers are tracked by controlling the motion of the hole such that the light of the LEDs always hits the photodiodes. The proposed concept has several advantages: apart from its low computational demands, it is insensitive to disturbing ambient light. Moreover, as every component of the system can be realized with simple and inexpensive elements, the overall cost of the system can be kept low. PMID:26712753
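A conceptual sketch of the tracking loop implied above: with photodiodes arranged around the movable hole, intensity imbalances steer the hole so that the LED image stays centred. The four-diode arrangement, gain, and interface here are assumptions for illustration, not the paper's actual design.

    def track_step(i_left, i_right, i_up, i_down, gain=0.1):
        """Return a (dx, dy) hole displacement from four photodiode readings."""
        total = (i_left + i_right + i_up + i_down) or 1e-9
        dx = gain * (i_right - i_left) / total   # steer toward the brighter side
        dy = gain * (i_up - i_down) / total
        return dx, dy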