1998-01-05
The Interferometer Protein Crystal Growth (IPCG) experiment was designed to measure details of how protein molecules move through a fluid. It was flown on the STS-86 mission for use aboard the Russian Space Station Mir in 1998. It studied aspects of how crystals grow and what conditions lead to the best crystals - details that remain a mystery. IPCG produces interference patterns by splitting and then recombining laser light. This lets scientists see how fluid densities - and molecular diffusion - change around a crystal as it grows in microgravity. The heart of the IPCG apparatus is the interferometer cell, comprising the optical bench, microscope, other optics, and video camera. IPCG experiment cells are made of optical glass and silvered on one side to serve as a mirror in the interferometer system that visualizes crystals and the conditions around them as they grow inside the cell. This diagram shows the optical layout. The principal investigator was Dr. Alexander McPherson of the University of California, Irvine. Co-investigators were William Witherow and Dr. Marc Pusey of NASA's Marshall Space Flight Center (MSFC).
1998-01-05
The Interferometer Protein Crystal Growth (IPCG) experiment was designed to measure details of how protein molecules move through a fluid. It was flown on the STS-86 mission for use aboard the Russian Space Station Mir in 1998. It studied aspects of how crystals grow and what conditions lead to the best crystals - details that remain a mystery. IPCG produces interference patterns by splitting and then recombining laser light. This lets scientists see how fluid densities - and molecular diffusion - change around a crystal as it grows in microgravity. The heart of the IPCG apparatus is the interferometer cell, comprising the optical bench, microscope, other optics, and video camera. IPCG experiment cells are made of optical glass and silvered on one side to serve as a mirror in the interferometer system that visualizes crystals and the conditions around them as they grow inside the cell. This view shows interferograms produced in ground tests. The principal investigator was Dr. Alexander McPherson of the University of California, Irvine. Co-investigators were William Witherow and Dr. Marc Pusey of NASA's Marshall Space Flight Center (MSFC).
1998-01-05
The Interferometer Protein Crystal Growth (IPCG) experiment was designed to measure details of how protein molecules move through a fluid. It was flown on the STS-86 mission for use aboard the Russian Space Station Mir in 1998. It studied aspects of how crystals grow and what conditions lead to the best crystals - details that remain a mystery. IPCG produces interference patterns by splitting and then recombining laser light. This lets scientists see how fluid densities - and molecular diffusion - change around a crystal as it grows in microgravity. The heart of the IPCG apparatus is the interferometer cell, comprising the optical bench, microscope, other optics, and video camera. IPCG experiment cells are made of optical glass and silvered on one side to serve as a mirror in the interferometer system that visualizes crystals and the conditions around them as they grow inside the cell. This view shows the complete apparatus. The principal investigator was Dr. Alexander McPherson of the University of California, Irvine. Co-investigators were William Witherow and Dr. Marc Pusey of NASA's Marshall Space Flight Center (MSFC).
1998-01-05
The Interferometer Protein Crystal Growth (IPCG) experiment was designed to measure details of how protein molecules move through a fluid. It was flown on the STS-86 mission for use aboard the Russian Space Station Mir in 1998. It studied aspects of how crystals grow and what conditions lead to the best crystals - details that remain a mystery. IPCG produces interference patterns by splitting and then recombining laser light. This lets scientists see how fluid densities - and molecular diffusion - change around a crystal as it grows in microgravity. The heart of the IPCG apparatus is the interferometer cell, comprising the optical bench, microscope, other optics, and video camera. IPCG experiment cells are made of optical glass and silvered on one side to serve as a mirror in the interferometer system that visualizes crystals and the conditions around them as they grow inside the cell. This diagram shows the growth cells. The principal investigator was Dr. Alexander McPherson of the University of California, Irvine. Co-investigators were William Witherow and Dr. Marc Pusey of NASA's Marshall Space Flight Center (MSFC).
1998-01-05
The Interferometer Protein Crystal Growth (IPCG) experiment was designed to measure details of how protein molecules move through a fluid. It was flown on the STS-86 mission for use aboard the Russian Space Station Mir in 1998. It studied aspects of how crystals grow and what conditions lead to the best crystals - details that remain a mystery. IPCG produces interference patterns by splitting and then recombining laser light. This lets scientists see how fluid densities - and molecular diffusion - change around a crystal as it grows in microgravity. The heart of the IPCG apparatus is the interferometer cell, comprising the optical bench, microscope, other optics, and video camera. IPCG experiment cells are made of optical glass and silvered on one side to serve as a mirror in the interferometer system that visualizes crystals and the conditions around them as they grow inside the cell. This view shows a large growth cell. The principal investigator was Dr. Alexander McPherson of the University of California, Irvine. Co-investigators were William Witherow and Dr. Marc Pusey of NASA's Marshall Space Flight Center (MSFC).
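The fringe-counting principle behind such an interferometer admits a simple back-of-the-envelope calculation. The Python sketch below uses placeholder numbers (a HeNe wavelength, a 1 cm cell, a typical protein refractive-index increment - none of them taken from the actual IPCG hardware) and assumes the silvered cell wall gives the beam a double pass through the fluid.

```python
# Illustrative sketch (not flight software): converting an interferometric
# fringe shift into refractive-index and protein-concentration changes.
# Assumes a double-pass cell (back wall silvered, so light crosses twice);
# all numeric values are placeholders, not IPCG's actual parameters.

def fringe_to_concentration(n_fringes, wavelength_m, cell_depth_m, dn_dc):
    """Return (delta_n, delta_c) implied by a shift of n_fringes."""
    # One fringe = one wavelength of optical path difference (OPD).
    # Double pass: OPD = 2 * d * delta_n  =>  delta_n = N * lambda / (2 d)
    delta_n = n_fringes * wavelength_m / (2.0 * cell_depth_m)
    delta_c = delta_n / dn_dc          # linear refractive-index increment
    return delta_n, delta_c

dn, dc = fringe_to_concentration(
    n_fringes=3.0,
    wavelength_m=632.8e-9,   # HeNe laser, a common interferometry source
    cell_depth_m=1.0e-2,     # 1 cm cell depth (assumed)
    dn_dc=0.19e-3,           # m^3/kg, typical protein value (~0.19 mL/g)
)
print(f"delta n = {dn:.2e}, delta c = {dc:.2e} kg/m^3")
```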
Red ball ranging optimization based on dual camera ranging method
NASA Astrophysics Data System (ADS)
Kuang, Lei; Sun, Weijia; Liu, Jiaming; Tang, Matthew Wai-Chung
2018-05-01
In this paper, the process by which a NAO robot locates and moves to a target red ball using its camera system is analyzed and improved using a dual camera ranging method. The single camera ranging method adopted by the NAO robot was first studied and tested. Since the error of the current NAO robot is not a single variable, the experiments were divided into two parts - forward ranging and backward ranging - to obtain more accurate single camera ranging data. Moreover, two USB cameras were used in our experiments; the Hough circle method was applied to identify the ball, while the HSV color space model was used to identify the red color. Our results showed that the dual camera ranging method reduced the variance of the ball-tracking error from 0.68 to 0.20.
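As a rough illustration of the pipeline the abstract describes (HSV color segmentation, Hough circle detection, two-camera triangulation), here is a Python/OpenCV sketch. The focal length, baseline, thresholds, and file names are placeholders; the paper's actual implementation is not reproduced.

```python
# Minimal sketch: red-ball detection via HSV mask + Hough circles, and
# range estimation from the horizontal disparity between two cameras.
# focal_px and baseline_m are assumed values, not the NAO's calibration.
import cv2
import numpy as np

def find_red_ball(bgr):
    """Return (x, y, r) of the strongest red circle, or None."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0, so combine two hue ranges.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    mask = cv2.medianBlur(mask, 5)
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=20, minRadius=5, maxRadius=200)
    if circles is None:
        return None
    return max(np.round(circles[0]).astype(int), key=lambda c: c[2])

def stereo_range(x_left, x_right, focal_px=700.0, baseline_m=0.10):
    """Depth from the ball center's disparity in two rectified views."""
    disparity = float(x_left - x_right)        # pixels
    return focal_px * baseline_m / disparity   # metres

left, right = cv2.imread("left.jpg"), cv2.imread("right.jpg")  # placeholder files
bl, br = find_red_ball(left), find_red_ball(right)
if bl is not None and br is not None:
    print(f"range ~ {stereo_range(bl[0], br[0]):.2f} m")
```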
NASA Technical Reports Server (NTRS)
Vaughan, O. H., Jr.
1990-01-01
Information on the data obtained from the Mesoscale Lightning Experiment flown on STS-26 is provided. The experiment used onboard TV cameras and a 35 mm film camera to obtain data. Data from the 35 mm camera are presented. During the mission, the crew had difficulty locating the various targets of opportunity with the TV cameras. To obtain as much data as possible in the short observational timeline allowed due to other commitments, the crew opted to use the hand-held 35 mm camera.
Improving accuracy of Plenoptic PIV using two light field cameras
NASA Astrophysics Data System (ADS)
Thurow, Brian; Fahringer, Timothy
2017-11-01
Plenoptic particle image velocimetry (PIV) has recently emerged as a viable technique for acquiring three-dimensional, three-component velocity field data using a single plenoptic, or light field, camera. The simplified experimental arrangement is advantageous in situations where optical access is limited and/or it is not possible to set up the four or more cameras typically required in a tomographic PIV experiment. A significant disadvantage of a single-camera plenoptic PIV experiment, however, is that the accuracy of the velocity measurement along the optical axis of the camera is significantly worse than in the two lateral directions. In this work, we explore the accuracy of plenoptic PIV when two plenoptic cameras are arranged in a stereo imaging configuration. It is found that the addition of a second camera improves the accuracy in all three directions and nearly eliminates any differences between them. This improvement is illustrated with synthetic and real experiments conducted on a vortex ring using one and two plenoptic cameras.
Noise and sensitivity of x-ray framing cameras at Nike (abstract)
NASA Astrophysics Data System (ADS)
Pawley, C. J.; Deniz, A. V.; Lehecka, T.
1999-01-01
X-ray framing cameras are the most widely used tool for radiographing density distributions in laser and Z-pinch driven experiments. The x-ray framing cameras that were developed specifically for experiments on the Nike laser system are described. One of these cameras has been coupled to a CCD camera and was tested for resolution and image noise using both electrons and x rays. The largest source of noise in the images was found to be due to low quantum detection efficiency of x-ray photons.
ERIC Educational Resources Information Center
Ruiz, Michael J.
1982-01-01
The camera presents an excellent way to illustrate principles of geometrical optics. Basic camera optics of the single-lens reflex camera are discussed, including interchangeable lenses and accessories available to most owners. Several experiments are described and results compared with theoretical predictions or manufacturer specifications.…
NASA Technical Reports Server (NTRS)
Kuebert, E. J.
1977-01-01
A Laser Altimeter and Mapping Camera System was included in the Apollo Lunar Orbital Experiment Missions. The backup system, never used in the Apollo Program, is available for use in the Lidar Test Experiments on the STS Orbital Flight Tests 2 and 4. Studies were performed to assess the problems associated with installation and operation of the Mapping Camera System in the STS. The studies covered the photographic capabilities of the Mapping Camera System, its mechanical and electrical interfaces with the STS, documentation, operation and survivability in the expected environments, ground support equipment, and test and field support.
Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio
2014-11-01
We developed a new ultrahigh-sensitive CMOS camera using a specific sensor that has a wide range of spectral sensitivity characteristics. The objective of this study is to present our updated endoscopic technology, which successfully integrates two innovative functions: ultrasensitive imaging and advanced fluorescence viewing. Two different experiments were conducted. One was carried out to evaluate the function of the ultrahigh-sensitive camera. The other was to test the availability of the newly developed sensor and its performance as a fluorescence endoscope. In both studies, the distance from the endoscopic tip to the target was varied and the endoscopic images in each setting were taken for comparison. In the first experiment, the 3-CCD camera failed to display clear images under low illumination, and the target was hardly visible. In contrast, the CMOS camera was able to display the targets regardless of the camera-target distance under low illumination. Under high illumination, the imaging quality of the two cameras was quite similar. In the second experiment, as a fluorescence endoscope, the CMOS camera was capable of clearly showing the fluorescence-activated organs. The ultrahigh-sensitive CMOS HD endoscopic camera is expected to provide clear images under low illumination, in addition to fluorescent images under high illumination, in the field of laparoscopic surgery.
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2011-01-01
A selection of hands-on experiments from different fields of physics, which happen too fast for the eye or video cameras to properly observe and analyse the phenomena, is presented. They are recorded and analysed using modern high speed cameras. Two types of cameras were used: the first were rather inexpensive consumer products such as Casio…
ERIC Educational Resources Information Center
Squibb, Matt
2009-01-01
This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)
Imaging experiment: The Viking Lander
Mutch, T.A.; Binder, A.B.; Huck, F.O.; Levinthal, E.C.; Morris, E.C.; Sagan, C.; Young, A.T.
1972-01-01
The Viking Lander Imaging System will consist of two identical facsimile cameras. Each camera has a high-resolution mode with an instantaneous field of view of 0.04°, and survey and color modes with instantaneous fields of view of 0.12°. Cameras are positioned one meter apart to provide stereoscopic coverage of the near-field. The Imaging Experiment will provide important information about the morphology, composition, and origin of the Martian surface and atmospheric features. In addition, lander pictures will provide supporting information for other experiments in biology, organic chemistry, meteorology, and physical properties. © 1972.
Engineer's drawing of Skylab 4 Far Ultraviolet Electronographic camera
1973-11-19
S73-36910 (November 1973) --- An engineer's drawing of the Skylab 4 Far Ultraviolet Electronographic camera (Experiment S201). Arrows point to various features and components of the camera. As the Comet Kohoutek streams through space at speeds of 100,000 miles per hour, the Skylab 4 crewmen will use the S201 UV camera to photograph features of the comet not visible from the Earth's surface. While the comet is some distance from the sun, the camera will be pointed through the scientific airlock in the wall of the Skylab space station Orbital Workshop (OWS). By using a movable mirror system built for the Ultraviolet Stellar Astronomy (S019) Experiment and rotating the space station, the S201 camera will be able to photograph the comet around the side of the space station. Photo credit: NASA
ERIC Educational Resources Information Center
Jeppsson, Fredrik; Frejd, Johanna; Lundmark, Frida
2017-01-01
This study focuses on investigating how students make use of their bodily experiences in combination with infrared (IR) cameras, as a way to make meaning in learning about heat, temperature, and friction. A class of 20 primary students (age 7-8 years), divided into three groups, took part in three IR camera laboratory experiments. The qualitative…
1991-04-03
The USML-1 Glovebox (GBX) is a multi-user facility supporting 16 experiments in fluid dynamics, combustion sciences, crystal growth, and technology demonstration. The GBX has an enclosed working space which minimizes the contamination risks to both Spacelab and experiment samples. The GBX supports four charge-coupled device (CCD) cameras (two of which may be operated simultaneously) with three black-and-white and three color camera CCD heads available. The GBX also has a backlight panel, a 35 mm camera, and a stereomicroscope that offers high-magnification viewing of experiment samples. Video data can also be downlinked in real-time. The GBX also provides electrical power for experiment hardware, a time-temperature display, and cleaning supplies.
A multiple camera tongue switch for a child with severe spastic quadriplegic cerebral palsy.
Leung, Brian; Chau, Tom
2010-01-01
The present study proposed a video-based access technology that facilitated a non-contact tongue protrusion access modality for a 7-year-old boy with severe spastic quadriplegic cerebral palsy (GMFCS level 5). The proposed system featured a centre camera and two peripheral cameras to extend coverage of the frontal face view of this user for longer durations. The child participated in a descriptive case study. The participant underwent 3 months of tongue protrusion training while the multiple camera tongue switch prototype was being prepared. Later, the participant was brought back for five experiment sessions where he worked on a single-switch picture matching activity, using the multiple camera tongue switch prototype in a controlled environment. The multiple camera tongue switch achieved an average sensitivity of 82% and specificity of 80%. In three of the experiment sessions, the peripheral cameras were associated with most of the true positive switch activations. These activations would have been missed by a centre-camera-only setup. The study demonstrated proof-of-concept of a non-contact tongue access modality implemented by a video-based system involving three cameras and colour video processing.
Flow visualization by mobile phone cameras
NASA Astrophysics Data System (ADS)
Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.
2016-06-01
Mobile smart phones have completely changed people's communication within the last ten years. However, these devices not only offer communication through different channels but also hardware and applications for fun and recreation. In this respect, mobile phone cameras now include relatively fast (up to 240 Hz) modes to capture high-speed videos of sports events or other fast processes. This article therefore explores the possibility of making use of this development, and of the widespread availability of these cameras, for velocity measurements in industrial or technical applications and in fluid dynamics education at high schools and universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment of a free water jet was used to prove the concept, shed some light on the achievable quality, and determine bottlenecks by comparing the results obtained with a mobile phone camera against data taken by a high-speed camera suited for scientific experiments.
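For readers unfamiliar with PIV, the core computation such a simplistic system needs is small: cross-correlate interrogation windows between two consecutive frames and read the displacement off the correlation peak. A minimal numpy sketch, with an illustrative window size and a synthetic test case:

```python
# Minimal PIV sketch: per-window displacement from the peak of the
# FFT-based circular cross-correlation of two frames.
import numpy as np

def piv_displacements(frame_a, frame_b, win=32):
    """Return per-window (y, x, dy, dx) displacement estimates."""
    h, w = frame_a.shape
    vectors = []
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            a = frame_a[y:y+win, x:x+win] - frame_a[y:y+win, x:x+win].mean()
            b = frame_b[y:y+win, x:x+win] - frame_b[y:y+win, x:x+win].mean()
            # Cross-correlation via FFT; peak location gives the shift.
            corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
            corr = np.fft.fftshift(corr)
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            vectors.append((y, x, dy - win // 2, dx - win // 2))
    return vectors

rng = np.random.default_rng(0)
img = rng.random((128, 128))
shifted = np.roll(img, (3, 5), axis=(0, 1))   # known 3 px down, 5 px right
print(piv_displacements(img, shifted)[0])      # expect (0, 0, 3, 5)
```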
NASA Astrophysics Data System (ADS)
Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu
2015-04-01
For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation must be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber, and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
Cameras in the Courtroom: A U.S. Survey. Journalism Monographs No. 60.
ERIC Educational Resources Information Center
White, Frank Wm.
Changes in the prohibition against cameras in state courtrooms are examined in this report. It provides a historical sketch of camera usage in the courtroom since 1935 and reports on the states permitting still, videotape, film cameras, and other electronic equipment in courtrooms since 1978, on the states now experimenting with the matter, and on…
Presence capture cameras - a new challenge to the image quality
NASA Astrophysics Data System (ADS)
Peltoketo, Veli-Tapani
2016-04-01
Commercial presence capture cameras are coming to the market and a new era of visual entertainment is starting to take shape. Since true presence capturing is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary a lot. Presence capture cameras still have the same quality issues to tackle as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones, and, especially, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution, and dynamic range create the basis of virtual reality stream quality. However, the cooperation of several cameras brings a new dimension to these quality factors, and new quality features can be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? The work describes the quality factors which remain valid in presence capture cameras and defines their importance. Moreover, the new challenges of presence capture cameras are investigated from an image and video quality point of view. The work also considers how well current measurement methods can be applied to presence capture cameras.
2011-07-01
Cameras were installed around the test pan and an underwater GoPro video camera recorded the fire from below the layer of fuel. ... A GoPro video camera with a wide angle lens recorded the tests ... The camera and the GoPro video camera were not used for fire suppression experiments. ... Two ¼-in thick stainless steel test pans were
SHUTTLE - PAYLOADS (STS-41G) - KSC
1984-10-05
Payload canister transporter in Vertical Processing Facility Clean Room loaded with the Earth Radiation Budget Satellite (ERBS), Large Format Camera (LFC), and Orbital Refueling System (ORS) for the STS-41G Mission. 1. STS-41G - EXPERIMENTS 2. CAMERAS - LFC KSC, FL Also available in 4x5 CN
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.
2014-06-01
As is well known, the application of passive THz cameras to security problems is a very promising approach: it allows a concealed object to be seen without contact, and the camera poses no danger to the person. We demonstrate a new possibility of using a passive THz camera to observe temperature differences on the human skin when those differences are caused by different temperatures inside the body. We discuss physical experiments in which a person drinks hot, warm, or cold water, or eats. After computer processing of images captured by the passive THz camera TS4, a pronounced temperature trace can be seen on the skin of the human body. To validate this claim, we performed a similar physical experiment using an IR camera. Our investigation broadens the field of application of passive THz cameras to the detection of objects concealed in the human body, because a difference in temperature between the object and parts of the human body is reflected on the skin. However, modern passive THz cameras do not have enough temperature resolution to see this difference; that is why we use computer processing to enhance the camera resolution for this application. We consider images produced by passive THz cameras manufactured by Microsemi Corp. and ThruVision Corp.
High-Resolution Mars Camera Test Image of Moon Infrared
2005-09-13
This crescent view of Earth's moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The image was taken by the High Resolution Imaging Science Experiment camera on Sept. 8, 2005.
STS-31 crew activity on the middeck of the Earth-orbiting Discovery, OV-103
1990-04-29
STS031-05-002 (24-29 April 1990) --- A 35mm camera with a "fish eye" lens captured this high angle image on Discovery's middeck. Astronaut Kathryn D. Sullivan works with the IMAX camera in the foreground, while astronaut Steven A. Hawley consults a checklist in the corner. An Arriflex motion picture camera records a student ion arc experiment in apparatus mounted on a stowage locker. The experiment was the project of Gregory S. Peterson, currently a student at Utah State University.
Design and fabrication of a CCD camera for use with relay optics in solar X-ray astronomy
NASA Technical Reports Server (NTRS)
1984-01-01
Configured as a subsystem of a sounding rocket experiment, a camera system was designed to record and transmit an X-ray image focused on a charge coupled device. The camera consists of an X-ray sensitive detector and the electronics for processing and transmitting image data. The design and operation of the camera are described. Schematics are included.
Automatic calibration method for plenoptic camera
NASA Astrophysics Data System (ADS)
Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao
2016-04-01
An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images in the white image are located and recognized automatically based on digital morphology. Then, the center points of the microlens images are rearranged according to their relative positions. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated, without prior knowledge of the camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, including the multifocus plenoptic camera, plenoptic cameras with arbitrarily arranged microlenses, and plenoptic cameras with different sizes of microlenses. Finally, we verify our method on raw data from Lytro. The experiments show that our method is more automated than previously published methods.
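A rough sketch of the white-image idea (threshold, morphological clean-up, connected-component centroids) is shown below in Python with scipy; the paper's actual morphology operations and the grid-rearrangement step are not reproduced, and the threshold and synthetic test are illustrative.

```python
# Sketch: estimate microlens image centers from a plenoptic white image
# by binarizing, opening away specks, labeling blobs, taking centroids.
import numpy as np
from scipy import ndimage

def microlens_centers(white_image):
    """Return unordered (row, col) centers of microlens images."""
    img = white_image.astype(float)
    img /= img.max()
    # Binarize: pixels well above background belong to a microlens image.
    blobs = img > 0.5
    # Morphological opening removes specks smaller than a microlens image.
    blobs = ndimage.binary_opening(blobs, structure=np.ones((3, 3)))
    labels, n = ndimage.label(blobs)
    centers = ndimage.center_of_mass(img, labels, index=range(1, n + 1))
    return np.array(centers)

# Synthetic white image: a 5x5 grid of Gaussian spots, pitch 20 px.
yy, xx = np.mgrid[0:100, 0:100]
white = sum(np.exp(-((yy - cy)**2 + (xx - cx)**2) / 8.0)
            for cy in range(10, 100, 20) for cx in range(10, 100, 20))
print(microlens_centers(white).shape)   # expect (25, 2)
```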
Camera Perspective Bias in Videotaped Confessions: Evidence that Visual Attention Is a Mediator
ERIC Educational Resources Information Center
Ware, Lezlee J.; Lassiter, G. Daniel; Patterson, Stephen M.; Ransom, Michael R.
2008-01-01
Several experiments have demonstrated a "camera perspective bias" in evaluations of videotaped confessions: videotapes with the camera focused on the suspect lead to judgments of greater voluntariness than alternative presentation formats. The present research investigated potential mediators of this bias. Using eye tracking to measure visual…
Development of biostereometric experiments. [stereometric camera system
NASA Technical Reports Server (NTRS)
Herron, R. E.
1978-01-01
The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.
NASA Astrophysics Data System (ADS)
Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team
2018-01-01
A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator, considering multiple application needs and the limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data, which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and their capabilities were explored in the real environment. The data prove that the camera can be used for taking long exposure (10-100 ms) overview images of the plasma while sub-ms monitoring, and even multi-camera correlated edge plasma turbulence measurements of smaller areas, can be done in parallel. The latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.
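The ROI evaluation logic described (minimum/maximum/mean compared to levels, with the result feeding back into the readout) can be captured conceptually in a few lines. The Python sketch below is purely illustrative: thresholds, ROI coordinates, and frame sizes are invented, and EDICAM implements this on the sensor/firmware side, not in Python.

```python
# Conceptual sketch of ROI-based event detection driving readout changes.
import numpy as np

class RoiMonitor:
    def __init__(self, y0, y1, x0, x1, mean_level, max_level):
        self.sl = (slice(y0, y1), slice(x0, x1))
        self.mean_level = mean_level
        self.max_level = max_level

    def evaluate(self, frame):
        """Return a trigger flag from a (non-destructive) ROI readout."""
        roi = frame[self.sl]
        return roi.mean() > self.mean_level or roi.max() > self.max_level

monitor = RoiMonitor(100, 132, 200, 232, mean_level=500.0, max_level=4000.0)
rng = np.random.default_rng(1)
for t in range(5):
    frame = rng.normal(400.0, 20.0, size=(1024, 1280))
    if t == 3:
        frame[110:120, 210:220] += 5000.0      # simulated bright event
    if monitor.evaluate(frame):
        print(f"frame {t}: event detected -> switch to fast ROI readout")
```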
NASA Astrophysics Data System (ADS)
Harvey, Nate
2016-08-01
Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
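For concreteness, the basic quantities in such an analysis - the inter-camera quaternion and its auto-covariance - can be computed as follows. This numpy sketch assumes Hamilton quaternions in (w, x, y, z) order and small inter-camera misalignments; it is not the GRACE Level-1B processing code.

```python
# Sketch: inter-camera quaternion q_rel = conj(q1) * q2 per epoch, its
# small-angle components in arcsec, and an auto-covariance estimator.
import numpy as np

def qmul(a, b):
    w1, x1, y1, z1 = a.T
    w2, x2, y2, z2 = b.T
    return np.stack([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2], axis=-1)

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def inter_camera_angles_arcsec(q1, q2):
    """Small-angle components of conj(q1)*q2, in arcsec (N x 3)."""
    q_rel = qmul(qconj(q1), q2)
    # For small rotations the vector part is half the per-axis angle.
    return 2.0 * q_rel[:, 1:] * (180.0 / np.pi) * 3600.0

def autocovariance(x, max_lag):
    """Biased auto-covariance of a 1-D series up to max_lag."""
    x = x - x.mean()
    return np.array([np.mean(x[:len(x)-k] * x[k:]) for k in range(max_lag)])
```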
The Mast Cameras and Mars Descent Imager (MARDI) for the 2009 Mars Science Laboratory
NASA Technical Reports Server (NTRS)
Malin, M. C.; Bell, J. F.; Cameron, J.; Dietrich, W. E.; Edgett, K. S.; Hallet, B.; Herkenhoff, K. E.; Lemmon, M. T.; Parker, T. J.; Sullivan, R. J.
2005-01-01
Based on operational experience gained during the Mars Exploration Rover (MER) mission, we proposed and were selected to conduct two related imaging experiments: (1) an investigation of the geology and short-term atmospheric vertical wind profile local to the Mars Science Laboratory (MSL) landing site using descent imaging, and (2) a broadly-based scientific investigation of the MSL locale employing visible and very near infra-red imaging techniques from a pair of mast-mounted, high resolution cameras. Both instruments share a common electronics design, a design also employed for the MSL Mars Hand Lens Imager (MAHLI) [1]. The primary differences between the cameras are in the nature and number of mechanisms and the specific optics tailored to each camera's requirements.
The High Definition Earth Viewing (HDEV) Payload
NASA Technical Reports Server (NTRS)
Muri, Paul; Runco, Susan; Fontanot, Carlos; Getteau, Chris
2017-01-01
The High Definition Earth Viewing (HDEV) payload enables long-term experimentation with four commercial-off-the-shelf (COTS) high definition video cameras mounted on the exterior of the International Space Station. The payload enables testing of cameras in the space environment. The HDEV cameras transmit imagery continuously to an encoder that then sends the video signal via Ethernet through the space station for downlink. The encoder, cameras, and other electronics are enclosed in a box pressurized to approximately one atmosphere, containing dry nitrogen, to provide a level of protection to the electronics from the space environment. The encoded video format supports streaming live video of Earth for viewing online. Camera sensor types include charge-coupled device and complementary metal-oxide semiconductor. Received imagery data is analyzed on the ground to evaluate camera sensor performance. Since payload deployment, minimal degradation to imagery quality has been observed. The HDEV payload continues to operate by live streaming and analyzing imagery. Results from the experiment reduce risk in the selection of cameras that could be considered for future use on the International Space Station and other spacecraft. This paper discusses the payload development, end-to-end architecture, experiment operation, resulting image analysis, and future work.
Improving depth maps of plants by using a set of five cameras
NASA Astrophysics Data System (ADS)
Kaczmarek, Adam L.
2015-03-01
Obtaining high-quality depth maps and disparity maps with a stereo camera is a challenging task for some kinds of objects. The quality of these maps can be improved by taking advantage of a larger number of cameras. Research on the use of a set of five cameras to obtain disparity maps is presented. The set consists of a central camera and four side cameras. An algorithm for making disparity maps, called multiple similar areas (MSA), is introduced. The algorithm was specially designed for the set of five cameras. Experiments were performed with the MSA algorithm and a stereo matching algorithm based on the sum of sum of squared differences (sum of SSD, SSSD) measure. Moreover, the following measures were included in the experiments: sum of absolute differences (SAD), zero-mean SAD (ZSAD), zero-mean SSD (ZSSD), locally scaled SAD (LSAD), locally scaled SSD (LSSD), normalized cross correlation (NCC), and zero-mean NCC (ZNCC). The algorithms were applied to images of plants. Making depth maps of plants is difficult because parts of leaves are similar to each other. The potential usability of the described algorithms is especially high in agricultural applications such as robotic fruit harvesting.
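As background for the similarity measures listed above, the sketch below shows plain window-based stereo matching with two of them (SSD and ZNCC) in Python; the MSA algorithm itself and the five-camera geometry are not reproduced, and the window size and disparity range are illustrative.

```python
# Toy block-matching disparity map with interchangeable cost functions.
import numpy as np

def ssd(a, b):
    return np.sum((a - b) ** 2)              # lower is better

def zncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b)) + 1e-12
    return -np.sum(a * b) / denom            # negated so lower is better

def disparity_map(left, right, win=5, max_disp=16, cost=ssd):
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            ref = left[y-r:y+r+1, x-r:x+r+1]
            costs = [cost(ref, right[y-r:y+r+1, x-d-r:x-d+r+1])
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

rng = np.random.default_rng(0)
left = rng.random((40, 60))
right = np.roll(left, -4, axis=1)        # true disparity: 4 px everywhere
d = disparity_map(left, right)
print(np.bincount(d[10:-10, 20:-10].ravel()).argmax())   # expect 4
```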
IET. Aerial view of SNAPTRAN destructive experiment in 1964. Camera ...
IET. Aerial view of SNAPTRAN destructive experiment in 1964. Camera facing north. Test cell building (TAN-624) is positioned away from coupling station. Weather tower in right foreground. Divided duct just beyond coupling station. Air intake structure on south side of shielded control room. Experiment is on dolly at coupling station. Date: 1964. INEEL negative no. 64-1736 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Foam Experiment Hardware are Flown on Microgravity Rocket MAXUS 4
NASA Astrophysics Data System (ADS)
Lockowandt, C.; Löth, K.; Jansson, O.; Holm, P.; Lundin, M.; Schneider, H.; Larsson, B.
2002-01-01
The Foam module was developed by the Swedish Space Corporation and was used to perform foam experiments on the sounding rocket MAXUS 4, launched from Esrange on 29 April 2001. The development and launch of the module were financed by ESA. Four different foam experiments were performed: two on aqueous foams by Dr. Michele Adler from LPMDI, University of Marne la Vallée, Paris, and two on non-aqueous foams by Dr. Bengt Kronberg of YKI, Institute for Surface Chemistry, Stockholm. The foam was generated in four separate foam systems and monitored in microgravity with CCD cameras. The purpose of the experiment was to generate and study foam in microgravity. In the absence of gravity there is no drainage in the foam, so the reactions in the foam can be studied without drainage. Four solutions with various stabilities were investigated. The aqueous solutions contained water, SDS (sodium dodecyl sulphate), and dodecanol. The organic solutions contained ethylene glycol, a cationic surfactant, cetyl trimethyl ammonium bromide (CTAB), and decanol. Carbon dioxide was used to generate the aqueous foam and nitrogen was used to generate the organic foam. The experiment system comprised four complete, independent systems, each with an injection unit, experiment chamber, and gas system. The main part of the experiment system is the experiment chamber where the foam is generated and monitored. The chamber's inner dimensions are 50x50x50 mm, and it has front and back walls made of glass. The front window is used for monitoring the foam and the back window is used for back illumination. The front glass has etched crosses on the inside as reference points. At the bottom of the cell is a glass frit and at the top is a gas in/outlet. The foam was generated by injecting the experiment liquid into the glass frit at the bottom of the experiment chamber. Simultaneously, gas was blown through the glass frit and a small amount of foam was generated. This procedure was performed at 10 bar. The pressure in the experiment chamber was then lowered to approximately 0.1 bar to expand the foam into a dry foam that filled the experiment chamber. The foam was regenerated during flight by pressurising the cell and repeating the foam generation procedure. The module had four individual experiment chambers for the four different solutions, controlled individually with individual experiment parameters and procedures. The gas system comprises on/off valves and adjustable valves to control the pressure, the gas flow, and the liquid flow during foam generation. The gas system can be divided into four sections, each section serving one experiment chamber. The sections are connected in two pairs with a common inlet and outlet. Each pair is supplied with a 1 l gas bottle filled to a pressure of 40 bar and a pressure regulator lowering the pressure from 40 bar to 10 bar. Two sections are connected to the same outlet. The gas outlets from the experiment chambers are connected to two symmetrically placed outlets on the outer structure, with diffusers so as not to disturb the g-levels. The foam in each experiment chamber was monitored with one tomography camera and one overview camera (8 CCD cameras in total). The tomography camera is placed on a translation table, which makes it possible to move it in the depth direction of the experiment chamber. The video signals from the 8 CCD cameras were stored onboard with two DV recorders. Two video signals were also transmitted to ground for real-time evaluation and operation of the experiment.
The camera signal transmitted to ground could be selected by telecommand. With the help of the tomography system it was possible to take sequences of images of the foam at different depths. These image sequences are used to construct a 3-D model of the foam after flight. The overview camera has a fixed position and a field of view that covers the whole experiment chamber. This camera is used to monitor the generation of foam and the overall behaviour of the foam. The experiment was performed successfully, with foam generation in all four experiment chambers. Foam was also regenerated during flight by telecommand. The experiment data are under evaluation.
Soft x-ray streak camera for laser fusion applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stradling, G.L.
This thesis reviews the development and significance of the soft x-ray streak camera (SXRSC) in the context of inertial confinement fusion energy development. A brief introduction to laser fusion and laser fusion diagnostics is presented. The need for a soft x-ray streak camera as a laser fusion diagnostic is shown. Basic x-ray streak camera characteristics, design, and operation are reviewed. The SXRSC design criteria, the requirement for a subkilovolt x-ray transmitting window, and the resulting camera design are explained. Theory and design of reflector-filter pair combinations for three subkilovolt channels centered at 220 eV, 460 eV, and 620 eV are also presented. Calibration experiments are explained, and data showing a dynamic range of 1000 and a sweep speed of 134 psec/mm are presented. Sensitivity modifications to the soft x-ray streak camera for a high-power target shot are described. A preliminary investigation, using a stepped cathode, of the thickness dependence of the gold photocathode response is discussed. Data from a typical Argus laser gold-disk target experiment are shown.
Applying compressive sensing to TEM video: A substantial frame rate increase on any camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Andrew; Kovarik, Libor; Abellan, Patricia
One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.
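The forward model behind the approach is compact enough to state in code. The numpy sketch below simulates only the coding/integration step (random binary masks summing T sub-frames into one readout frame); the statistical inversion that recovers the sub-frames is the hard part and is not reproduced here. All sizes and mask statistics are illustrative, not the authors' design.

```python
# Toy coded-aperture forward model: T mask-modulated sub-frames are
# integrated into a single camera frame during one readout period.
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 64, 64                          # sub-frames per readout frame

# Moving-dot scene: one bright particle drifting across the field of view.
scene = np.zeros((T, H, W))
for t in range(T):
    scene[t, 20 + 2 * t, 10 + 3 * t] = 1000.0

masks = rng.integers(0, 2, size=(T, H, W)).astype(float)   # coded aperture
coded_frame = (masks * scene).sum(axis=0)    # what the camera actually reads

# Recovering the T sub-frames from coded_frame requires a statistical CS
# inversion exploiting scene sparsity/structure (omitted here).
print(f"camera frames needed without coding: {T}, with coding: 1")
print(f"effective frame-rate gain: {T}x at the same readout rate")
```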
CAMERA: An integrated strategy for compound spectra extraction and annotation of LC/MS data sets
Kuhl, Carsten; Tautenhahn, Ralf; Böttcher, Christoph; Larson, Tony R.; Neumann, Steffen
2013-01-01
Liquid chromatography coupled to mass spectrometry is routinely used for metabolomics experiments. In contrast to the fairly routine and automated data acquisition steps, subsequent compound annotation and identification require extensive manual analysis and thus form a major bottleneck in data interpretation. Here we present CAMERA, a Bioconductor package integrating algorithms to extract compound spectra, annotate isotope and adduct peaks, and propose the accurate compound mass even in highly complex data. To evaluate the algorithms, we compared the annotation of CAMERA against a manually defined annotation for a mixture of known compounds spiked into a complex matrix at different concentrations. CAMERA successfully extracted accurate masses for 89.7% and 90.3% of the annotatable compounds in positive and negative ion mode, respectively. Furthermore, we present a novel annotation approach that combines spectral information of data acquired in opposite ion modes to further improve the annotation rate. We demonstrate the utility of CAMERA in two different, easily adoptable plant metabolomics experiments, where the application of CAMERA drastically reduced the amount of manual analysis. PMID:22111785
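CAMERA itself is an R/Bioconductor package, so the following is only a language-neutral Python illustration of one of its core ideas - grouping peaks into candidate isotope clusters by the 13C-12C mass difference at matching retention times. The tolerances and the peak list are invented; this is not CAMERA's API.

```python
# Toy isotope-pair finder: peaks whose m/z differ by ~1.003355/charge and
# whose retention times coincide are candidate isotopologue pairs.
C13_DELTA = 1.003355          # mass difference between 13C and 12C, in u

def find_isotope_pairs(peaks, mz_tol=0.005, rt_tol=2.0, max_charge=3):
    """peaks: list of (mz, rt, intensity). Returns (i, j, charge) pairs."""
    pairs = []
    for i, (mz1, rt1, int1) in enumerate(peaks):
        for j, (mz2, rt2, int2) in enumerate(peaks):
            if i == j or abs(rt1 - rt2) > rt_tol or int2 > int1:
                continue          # the isotopologue peak should be smaller
            for z in range(1, max_charge + 1):
                if abs((mz2 - mz1) - C13_DELTA / z) < mz_tol:
                    pairs.append((i, j, z))
    return pairs

peaks = [(445.120, 310.1, 1.0e6),   # hypothetical [M+H]+ monoisotopic peak
         (446.123, 310.2, 2.4e5),   # its 13C isotopologue
         (519.140, 128.0, 8.0e5)]   # unrelated peak
print(find_isotope_pairs(peaks))    # expect [(0, 1, 1)]
```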
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2012-01-01
The recent introduction of inexpensive high-speed cameras offers a new experimental approach to many simple but fast-occurring events in physics. In this paper, the authors present two simple demonstration experiments recorded with high-speed cameras in the fields of gas dynamics and thermal physics. The experiments feature vapour pressure effects…
LPT. Low power test (TAN640) interior. Basement level. Camera facing ...
LPT. Low power test (TAN-640) interior. Basement level. Camera facing north. Cable trays and conduit cross tunnel between critical experiment cell and critical experiment control room. Construction 93% complete. Photographer: Jack L. Anderson. Date: October 23, 1957. INEEL negative no. 57-5339 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Shestakov, Ivan L.; Blednov, Roman G.
2017-05-01
One of the urgent security problems is the detection of objects placed inside the human body. Obviously, for safety reasons, X-rays cannot be used widely and often for such detection. For this purpose, we propose to use a THz camera and an IR camera. Below we continue to examine the possibility of using an IR camera for the detection of a temperature trace on a human body. In contrast to a passive THz camera, the IR camera does not reveal an object under clothing very distinctly. Of course, this is a big disadvantage for a security solution based on the IR camera. To find possible ways of overcoming this disadvantage, we performed experiments with an IR camera produced by FLIR and developed a novel approach for computer processing of the images it captures. This allows us to increase the temperature resolution of the IR camera, as well as to enhance the effective sensitivity of the human eye viewing its images. As a consequence, it becomes possible to see a change of body temperature through clothing. We analyze IR images of a person who drinks water and eats chocolate, and we follow the temperature trace on the skin caused by temperature changes inside the body. Some experiments were also made observing the temperature trace of objects placed behind a thick overall. The demonstrated results are very important for the detection of forbidden objects concealed inside the human body by non-destructive control without the use of X-rays.
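The abstract does not specify its processing method. As one plausible (assumed) illustration of how computer processing can push an IR camera below its single-frame temperature resolution (NETD), the numpy sketch below simply averages frames of a static scene, shrinking uncorrelated noise by roughly the square root of the frame count; the noise figures are assumed, not FLIR specifications.

```python
# Frame averaging to improve effective temperature resolution of a
# static IR scene: uncorrelated noise drops by ~sqrt(N) for N frames.
import numpy as np

rng = np.random.default_rng(42)
true_scene = np.full((240, 320), 36.6)        # skin temperature, deg C
true_scene[100:120, 150:170] += 0.05          # faint 0.05 K trace

netd = 0.08                                    # per-frame noise (assumed), K
frames = true_scene + rng.normal(0.0, netd, size=(64,) + true_scene.shape)

single = frames[0]
stacked = frames.mean(axis=0)                  # 64-frame average

for name, img in [("single frame", single), ("64-frame average", stacked)]:
    noise = np.std(img[:50, :50])              # background patch
    print(f"{name}: residual noise ~ {noise:.3f} K")
# The 0.05 K trace is buried at 0.08 K noise but visible at ~0.01 K.
```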
2014-05-07
View of the High Definition Earth Viewing (HDEV) flight assembly installed on the exterior of the Columbus European Laboratory module. The image was released by an astronaut on Twitter. The High Definition Earth Viewing (HDEV) experiment places four commercially available HD cameras on the exterior of the space station and uses them to stream live video of Earth for viewing online. The cameras are enclosed in a temperature-specific housing and are exposed to the harsh radiation of space. Analysis of the effect of space on the video quality, over the time HDEV is operational, may help engineers decide which cameras are the best types to use on future missions. High school students helped design some of the cameras' components through the High Schools United with NASA to Create Hardware (HUNCH) program, and student teams operate the experiment.
Experiments with synchronized sCMOS cameras
NASA Astrophysics Data System (ADS)
Steele, Iain A.; Jermak, Helen; Copperwheat, Chris M.; Smith, Robert J.; Poshyachinda, Saran; Soonthorntham, Boonrucksar
2016-07-01
Scientific CMOS (sCMOS) cameras can combine low noise with high readout speeds, and they do not suffer the charge multiplication noise that effectively reduces the quantum efficiency of electron-multiplying CCDs by a factor of 2. As such they have strong potential in fast photometry and polarimetry instrumentation. In this paper we describe the results of laboratory experiments using a pair of commercial off-the-shelf sCMOS cameras based around a 4-transistor-per-pixel architecture. In particular, using both stable and pulsed light sources, we evaluate the timing precision that may be obtained when the camera readouts are synchronized either in software or electronically. We find that software synchronization can introduce an error of ~200 msec. With electronic synchronization, any error is below the limit (~50 msec) of our simple measurement technique.
Speech versus manual control of camera functions during a telerobotic task
NASA Technical Reports Server (NTRS)
Bierschwale, John M.; Sampaio, Carlos E.; Stuart, Mark A.; Smith, Randy L.
1993-01-01
This investigation has evaluated the voice-commanded camera control concept. For this particular task, total voice control of continuous and discrete camera functions was significantly slower than manual control. There was no significant difference between voice and manual input for several types of errors. There was not a clear trend in subjective preference of camera command input modality. Task performance, in terms of both accuracy and speed, was very similar across both levels of experience.
NASA Technical Reports Server (NTRS)
1992-01-01
The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single lens, reflex viewing design with a 15 perforation per frame horizontal pull across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.
Electronic Still Camera view of Aft end of Wide Field/Planetary Camera in HST
1993-12-06
S61-E-015 (6 Dec 1993) --- A close-up view of the aft part of the new Wide Field/Planetary Camera (WFPC-II) installed on the Hubble Space Telescope (HST). WFPC-II was photographed with the Electronic Still Camera (ESC) from inside Endeavour's cabin as astronauts F. Story Musgrave and Jeffrey A. Hoffman moved it from its stowage position onto the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
NASA Astrophysics Data System (ADS)
Duan, Yaxuan; Xu, Songbo; Yuan, Suochao; Chen, Yongquan; Li, Hongguang; Da, Zhengshang; Gao, Limin
2018-01-01
The ISO 12233 slanted-edge method suffers errors when using the fast Fourier transform (FFT) in camera modulation transfer function (MTF) measurement, because tilt angle errors in the knife edge result in nonuniform sampling of the edge spread function (ESF). To resolve this problem, a modified slanted-edge method using the nonuniform fast Fourier transform (NUFFT) for camera MTF measurement is proposed. Theoretical simulations for noisy images at different nonuniform sampling rates of the ESF were performed using the proposed modified slanted-edge method. It is shown that the proposed method successfully eliminates the error due to the nonuniform sampling of the ESF. An experimental setup for camera MTF measurement was established to verify the accuracy of the proposed method. The experimental results show that, under different nonuniform sampling rates of the ESF, the proposed modified slanted-edge method has improved accuracy for camera MTF measurement compared to the ISO 12233 slanted-edge method.
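For orientation, the standard slanted-edge computation (before the paper's NUFFT modification) proceeds ESF → LSF → MTF. The Python sketch below uses uniform rebinning of the projected pixel values, which is exactly the stage the paper replaces with a NUFFT; the angle, sizes, and synthetic edge are illustrative.

```python
# Condensed slanted-edge MTF sketch: project pixels onto the edge normal,
# bin into an oversampled ESF, differentiate to the LSF, |FFT| -> MTF.
import numpy as np
from scipy.ndimage import gaussian_filter

def slanted_edge_mtf(img, edge_angle_deg, oversample=4):
    h, w = img.shape
    theta = np.radians(edge_angle_deg)
    # Signed distance of every pixel from the (assumed known) edge line.
    yy, xx = np.mgrid[0:h, 0:w]
    dist = (xx - w / 2) * np.cos(theta) - (yy - h / 2) * np.sin(theta)
    # Bin pixel values by distance -> oversampled ESF (uniform rebinning).
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    esf = np.bincount(bins.ravel(), weights=img.ravel()) / \
          np.maximum(np.bincount(bins.ravel()), 1)
    lsf = np.diff(esf) * np.hanning(len(esf) - 1)     # window the LSF
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# Synthetic 5-degree slanted edge blurred by a Gaussian PSF.
h = w = 128
yy, xx = np.mgrid[0:h, 0:w]
edge = ((xx - w/2) * np.cos(np.radians(5)) -
        (yy - h/2) * np.sin(np.radians(5)) > 0)
img = gaussian_filter(edge.astype(float), sigma=1.5)
print(slanted_edge_mtf(img, 5.0)[:5])   # MTF falls off from 1.0
```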
Comparison of three different techniques for camera and motion control of a teleoperated robot.
Doisy, Guillaume; Ronen, Adi; Edan, Yael
2017-01-01
This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, head tracking in a non-invasive way, without immersive virtual reality devices, was combined and compared with classical control modes for robot movements and camera control. Three control conditions were tested: 1) classical joystick control of both the movements of the robot and the robot camera; 2) robot movements controlled by a joystick and the robot camera controlled by the user's head orientation; and 3) robot movements controlled by hand gestures and the robot camera controlled by the user's head orientation. Performance and workload metrics, and their evolution as the participants gained experience with the system, were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface.
NASA Astrophysics Data System (ADS)
McIntosh, Benjamin Patrick
Blindness due to Age-Related Macular Degeneration and Retinitis Pigmentosa is unfortunately both widespread and largely incurable. Advances in visual prostheses that can restore functional vision in those afflicted by these diseases have evolved rapidly from new areas of research in ophthalmology and biomedical engineering. This thesis is focused on further advancing the state-of-the-art of both visual prostheses and implantable biomedical devices. A novel real-time system with a high performance head-mounted display is described that enables enhanced realistic simulation of intraocular retinal prostheses. A set of visual psychophysics experiments is presented using the visual prosthesis simulator that quantify, in several ways, the benefit of foveation afforded by an eye-pointed camera (such as an eye-tracked extraocular camera or an implantable intraocular camera) as compared with a head-pointed camera. A visual search experiment demonstrates a significant improvement in the time to locate a target on a screen when using an eye-pointed camera. A reach and grasp experiment demonstrates a 20% to 70% improvement in time to grasp an object when using an eye-pointed camera, with the improvement maximized when the percept is blurred. A navigation and mobility experiment shows a 10% faster walking speed and a 50% better ability to avoid obstacles when using an eye-pointed camera. Improvements to implantable biomedical devices are also described, including the design and testing of VLSI-integrable positive mobile ion contamination sensors and humidity sensors that can validate the hermeticity of biomedical device packages encapsulated by hermetic coatings, and can provide early warning of leaks or contamination that may jeopardize the implant. The positive mobile ion contamination sensors are shown to be sensitive to externally applied contamination. A model is proposed to describe sensitivity as a function of device geometry, and verified experimentally. Guidelines are provided on the use of spare CMOS oxide and metal layers to maximize the hermeticity of an implantable microchip. In addition, results are presented on the design and testing of small form factor, very low power, integrated CMOS clock generation circuits that are stable enough to drive commercial image sensor arrays, and therefore can be incorporated in an intraocular camera for retinal prostheses.
An Inexpensive Digital Infrared Camera
ERIC Educational Resources Information Center
Mills, Allan
2012-01-01
Details are given for the conversion of an inexpensive webcam to a camera specifically sensitive to the near infrared (700-1000 nm). Some experiments and practical applications are suggested and illustrated. (Contains 9 figures.)
A detailed comparison of single-camera light-field PIV and tomographic PIV
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.
2018-03-01
This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, which extensively examine the difference between these two techniques by varying key parameters such as pixel to microlens ratio (PMR), light-field to Tomo-camera pixel ratio (LTPR), particle seeding density, and tomographic camera number. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.
High spatial resolution infrared camera as ISS external experiment
NASA Astrophysics Data System (ADS)
Eckehard, Lorenz; Frerker, Hap; Fitch, Robert Alan
A high spatial resolution infrared camera as an ISS external experiment for monitoring global climate changes uses ISS internal and external resources (e.g., data storage). The optical experiment will consist of an infrared camera for monitoring global climate changes from the ISS. This technology was evaluated by the German small satellite mission BIRD and further developed in different ESA projects. Compared to BIRD, the presented instrument uses proven, advanced sensor technologies (ISS external) and ISS on-board processing and storage capabilities (internal). The instrument will be equipped with a serial interface for TM/TC and several relay commands for the power supply. For data processing and storage, a mass memory is required. Access to actual attitude data is highly desired to produce geo-referenced maps, if possible by on-board processing.
GEMINI-TITAN (GT)-11 - MISC. EXPERIMENTS - MSC
1966-03-22
S66-02611 (22 March 1966) --- Gemini-11 Experiment S-13 Ultraviolet Astronomical Camera. It will be used to test the techniques of ultraviolet photography under vacuum conditions and obtain ultraviolet radiation observations of stars in the wavelength region of 2,000 to 4,000 Angstroms by spectral means. Equipment is the Maurer 70mm camera with UV lens (f3.3) and magazine, objective grating and objective prism, extended shutter actuator, and mounting bracket. For the experiment, the camera is mounted on the centerline torque box to point through the opened right-hand hatch. Propellant expenditure is estimated at 4.5 pounds per night pass. Two night passes will be used to photograph probably six star fields. Sponsors are NASA's Office of Space Science and Applications and Northwestern University. Photo credit: NASA
NASA Astrophysics Data System (ADS)
Shao, Xinxing; Zhu, Feipeng; Su, Zhilong; Dai, Xiangjun; Chen, Zhenning; He, Xiaoyuan
2018-03-01
The strain errors in stereo-digital image correlation (DIC) due to camera calibration were investigated using precisely controlled numerical experiments and real experiments. Three-dimensional rigid body motion tests were conducted to examine the effects of camera calibration on the measured results. For a fully accurate calibration, rigid body motion causes negligible strain errors. However, for inaccurately calibrated camera parameters and a short working distance, rigid body motion will lead to more than 50-μɛ strain errors, which significantly affects the measurement. In practical measurements, it is impossible to obtain a fully accurate calibration; therefore, considerable attention should be focused on attempting to avoid these types of errors, especially for high-accuracy strain measurements. It is necessary to avoid large rigid body motions in both two-dimensional DIC and stereo-DIC.
General Model of Photon-Pair Detection with an Image Sensor
NASA Astrophysics Data System (ADS)
Defienne, Hugo; Reichert, Matthew; Fleischer, Jason W.
2018-05-01
We develop an analytic model that relates intensity correlation measurements performed by an image sensor to the properties of photon pairs illuminating it. Experiments using an effective single-photon counting camera, a linear electron-multiplying charge-coupled device camera, and a standard CCD camera confirm the model. The results open the field of quantum optical sensing using conventional detectors.
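The covariance estimator at the heart of such measurements is easy to state in code. Below is a minimal numpy sketch of the generic pixel-pair intensity covariance G(i, j) = <I_i I_j> - <I_i><I_j> computed from a stack of frames; the synthetic Poisson frames and the pixel count are placeholders, and the paper's full analytic model relating G to the photon-pair properties is not reproduced here.

```python
import numpy as np

def intensity_covariance(frames):
    """Estimate the pixel-pair intensity covariance
    G(i, j) = <I_i I_j> - <I_i><I_j> from a stack of frames.

    frames: array of shape (n_frames, n_pixels), one flattened image per row.
    Returns an (n_pixels, n_pixels) covariance matrix.
    """
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0)                  # <I_i>
    corr = frames.T @ frames / frames.shape[0]  # <I_i I_j>
    return corr - np.outer(mean, mean)          # G(i, j)

# Toy usage with synthetic frames; genuinely correlated pixel pairs would
# appear as positive off-diagonal entries above the shot-noise background.
rng = np.random.default_rng(0)
frames = rng.poisson(5.0, size=(10000, 64))
G = intensity_covariance(frames)
```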
Non-invasive diagnostics of ion beams in strong toroidal magnetic fields with standard CMOS cameras
NASA Astrophysics Data System (ADS)
Ates, Adem; Ates, Yakup; Niebuhr, Heiko; Ratzinger, Ulrich
2018-01-01
A superconducting Figure-8 stellarator-type magnetostatic Storage Ring (F8SR) is under investigation at the Institute for Applied Physics (IAP) at Goethe University Frankfurt. Besides numerical simulations of an optimized design for beam transport and injection, a scaled-down (0.6 T) experiment with two 30° toroidal magnets has been set up for further investigations. A great challenge is the development of a non-destructive, magnetically insensitive and flexible detector for local investigations of an ion beam propagating through the toroidal magnetostatic field. This paper introduces a new way of measuring the beam path by residual gas monitoring. It uses a single-board camera connected to a standard single-board computer by a camera serial interface, all placed inside the vacuum chamber. First experiments were done with one camera; in a next step, two cameras arranged at 90° were installed. With the help of the two cameras, which are movable along the beam pipe, the theoretical predictions were successfully verified experimentally, confirming previous experimental results. The transport of H+ and H2+ ion beams with energies of 7 keV and beam currents of about 1 mA was investigated successfully.
Using hacked point and shoot cameras for time-lapse snow cover monitoring in an Alpine valley
NASA Astrophysics Data System (ADS)
Weijs, S. V.; Diebold, M.; Mutzner, R.; Golay, J. R.; Parlange, M. B.
2012-04-01
In Alpine environments, monitoring snow cover is essential to gain insight into the hydrological processes and water balance. Although measurement techniques based on LIDAR are available, their cost is often a restricting factor. In this research, an experiment was done using a distributed array of cheap consumer cameras to gain insight into the spatio-temporal evolution of the snowpack. Two experiments are planned. The first involves the measurement of aeolian snow transport around a hill, to validate a snow saltation model. The second monitors the snowmelt during the melting season, which can then be combined with data from a wireless network of meteorological stations and discharge measurements at the outlet of the catchment. The poster describes the hardware and software setup, based on an external timer circuit and CHDK, the Canon Hack Development Kit. The latter is a flexible and evolving software package, released under a GPL license. It was developed by hackers who reverse engineered the firmware of the camera and added extra functionality such as raw image output, fuller control of the camera, external triggering, motion detection, and scripting. These features make it a great tool for the geosciences. Possible other applications include aerial stereo photography and monitoring vegetation response. We are interested in sharing experiences and brainstorming about new applications. Bring your camera!
X-ray pinhole camera setups used in the Atomki ECR Laboratory for plasma diagnostics.
Rácz, R; Biri, S; Pálinkás, J; Mascali, D; Castro, G; Caliri, C; Romano, F P; Gammino, S
2016-02-01
Imaging of electron cyclotron resonance (ECR) plasmas using a CCD camera in combination with a pinhole is a non-destructive diagnostic method to record the strongly inhomogeneous spatial density distribution of the X-rays emitted by the plasma and by the chamber walls. This method can provide information on the location of the collisions between warm electrons and multiply charged ions/atoms, opening the possibility to investigate the direct effect of the ion source tuning parameters on the plasma structure. The first successful experiment with a pinhole X-ray camera was carried out in the Atomki ECR Laboratory more than 10 years ago. The goal of that experiment was to make the first ECR X-ray photos and to carry out simple studies on the effect of some setting parameters (magnetic field, extraction, disc voltage, gas mixing, etc.). Recently, intensive efforts were made to investigate the effect of different RF resonant modes on the plasma structure. Compared to the 2002 experiment, this campaign used a wider range of instruments: a CCD camera with a lead pinhole was placed at the injection side, allowing X-ray imaging and beam extraction simultaneously. Additionally, Silicon Drift Detector (SDD) and High Purity Germanium (HPGe) detectors were installed to characterize the volumetric X-ray emission rate caused by the warm and hot electron domains. In this paper, a detailed comparative study of the two X-ray camera and detector setups, and of the technical and scientific goals of the experiments, is presented.
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2015-05-01
As is well known, the application of passive THz cameras to security problems is a very promising approach. It allows concealed objects to be seen without contact, and the camera poses no danger to the person under inspection. In previous papers, we demonstrated a new use of the passive THz camera: observing temperature differences on human skin caused by different temperatures inside the body. To validate this claim, we performed a similar physical experiment using an IR camera. We show that a temperature trace appears on the skin of the human body when the temperature inside the body changes after drinking water. We used both the image-processing software supplied with a commercially available IR camera manufactured by FLIR and our own computer code for processing these images. Using both codes, we clearly demonstrate the change in human skin temperature induced by drinking water. These phenomena are very important for detecting forbidden samples and substances concealed inside the human body by non-destructive inspection without the use of X-rays. Earlier, we demonstrated this possibility using THz radiation. The experiments carried out here are relevant to counter-terrorism applications. We developed original filters for computer processing of images captured by IR cameras; applying them enhances the effective temperature resolution of the cameras.
Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping
2017-04-03
Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. However, simulations of plenoptic camera models can be used prior to the experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays, based on the established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flames is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging screen utilization, and shooting range of depth of field.
Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Kil-Byoung; Bellan, Paul M.
2013-12-15
An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10^6 frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.
W-Band Free Space Permittivity Measurement Setup for Candidate Radome Materials
NASA Technical Reports Server (NTRS)
Fralick, Dion T.
1997-01-01
This paper presents a measurement system used for W-band complex permittivity measurements performed in NASA Langley Research Center's Electromagnetics Research Branch. The system was used to characterize candidate radome materials for the passive millimeter wave (PMMW) camera experiment. The PMMW camera is a new technology sensor, with goals of all-weather landings of civilian and military aircraft. The sensor is being developed under a NASA Technology Reinvestment program with TRW, McDonnell Douglas, Honeywell, and Composite Optics, Inc. as participants. The experiment is scheduled to be flight tested on the Air Force's 'Speckled Trout' aircraft in late 1997. The camera operates at W-band, in a radiometric capacity, and generates an image of the viewable field. Because the camera is a radiometer, the system is very sensitive to losses. Minimal transmission loss through the radome at the operating frequency, 89 GHz, was critical to the success of the experiment. This paper details the design, setup, calibration and operation of a free space measurement system developed and used to characterize the candidate radome materials for this program.
ASTRONAUT COOPER, GORDON L. - TRAINING - MERCURY-ATLAS (MA)-9 - CAMERA
1963-03-01
S63-03952 (1963) --- Astronaut L. Gordon Cooper Jr. explains the 16mm handheld spacecraft camera to his backup pilot astronaut Alan Shepard. The camera, designed by J.R. Hereford of McDonnell Aircraft Corp., will be used by Cooper during the Mercury-Atlas 9 (MA-9) mission to photograph experiments in space for M.I.T. and the Weather Bureau. Photo credit: NASA
Seeing in a different light—using an infrared camera to teach heat transfer and optical phenomena
NASA Astrophysics Data System (ADS)
Pei Wong, Choun; Subramaniam, R.
2018-05-01
The infrared camera is a useful tool in physics education to ‘see’ in the infrared. In this paper, we describe four simple experiments that focus on phenomena related to heat transfer and optics that are encountered at undergraduate physics level using an infrared camera, and discuss the strengths and limitations of this tool for such purposes.
Seeing in a Different Light--Using an Infrared Camera to Teach Heat Transfer and Optical Phenomena
ERIC Educational Resources Information Center
Wong, Choun Pei; Subramaniam, R.
2018-01-01
The infrared camera is a useful tool in physics education to 'see' in the infrared. In this paper, we describe four simple experiments that focus on phenomena related to heat transfer and optics that are encountered at undergraduate physics level using an infrared camera, and discuss the strengths and limitations of this tool for such purposes.
NASA Technical Reports Server (NTRS)
Marsh, J. G.; Douglas, B. C.; Walls, D. M.
1974-01-01
Laser and camera data taken during the International Satellite Geodesy Experiment (ISAGEX) were used in dynamical solutions to obtain center-of-mass coordinates for the Astro-Soviet camera sites at Helwan, Egypt, and Oulan Bator, Mongolia, as well as the East European camera sites at Potsdam, German Democratic Republic, and Ondrejov, Czechoslovakia. The results are accurate to about 20m in each coordinate. The orbit of PEOLE (i=15) was also determined from ISAGEX data. Mean Kepler elements suitable for geodynamic investigations are presented.
Low Cost Wireless Network Camera Sensors for Traffic Monitoring
DOT National Transportation Integrated Search
2012-07-01
Many freeways and arterials in major cities in Texas are presently equipped with video detection cameras to collect data and help in traffic/incident management. In this study, carefully controlled experiments determined the throughput and output...
Camera Ready to Install on Mars Reconnaissance Orbiter
2005-01-07
A telescopic camera called the High Resolution Imaging Science Experiment, or HiRISE (right), was installed onto the main structure of NASA's Mars Reconnaissance Orbiter (left) on Dec. 11, 2004, at Lockheed Martin Space Systems, Denver.
NASA Astrophysics Data System (ADS)
Taggart, D. P.; Gribble, R. J.; Bailey, A. D., III; Sugimoto, S.
Recently, a prototype soft x-ray pinhole camera was fielded on FRX-C/LSM at Los Alamos and TRX at Spectra Technology. The soft x-ray FRC images obtained using this camera stand out in high contrast to their surroundings. It was particularly useful for studying the FRC during and shortly after formation when, at certain operating conditions, flute-like structures at the edge and internal structures of the FRC were observed which other diagnostics could not resolve. Building on this early experience, a new soft x-ray pinhole camera was installed on FRX-C/LSM, which permits more rapid data acquisition and briefer exposures. It will be used to continue studying FRC formation and to look for internal structure later in time which could be a signature of instability. The initial operation of this camera is summarized.
Soft X-ray streak camera for laser fusion applications
NASA Astrophysics Data System (ADS)
Stradling, G. L.
1981-04-01
The development and significance of the soft x-ray streak camera (SXRSC) in the context of inertial confinement fusion energy development is reviewed as well as laser fusion and laser fusion diagnostics. The SXRSC design criteria, the requirement for a subkilovolt x-ray transmitting window, and the resulting camera design are explained. Theory and design of reflector-filter pair combinations for three subkilovolt channels centered at 220 eV, 460 eV, and 620 eV are also presented. Calibration experiments are explained and data showing a dynamic range of 1000 and a sweep speed of 134 psec/mm are presented. Sensitivity modifications to the soft x-ray streak camera for a high-power target shot are described. A preliminary investigation, using a stepped cathode, of the thickness dependence of the gold photocathode response is discussed. Data from a typical Argus laser gold-disk target experiment are shown.
Wrist Camera Orientation for Effective Telerobotic Orbital Replaceable Unit (ORU) Changeout
NASA Technical Reports Server (NTRS)
Jones, Sharon Monica; Aldridge, Hal A.; Vazquez, Sixto L.
1997-01-01
The Hydraulic Manipulator Testbed (HMTB) is the kinematic replica of the Flight Telerobotic Servicer (FTS). One use of the HMTB is to evaluate advanced control techniques for accomplishing robotic maintenance tasks on board the Space Station. Most maintenance tasks involve the direct manipulation of the robot by a human operator, for which high-quality visual feedback is important for precise control. An experiment was conducted in the Systems Integration Branch at the Langley Research Center to compare several configurations of the manipulator wrist camera for providing visual feedback during an Orbital Replaceable Unit changeout task. Several variables were considered, such as wrist camera angle, camera focal length, target location, and lighting. Each study participant performed the maintenance task by using eight combinations of the variables based on a Latin square design. The results of this experiment and conclusions based on the data collected are presented.
Timing generator of scientific grade CCD camera and its implementation based on FPGA technology
NASA Astrophysics Data System (ADS)
Si, Guoliang; Li, Yunfei; Guo, Yongfei
2010-10-01
The functions of the timing generator of a scientific-grade CCD camera are briefly presented: it generates the various pulse sequences for the TDI-CCD, the video processor, and imaging data output, acting as the synchronous timing coordinator of the CCD imaging unit. The IL-E2 TDI-CCD sensor produced by DALSA is used in the scientific-grade CCD camera. The driving schedules of the IL-E2 TDI-CCD sensor were examined in detail, and the timing generator was designed accordingly. An FPGA was chosen as the hardware design platform, and the timing generator was described in VHDL. The design successfully passed functional simulation with EDA software and was fitted into an XC2VP20-FF1152, an FPGA product made by Xilinx. The experiments indicate that the new method improves the level of system integration: high reliability, stability, and low power consumption of the scientific-grade CCD camera system are achieved, while the design and experiment cycle is sharply shortened.
InfraCAM (trade mark): A Hand-Held Commercial Infrared Camera Modified for Spaceborne Applications
NASA Technical Reports Server (NTRS)
Manitakos, Daniel; Jones, Jeffrey; Melikian, Simon
1996-01-01
In 1994, Inframetrics introduced the InfraCAM(TM), a high resolution hand-held thermal imager. As the world's smallest, lightest and lowest power PtSi-based infrared camera, the InfraCAM is ideal for a wide range of industrial, non-destructive testing, surveillance and scientific applications. In addition to numerous commercial applications, the light weight and low power consumption of the InfraCAM make it extremely valuable for adaptation to spaceborne applications. Consequently, the InfraCAM has been selected by NASA Lewis Research Center (LeRC) in Cleveland, Ohio, for use as part of the DARTFire (Diffusive and Radiative Transport in Fires) spaceborne experiment. In this experiment, a solid fuel is ignited in a low gravity environment. The combustion period is recorded by both visible and infrared cameras. The infrared camera measures the emission from polymethyl methacrylate (PMMA) and combustion products in six distinct narrow spectral bands. Four cameras successfully completed all qualification tests at Inframetrics and at NASA Lewis. They are presently being used for ground based testing in preparation for space flight in the fall of 1995.
An attentive multi-camera system
NASA Astrophysics Data System (ADS)
Napoletano, Paolo; Tisato, Francesco
2014-03-01
Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be revised by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying videos from each camera of the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of a video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that tries to use a model of human visual attention for the dynamic selection of the camera view of a multi-camera system. The proposed method was evaluated in a given scenario and demonstrated its effectiveness with respect to other methods and a manually generated ground truth. The effectiveness was evaluated in terms of the number of correct best-views generated by the method with respect to the camera views manually generated by a human operator.
Low-cost digital dynamic visualization system
NASA Astrophysics Data System (ADS)
Asundi, Anand K.; Sajan, M. R.
1995-05-01
High speed photographic systems like the image rotation camera, the Cranz-Schardin camera and the drum camera are typically used for recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording systems requiring time consuming and tedious wet processing of the films. Currently, digital cameras are replacing conventional cameras to a certain extent for static experiments. Recently, there has been considerable interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration (TDI) mode for digitally recording dynamic scenes. Applications in solid as well as fluid impact problems are presented.
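TDI recording of a moving scene only works when the row-transfer rate is matched to the image motion. The standard matching condition (our summary; the abstract does not state it) is

\[
f_{\text{line}} = \frac{M\, v}{p},
\]

where v is the object velocity, M the optical magnification, and p the pixel pitch along the transfer direction; any mismatch smears the recorded image along that direction.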
IMAX and Nikon Camera Sensor Cleaning
2015-01-25
ISS042E182382 (01/25/2015) --- US astronaut Barry "Butch" Wilmore inspects one of the cameras aboard the International Space Station Jan. 25, 2015, in preparation for another photo session of station experiments. Wilmore is the commander of Expedition 42.
A simple demonstration when studying the equivalence principle
NASA Astrophysics Data System (ADS)
Mayer, Valery; Varaksina, Ekaterina
2016-06-01
The paper proposes a lecture experiment that can be demonstrated when studying the equivalence principle formulated by Albert Einstein. The demonstration consists of creating stroboscopic photographs of a ball moving along a parabola in Earth's gravitational field. In the first experiment, a camera is stationary relative to Earth's surface. In the second, the camera falls freely downwards with the ball, allowing students to see that the ball moves uniformly and rectilinearly relative to the frame of reference of the freely falling camera. The equivalence principle explains this result, as it is always possible to propose an inertial frame of reference for a small region of a gravitational field, where space-time effects of curvature are negligible.
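The demonstration can be summarized in two lines of kinematics (our addition, consistent with the description above). In the ground frame the ball follows

\[
x(t) = v_{0x} t, \qquad y(t) = v_{0y} t - \tfrac{1}{2} g t^2,
\]

while the freely falling camera follows \( y_c(t) = -\tfrac{1}{2} g t^2 \). In the camera's frame the relative coordinate is

\[
y'(t) = y(t) - y_c(t) = v_{0y} t,
\]

i.e., uniform rectilinear motion, which is exactly what the stroboscopic photographs taken by the falling camera show.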
Attitude identification for SCOLE using two infrared cameras
NASA Technical Reports Server (NTRS)
Shenhar, Joram
1991-01-01
An algorithm is presented that incorporates real time data from two infrared cameras and computes the attitude parameters of the Spacecraft COntrol Lab Experiment (SCOLE), a lab apparatus representing an offset feed antenna attached to the Space Shuttle by a flexible mast. The algorithm uses camera position data of three miniature light emitting diodes (LEDs), mounted on the SCOLE platform, permitting arbitrary camera placement and an on-line attitude extraction. The continuous nature of the algorithm allows identification of the placement of the two cameras with respect to some initial position of the three reference LEDs, followed by on-line six degrees of freedom attitude tracking, regardless of the attitude time history. A description is provided of the algorithm in the camera identification mode as well as the mode of target tracking. Experimental data from a reduced size SCOLE-like lab model, reflecting the performance of the camera identification and the tracking processes, are presented. Computer code for camera placement identification and SCOLE attitude tracking is listed.
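The abstract does not spell out how the attitude is computed from the three LED positions; a common choice for recovering a rigid-body pose from triangulated marker coordinates is the SVD-based Kabsch algorithm, sketched below under that assumption (the marker arrays are hypothetical inputs, not SCOLE data).

```python
import numpy as np

def rigid_pose(ref, obs):
    """Best-fit rotation R and translation t mapping reference marker
    coordinates to observed ones (Kabsch algorithm): obs ~ R @ ref + t.

    ref, obs: (n_markers, 3) arrays of 3D points, n_markers >= 3.
    """
    ref_c, obs_c = ref.mean(axis=0), obs.mean(axis=0)
    H = (ref - ref_c).T @ (obs - obs_c)       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = obs_c - R @ ref_c
    return R, t
```

Applied once per video frame to the triangulated LED positions, this yields the six-degree-of-freedom attitude history regardless of the motion's time history.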
Real-time vehicle matching for multi-camera tunnel surveillance
NASA Astrophysics Data System (ADS)
Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried
2011-03-01
Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across the cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras which observe dozens of vehicles each, for a real-time performance computational efficiency is essential. In this paper, we propose a low complexity, yet highly accurate method for vehicle matching using vehicle signatures composed of Radon transform like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm, by the camera software itself and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
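A minimal sketch of such scan-line signatures is given below, assuming grayscale vehicle crops; the fixed signature length and the normalized-correlation matching rule are illustrative choices, not necessarily the paper's exact parameters.

```python
import numpy as np

def signature(img, n_bins=64):
    """Horizontal and vertical projection profiles of a grayscale
    vehicle crop, resampled to a fixed length and normalized."""
    img = np.asarray(img, dtype=float)
    profiles = []
    for axis in (0, 1):                        # column sums, then row sums
        p = img.sum(axis=axis)
        p = np.interp(np.linspace(0, len(p) - 1, n_bins),
                      np.arange(len(p)), p)    # fixed-length resampling
        p = (p - p.mean()) / (p.std() + 1e-9)  # zero mean, unit variance
        profiles.append(p)
    return np.concatenate(profiles)

def match_score(sig_a, sig_b):
    """Normalized correlation between two signatures (1.0 = identical)."""
    return float(sig_a @ sig_b) / len(sig_a)
```

The signature is tiny compared with the full image, which is what relaxes the data-link requirements between cameras and the central server.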
Effect of image scaling on stereoscopic movie experience
NASA Astrophysics Data System (ADS)
Häkkinen, Jukka P.; Hakala, Jussi; Hannuksela, Miska; Oittinen, Pirkko
2011-03-01
Camera separation affects the perceived depth in stereoscopic movies. Through control of the separation and thereby the depth magnitudes, the movie can be kept comfortable but interesting. In addition, the viewing context has a significant effect on the perceived depth, as a larger display and longer viewing distances also contribute to an increase in depth. Thus, if the content is to be viewed in multiple viewing contexts, the depth magnitudes should be carefully planned so that the content always looks acceptable. Alternatively, the content can be modified for each viewing situation. To identify the significance of changes due to the viewing context, we studied the effect of stereoscopic camera base distance on the viewer experience in three different situations: 1) small sized video and a viewing distance of 38 cm, 2) television and a viewing distance of 158 cm, and 3) cinema and a viewing distance of 6-19 meters. We examined three different animations with positive parallax. The results showed that the camera distance had a significant effect on the viewing experience in small display/short viewing distance situations, in which the experience ratings increased until the maximum disparity in the scene was 0.34 - 0.45 degrees of visual angle. After 0.45 degrees, increasing the depth magnitude did not affect the experienced quality ratings. Interestingly, changes in the camera distance did not affect the experience ratings in the case of television or cinema if the depth magnitudes were below one degree of visual angle. When the depth was greater than one degree, the experience ratings began to drop significantly. These results indicate that depth magnitudes have a larger effect on the viewing experience with a small display. When a stereoscopic movie is viewed from a larger display, other experiences might override the effect of depth magnitudes.
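For reference, on-screen parallax p converts to visual angle at viewing distance D by the standard relation (our addition, not from the paper):

\[
\theta = 2 \arctan\!\left( \frac{p}{2D} \right),
\]

so the one-degree limit reported for television viewing at D = 158 cm corresponds to roughly p = 2 x 158 x tan(0.5°), about 2.8 cm of screen parallax.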
X-ray pinhole camera setups used in the Atomki ECR Laboratory for plasma diagnostics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rácz, R., E-mail: rracz@atomki.hu; Biri, S.; Pálinkás, J.
Imaging of electron cyclotron resonance (ECR) plasmas using a CCD camera in combination with a pinhole is a non-destructive diagnostic method to record the strongly inhomogeneous spatial density distribution of the X-rays emitted by the plasma and by the chamber walls. This method can provide information on the location of the collisions between warm electrons and multiply charged ions/atoms, opening the possibility to investigate the direct effect of the ion source tuning parameters on the plasma structure. The first successful experiment with a pinhole X-ray camera was carried out in the Atomki ECR Laboratory more than 10 years ago. The goal of that experiment was to make the first ECR X-ray photos and to carry out simple studies on the effect of some setting parameters (magnetic field, extraction, disc voltage, gas mixing, etc.). Recently, intensive efforts were made to investigate the effect of different RF resonant modes on the plasma structure. Compared to the 2002 experiment, this campaign used a wider range of instruments: a CCD camera with a lead pinhole was placed at the injection side, allowing X-ray imaging and beam extraction simultaneously. Additionally, Silicon Drift Detector (SDD) and High Purity Germanium (HPGe) detectors were installed to characterize the volumetric X-ray emission rate caused by the warm and hot electron domains. In this paper, a detailed comparative study of the two X-ray camera and detector setups, and of the technical and scientific goals of the experiments, is presented.
ERIC Educational Resources Information Center
Northcote, Maria
2011-01-01
Digital cameras are now commonplace in many classrooms and in the lives of many children in early childhood centres and primary schools. They are regularly used by adults and teachers for "saving special moments and documenting experiences." The use of previously expensive photographic and recording equipment has often remained in the domain of…
Engineering design criteria for an image intensifier/image converter camera
NASA Technical Reports Server (NTRS)
Sharpsteen, J. T.; Lund, D. L.; Stoap, L. J.; Solheim, C. D.
1976-01-01
The design, display, and evaluation of an image intensifier/image converter camera which can be utilized to meet various requirements of space shuttle experiments are described. An image intensifier tube was utilized in combination with two brassboards as a power supply and used for evaluation of night photography in the field. Pictures were obtained showing field details which would have been indistinguishable to the naked eye or to an ordinary camera.
Teacher-in-Space Trainees - Arriflex Motion Picture Camera
1985-09-20
S85-40668 (18 Sept. 1985) --- The two teachers, Sharon Christa McAuliffe (left) and Barbara R. Morgan, get hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985, with the STS-51L crew, learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. Photo credit: NASA
Sambot II: A self-assembly modular swarm robot
NASA Astrophysics Data System (ADS)
Zhang, Yuchao; Wei, Hongxing; Yang, Bo; Jiang, Cancan
2018-04-01
Sambot II, a new generation of the self-assembly modular swarm robot based on the original Sambot and adopting a laser and camera module for information collection, is introduced in this manuscript. The visual control algorithm of Sambot II is detailed, and the feasibility of the algorithm is verified by laser and camera experiments. At the end of this manuscript, autonomous docking experiments with two Sambot II robots are presented. The results of the experiments are shown and analyzed to verify the feasibility of the whole Sambot II scheme.
Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted
2012-12-01
We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge coupled device design in a frequency-domain FLIM system. The first stage of evaluation of the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to perform lifetime measurements with our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.
Raspberry Pi camera with intervalometer used as crescograph
NASA Astrophysics Data System (ADS)
Albert, Stefan; Surducan, Vasile
2017-12-01
An intervalometer is an attachment or facility on a camera that operates the shutter regularly at set intervals over a period. Professional cameras with built-in intervalometers are expensive and quite difficult to find. The Canon CHDK open source operating system allows intervalometer implementation on Canon cameras only; however, finding a Canon camera with a near-infrared (NIR) photographic lens at an affordable price is impossible. For experiments requiring several cameras (used as crescographs to measure plant growth, but also for coarse evaluation of the water content of leaves), the cost of the equipment is often over budget. Using two Raspberry Pi modules, each equipped with a low cost NIR camera and a WiFi adapter (for downloading pictures stored on the SD card), and some freely available software, we have implemented two low budget intervalometer cameras. The shutter interval, the number of pictures to be taken, the image resolution and some other parameters can be fully programmed. The cameras have been in continuous use for three months (July-October 2017) in a relevant environment (outside), proving the functionality of the concept.
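A minimal sketch of such an intervalometer is shown below, assuming the legacy picamera library on a Raspberry Pi; the interval, frame count, resolution, and output path are illustrative, not the values used in the experiment.

```python
import time
from picamera import PiCamera   # legacy Raspberry Pi camera library

INTERVAL_S = 300   # one frame every 5 minutes (illustrative)
N_FRAMES = 288     # 24 hours of coverage at this rate

camera = PiCamera()
camera.resolution = (2592, 1944)
time.sleep(2)      # let exposure and white balance settle

for i in range(N_FRAMES):
    camera.capture('/home/pi/frames/img_%04d.jpg' % i)
    time.sleep(INTERVAL_S)
```

Because everything is scripted, the same unit can be repurposed between the growth-monitoring and leaf-water-content configurations simply by changing the interval and resolution constants.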
NASA Astrophysics Data System (ADS)
Niemeyer, F.; Schima, R.; Grenzdörffer, G.
2013-08-01
Numerous unmanned aerial systems (UAS) are currently flooding the market. UAVs are specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest because the legal restrictions are still manageable and the payload capacities are sufficient for many imaging sensors. Currently, a camera system with four oblique and one nadir-looking camera is under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as the carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences from test flights.
MS Musgrave conducts CFES experiment on middeck
1983-04-09
STS006-03-381 (4-9 April 1983) --- Astronaut F. Story Musgrave, STS-6 mission specialist, monitors the activity of a sample in the continuous flow electrophoresis system (CFES) aboard the Earth-orbiting space shuttle Challenger. Dr. Musgrave is in the middeck area of the spacecraft. He has mounted a 35mm camera to record the activity through the window of the experiment. This frame was also photographed with a 35mm camera. Photo credit: NASA
Low Noise Camera for Suborbital Science Applications
NASA Technical Reports Server (NTRS)
Hyde, David; Robertson, Bryan; Holloway, Todd
2015-01-01
Low-cost, commercial-off-the-shelf- (COTS-) based science cameras are intended for lab use only and are not suitable for flight deployment as they are difficult to ruggedize and repackage into instruments. Also, COTS implementation may not be suitable since mission science objectives are tied to specific measurement requirements, and often require performance beyond that required by the commercial market. Custom camera development for each application is cost prohibitive for the International Space Station (ISS) or midrange science payloads due to nonrecurring expenses ($2,000 K) for ground-up camera electronics design. While each new science mission has a different suite of requirements for camera performance (detector noise, speed of image acquisition, charge-coupled device (CCD) size, operation temperature, packaging, etc.), the analog-to-digital conversion, power supply, and communications can be standardized to accommodate many different applications. The low noise camera for suborbital applications is a rugged standard camera platform that can accommodate a range of detector types and science requirements for use in inexpensive to mid range payloads supporting Earth science, solar physics, robotic vision, or astronomy experiments. Cameras developed on this platform have demonstrated the performance found in custom flight cameras at a price per camera more than an order of magnitude lower.
Digest of NASA earth observation sensors
NASA Technical Reports Server (NTRS)
Drummond, R. R.
1972-01-01
A digest of technical characteristics of remote sensors and supporting technological experiments uniquely developed under NASA Applications Programs for Earth Observation Flight Missions is presented. Included are camera systems, sounders, interferometers, communications and experiments. In the text, these are grouped by types, such as television and photographic cameras, lasers and radars, radiometers, spectrometers, technology experiments, and transponder technology experiments. Coverage of the brief history of development extends from the first successful earth observation sensor aboard Explorer 7 in October, 1959, through the latest funded and flight-approved sensors under development as of October 1, 1972. A standard resume format is employed to normalize and mechanize the information presented.
Koziol, Anna; Bordessoule, Michel; Ciavardini, Alessandra; Dawiec, Arkadiusz; Da Silva, Paulo; Desjardins, Kewin; Grybos, Pawel; Kanoute, Brahim; Laulhe, Claire; Maj, Piotr; Menneglier, Claude; Mercere, Pascal; Orsini, Fabienne; Szczygiel, Robert
2018-03-01
This paper presents the performance of a single-photon-counting hybrid pixel X-ray detector with synchrotron radiation. The camera was evaluated with respect to time-resolved experiments, namely pump-probe-probe experiments held at SOLEIL. The UFXC camera shows a very good energy resolution of around 1.5 keV and allows the minimum threshold to be set as low as 3 keV while keeping high count-rate capability. Measurements in a characteristic synchrotron filling mode demonstrate the proper separation of an isolated bunch of photons and the usability of the detector in time-resolved experiments.
The ideal subject distance for passport pictures.
Verhoff, Marcel A; Witzel, Carsten; Kreutz, Kerstin; Ramsthaler, Frank
2008-07-04
In an age of global combat against terrorism, the recognition and identification of people on document images is of increasing significance. Experiments and calculations have shown that the camera-to-subject distance - not the focal length of the lens - can have a significant effect on facial proportions. Modern passport pictures should be able to function as a reference image for automatic and manual picture comparisons. This requires a defined subject distance. It is completely unclear which subject distance, in the taking of passport photographs, is ideal for the recognition of the actual person. We show here that the camera-to-subject distance that is perceived as ideal depends on the face being photographed, even if the distance of 2 m was most frequently preferred. So far, the problem of the ideal camera-to-subject distance for faces has only been approached through technical calculations. We have, for the first time, answered this question experimentally with a double-blind experiment. Even if there is apparently no ideal camera-to-subject distance valid for every face, 2 m can be proposed as ideal for the taking of passport pictures. The first step would be the determination of a camera-to-subject distance for the taking of passport pictures within the standards. From an anthropological point of view, it would be interesting to find out which facial features lead to the preference of a shorter camera-to-subject distance and which lead to the preference of a longer one.
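The distance effect follows directly from perspective projection (our worked example, not taken from the paper). A feature at depth Z from the camera images with magnification proportional to 1/Z, so two facial features separated in depth by d are rendered with the size ratio

\[
\frac{m_{\text{near}}}{m_{\text{far}}} = \frac{Z + d}{Z}.
\]

With a facial depth of roughly d = 0.1 m, a portrait taken at Z = 0.5 m exaggerates the nearer features by 20%, whereas at Z = 2 m the exaggeration drops to 5%, consistent with the preference for the longer distance.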
Gao, Taotao; Xiang, Jie; Jin, Runsen; Zhang, Yajie; Wu, Han; Li, Hecheng
2016-10-01
The camera assistant plays a very important role in uniportal video-assisted thoracoscopic surgery (VATS), acting as the eyes of the surgeon and providing the VATS team with a stable and clear operating view. Thus, a good assistant should cooperate with the surgeon and manipulate the camera expertly to ensure eye-hand coordination. We have performed more than 100 uniportal VATS procedures in the Department of Thoracic Surgery at Ruijin Hospital. Based on our experience, we summarize the method of holding the camera, described as "ipsilateral, high, single-hand, sideways", which largely improves the comfort and fluency of surgery.
Geometric rectification of camera-captured document images.
Liang, Jian; DeMenthon, Daniel; Doermann, David
2008-04-01
Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images.
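The paper's texture-flow method handles curved pages; for the simpler planar special case, rectification reduces to a single homography, as in the hedged OpenCV sketch below (the corner coordinates, page size, and file names are placeholders).

```python
import cv2
import numpy as np

# Image corners of the (planar) page, e.g. from a quadrilateral detector;
# the coordinates here are placeholders, not measured values.
src = np.float32([[120, 80], [980, 60], [1010, 760], [90, 790]])

# Target frontal-flat rectangle sized to the page's aspect ratio.
w, h = 850, 1100
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

img = cv2.imread('document.jpg')
H = cv2.getPerspectiveTransform(src, dst)    # 3x3 homography
flat = cv2.warpPerspective(img, H, (w, h))   # frontal-flat view for OCR
cv2.imwrite('document_rectified.jpg', flat)
```

The curved-page case replaces the single homography with a per-region warp driven by the estimated 3D shape, but the output goal is the same frontal-flat, OCR-compatible view.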
Dynamic photoelasticity by TDI imaging
NASA Astrophysics Data System (ADS)
Asundi, Anand K.; Sajan, M. R.
2001-06-01
High speed photographic systems like the image rotation camera, the Cranz-Schardin camera and the drum camera are typically used for the recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording systems requiring time consuming and tedious wet processing of the films. Digital cameras are replacing conventional cameras to a certain extent in static experiments. Recently, there has been considerable interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration mode for digitally recording dynamic photoelastic stress patterns. Applications in strobe and streak photoelastic pattern recording, as well as system limitations, are explained in the paper.
Brownian Movement and Avogadro's Number: A Laboratory Experiment.
ERIC Educational Resources Information Center
Kruglak, Haym
1988-01-01
Reports an experimental procedure for studying Einstein's theory of Brownian movement using commercially available latex microspheres and a video camera. Describes how students can monitor sphere motions and determine Avogadro's number. Uses a black and white video camera, microscope, and TV. (ML)
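The quantitative analysis rests on Einstein's 1905 result, stated here in standard textbook form (our addition). The mean-square displacement and the Stokes-Einstein diffusion coefficient give

\[
\langle x^2 \rangle = 2Dt, \qquad D = \frac{RT}{6\pi \eta a N_A}
\quad\Longrightarrow\quad
N_A = \frac{R T\, t}{3\pi \eta a \,\langle x^2 \rangle},
\]

so Avogadro's number follows from the displacements tracked on the video record once the temperature T, fluid viscosity η, and sphere radius a are known.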
Space-variant restoration of images degraded by camera motion blur.
Sorel, Michal; Flusser, Jan
2008-02-01
We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.
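The abstract names a cost functional over the sharp image u and depth map d without reproducing it; a generic form of such multichannel variational functionals (our hedged sketch, not the paper's exact expression) is

\[
E(u, d) = \sum_i \int_\Omega \bigl( (h_i(d) * u) - g_i \bigr)^2 \, dx
\;+\; \lambda \int_\Omega |\nabla u| \, dx
\;+\; \mu \int_\Omega |\nabla d| \, dx,
\]

where the g_i are the observed blurred images, the h_i(d) are depth-dependent motion-blur kernels determined by the camera trajectory, and the total-variation terms regularize the image and depth estimates.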
NASA Astrophysics Data System (ADS)
Santos, C. Almeida; Costa, C. Oliveira; Batista, J.
2016-05-01
The paper describes a kinematic model-based solution to estimate simultaneously the calibration parameters of the vision system and the full motion (6-DOF) of large civil engineering structures, namely long-deck suspension bridges, from a sequence of stereo images captured by digital cameras. Using an arbitrary number of images and assuming smooth structure motion, an Iterated Extended Kalman Filter is used to recursively estimate the projection matrices of the cameras and the structure's full motion (displacement and rotation) over time, helping to fulfil structural health monitoring requirements. Results related to the performance evaluation, obtained by numerical simulation and with real experiments, are reported. The real experiments were carried out in indoor and outdoor environments using a reduced structure model to impose controlled motions. In both cases, a minimum setup comprising only two cameras and four non-coplanar tracking points yielded highly accurate results for on-line camera calibration and structure full-motion estimation.
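For readers unfamiliar with the estimator, the Iterated Extended Kalman Filter follows the standard predict/update form (textbook equations; the paper's specific state vector of projection matrices and 6-DOF motion is not reproduced here):

\[
\hat{x}_k^- = f(\hat{x}_{k-1}), \qquad P_k^- = F_k P_{k-1} F_k^\top + Q,
\]

with the measurement update iterated from \( x^{(0)} = \hat{x}_k^- \):

\[
K^{(j)} = P_k^- H^{(j)\top} \bigl( H^{(j)} P_k^- H^{(j)\top} + R \bigr)^{-1},
\qquad
x^{(j+1)} = \hat{x}_k^- + K^{(j)} \bigl( z_k - h(x^{(j)}) - H^{(j)} (\hat{x}_k^- - x^{(j)}) \bigr),
\]

re-linearizing \( H^{(j)} = \partial h / \partial x \big|_{x^{(j)}} \) at each iteration until convergence. The iteration is what lets the filter cope with the strongly nonlinear camera projection equations.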
Sub-picosecond streak camera measurements at LLNL: From IR to x-rays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuba, J; Shepherd, R; Booth, R
An ultra-fast, sub-picosecond resolution streak camera has recently been developed at LLNL. The camera is a versatile instrument with a wide operating wavelength range. A temporal resolution of up to 300 fs can be achieved, with routine operation at 500 fs. The streak camera has been operated over a wide wavelength range from IR to x-rays up to 2 keV. In this paper we briefly review the main design features that result in the unique properties of the streak camera and present several of its scientific applications: (1) streak camera characterization using a Michelson interferometer in the visible range, (2) a temporally resolved study of a transient x-ray laser at 14.7 nm, which enabled us to vary the x-ray laser pulse duration from ~2-6 ps by changing the pump laser parameters, and (3) an example of a time-resolved spectroscopy experiment with the streak camera.
A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology
Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi
2015-01-01
Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is also essential for the TSC of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of the flight experiments illustrate that the geo-position accuracy of the prototype system achieves 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, results of the comparison between the traditional MDCS (MADC II) and the proposed MDCS demonstrate that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of the senior photogrammetry products. PMID:25835187
A novel multi-digital camera system based on tilt-shift photography technology.
Sun, Tao; Fang, Jun-Yong; Zhao, Dong; Liu, Xue; Tong, Qing-Xi
2015-03-31
Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is also essential for the TSC of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of the flight experiments illustrate that the geo-position accuracy of the prototype system achieves 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, results of the comparison between the traditional MDCS (MADC II) and the proposed MDCS demonstrate that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of the senior photogrammetry products.
NASA Astrophysics Data System (ADS)
Terzopoulos, Demetri; Qureshi, Faisal Z.
Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Arthur; van Beuzekom, Martin; Bouwens, Bram
Here, we demonstrate a coincidence velocity map imaging apparatus equipped with a novel time-stamping fast optical camera, Tpx3Cam, whose high sensitivity and nanosecond timing resolution allow for simultaneous position and time-of-flight detection. This single detector design is simple, flexible, and capable of highly differential measurements. We show detailed characterization of the camera and its application in strong field ionization experiments.
Zhao, Arthur; van Beuzekom, Martin; Bouwens, Bram; ...
2017-11-07
Here, we demonstrate a coincidence velocity map imaging apparatus equipped with a novel time-stamping fast optical camera, Tpx3Cam, whose high sensitivity and nanosecond timing resolution allow for simultaneous position and time-of-flight detection. This single detector design is simple, flexible, and capable of highly differential measurements. We show detailed characterization of the camera and its application in strong field ionization experiments.
Computational Studies of X-ray Framing Cameras for the National Ignition Facility
2013-06-01
The NIF is the world's most powerful laser facility and is... a phosphor screen where the output is recorded. The x-ray framing cameras have provided excellent information. As the yields at NIF have increased... experiments on the NIF. The basic operation of these cameras is shown in Fig. 1. Incident photons generate photoelectrons both in the pores of the MCP and
Positron emission particle tracking using a modular positron camera
NASA Astrophysics Data System (ADS)
Parker, D. J.; Leadbeater, T. W.; Fan, X.; Hausard, M. N.; Ingram, A.; Yang, Z.
2009-06-01
The technique of positron emission particle tracking (PEPT), developed at Birmingham in the early 1990s, enables a radioactively labelled tracer particle to be accurately tracked as it moves between the detectors of a "positron camera". In 1999 the original Birmingham positron camera, which consisted of a pair of MWPCs, was replaced by a system comprising two NaI(Tl) gamma camera heads operating in coincidence. This system has been successfully used for PEPT studies of a wide range of granular and fluid flow processes. More recently a modular positron camera has been developed using a number of the bismuth germanate (BGO) block detectors from standard PET scanners (CTI ECAT 930 and 950 series). This camera has flexible geometry, is transportable, and is capable of delivering high data rates. This paper presents simple models of its performance, and initial experience of its use in a range of geometries and applications.
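At the core of PEPT is locating the tracer near the common crossing point of many coincidence lines. A minimal least-squares sketch is below; the production Birmingham algorithm additionally rejects corrupted events iteratively, which is omitted here.

```python
import numpy as np

def locate_tracer(points, directions):
    """Least-squares point closest to a set of 3D lines, each given by a
    point p_i on the line and a unit direction d_i (one coincidence line
    per detected back-to-back annihilation photon pair).

    Minimizes sum_i || (I - d_i d_i^T)(x - p_i) ||^2.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the line
        A += P
        b += P @ p
    return np.linalg.solve(A, b)
```

In practice the fit is repeated after discarding the lines lying farthest from the current estimate, since a significant fraction of events are corrupted by photon scattering.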
Automatic source camera identification using the intrinsic lens radial distortion
NASA Astrophysics Data System (ADS)
Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.
2006-11-01
Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of a source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
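As a hedged sketch of the classification stage described above, the snippet below trains a support vector machine on per-image radial-distortion features; the (k1, k2) values are synthetic placeholders standing in for the aberration measurements the paper extracts.

```python
# SVM classification of cameras by distortion "fingerprint" (synthetic data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# five cameras, each with its own characteristic (k1, k2) pair
centers = rng.normal(scale=0.2, size=(5, 2))
X = np.vstack([c + rng.normal(scale=0.02, size=(100, 2)) for c in centers])
y = np.repeat(np.arange(5), 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print("identification accuracy:", clf.score(X_te, y_te))
```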
ERIC Educational Resources Information Center
Physics Education, 1984
1984-01-01
Describes: (1) experiments using a simple phonocardiograph; (2) radioactivity experiments involving a VELA used as a ratemeter; (3) a 25cm continuously operating Foucault pendulum; and (4) camera control of experiments. Descriptions of equipment needed are provided when applicable. (JN)
Plate refractive camera model and its applications
NASA Astrophysics Data System (ADS)
Huang, Longxiang; Zhao, Xu; Cai, Shen; Liu, Yuncai
2017-03-01
In real applications, a pinhole camera capturing objects through a planar parallel transparent plate is frequently employed. Due to the refractive effects of the plate, such an imaging system does not comply with the conventional pinhole camera model. Although the system is ubiquitous, it has not been thoroughly studied. This paper presents a simple virtual camera model, called the plate refractive camera model, which has a form similar to a pinhole camera model and can efficiently model refraction through a plate. The key idea is to employ a pixel-wise viewpoint concept to encode the refraction effects into a pixel-wise pinhole camera model. The proposed camera model realizes an efficient forward projection computation method and has several advantages in applications. First, the model can help to compute the caustic surface that represents the changes of the camera viewpoints. Second, the model has strengths in analyzing and rectifying the image caustic distortion caused by the plate refraction effects. Third, the model can be used to calibrate the camera's intrinsic parameters without removing the plate. Last but not least, the model contributes plate refractive triangulation methods that solve the plate refractive triangulation problem easily in multiple views. We verify our theory in both synthetic and real experiments.
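The core refraction geometry the model encodes can be illustrated briefly. The sketch below (a simplification under stated assumptions, not the authors' formulation) computes the lateral shift of a ray crossing a parallel plate, which is what makes each pixel behave as if it had its own displaced viewpoint.

```python
# Lateral displacement of a ray through a parallel plate (Snell's law).
import numpy as np

def plate_lateral_shift(theta1, t, n):
    """theta1: incidence angle (rad); t: plate thickness; n: relative index.
    The exit ray is parallel to the incident ray but shifted sideways."""
    theta2 = np.arcsin(np.sin(theta1) / n)          # refraction inside plate
    return t * np.sin(theta1 - theta2) / np.cos(theta2)

# a 45-degree ray through a 5 mm glass plate (n ~ 1.5): shift ~1.6 mm
print(plate_lateral_shift(np.deg2rad(45.0), t=5.0, n=1.5))
```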
Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras
NASA Astrophysics Data System (ADS)
Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro
2018-03-01
Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.
Gao, Taotao; Xiang, Jie; Jin, Runsen; Zhang, Yajie; Wu, Han
2016-01-01
The camera assistant plays a very important role in uniportal video-assisted thoracoscopic surgery (VATS), acting as the eye of the surgeon and providing the VATS team with a stable and clear operating view. Thus, a good assistant should cooperate with the surgeon and manipulate the camera expertly to ensure eye-hand coordination. We have performed more than 100 uniportal VATS procedures in the Department of Thoracic Surgery at Ruijin Hospital. Based on our experience, we summarized the method of holding the camera, known as “ipsilateral, high, single-hand, sideways”, which largely improves the comfort and fluency of surgery. PMID:27867573
Astronaut Kathryn Thornton on HST photographed by Electronic Still Camera
1993-12-05
S61-E-011 (5 Dec 1993) --- This view of astronaut Kathryn C. Thornton working on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Thornton, anchored to the end of the Remote Manipulator System (RMS) arm, is installing the +V2 Solar Array Panel as a replacement for the original one removed earlier. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
NASA Astrophysics Data System (ADS)
Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua
2014-11-01
Stereo vision is key in visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after camera calibration, but the positional relationship between the cameras can change because of vibration, knocks, and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes unusable. A technology providing both real-time examination and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimating the fundamental matrix from the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the distribution of features in actual scene images and introduce a regionally weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data prove that the method improves the estimation robustness and accuracy of the fundamental matrix. Finally, we conduct an experiment computing the relationship of a pair of stereo cameras to demonstrate the accurate performance of the algorithm.
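Steps (i)-(iii) map directly onto standard tools. A minimal OpenCV sketch follows, using synthetic matches and known intrinsics; the paper's regional weighted normalization is not reproduced here, so cv2.findFundamentalMat with RANSAC stands in for the estimation step.

```python
# F -> E -> (R, t) recalibration sketch with synthetic correspondences.
import cv2
import numpy as np

rng = np.random.default_rng(2)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

# synthetic scene points and a known relative pose, to generate matches
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(60, 3))
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.2, 0.0]))
t_true = np.array([1.0, 0.0, 0.0])
x1 = (K @ X.T).T
x1 = x1[:, :2] / x1[:, 2:]
X2 = (R_true @ X.T).T + t_true
x2 = (K @ X2.T).T
x2 = x2[:, :2] / x2[:, 2:]

# (i) fundamental matrix from correspondences
F, mask = cv2.findFundamentalMat(x1, x2, cv2.FM_RANSAC, 1.0, 0.999)
# (ii) essential matrix from F and the intrinsics
E = K.T @ F @ K
# (iii) external parameters by decomposing E
_, R, t, _ = cv2.recoverPose(E, x1, x2, K)
print("rotation error:", np.linalg.norm(cv2.Rodrigues(R @ R_true.T)[0]))
```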
NASA Astrophysics Data System (ADS)
Feng, Zhixin
2018-02-01
Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method is proposed based on digital image correlation. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate the projector. During the calibration process, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. Thereby, a dataset for projector calibration is generated. Then the projector can be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
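A hedged sketch of the "projector as inverse camera" idea: once the speckle-based correspondences have mapped board points into projector image coordinates, a standard calibration solves for the projector intrinsics. The data below are synthetic stand-ins for the DIC-derived correspondences.

```python
# Calibrating a "projector" with cv2.calibrateCamera on synthetic views.
import cv2
import numpy as np

rng = np.random.default_rng(3)
board = np.zeros((6 * 9, 3), np.float32)
board[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * 30.0   # 30 mm grid

K_true = np.array([[1400.0, 0, 512], [0, 1400.0, 384], [0, 0, 1]])
obj_pts, img_pts = [], []
for _ in range(8):                                  # eight board orientations
    rvec = rng.normal(scale=0.25, size=3)
    tvec = np.array([rng.normal(scale=20), rng.normal(scale=20), 600.0])
    proj, _ = cv2.projectPoints(board, rvec, tvec, K_true, np.zeros(5))
    obj_pts.append(board)
    img_pts.append(proj.astype(np.float32))

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, (1024, 768), None, None)
print("reprojection RMS:", rms)
print(K)   # recovers K_true up to noise
```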
1990-07-01
...electrolytic dissociation of the electrode material, and to provide a good gas evolution which ... ...pedo applications seem to be still somewhat out of the ... rod cathode. A unique feature of this preliminary experiment was the use of a prototype gated, intensified video camera. This camera is based on a microprocessor-controlled microchannel plate intensifier tube. The intensifier tube image is focused on a standard CCD video camera so that the object ...
Analysis of Brown camera distortion model
NASA Astrophysics Data System (ADS)
Nowakowski, Artur; Skarbek, Władysław
2013-10-01
Contemporary image acquisition devices introduce optical distortion into the image, which results in pixel displacement and therefore needs to be compensated for in many computer vision applications. The distortion is usually modeled by the Brown distortion model, whose parameters can be included in the camera calibration task. In this paper we describe the original model and its dependencies, and analyze the orthogonality, with regard to radius, of its decentering distortion component. We also report experiments with the camera calibration algorithm included in the OpenCV library; in particular, the stability of the distortion parameter estimation is evaluated.
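For reference, a minimal numpy implementation of the Brown model in its common form (radial terms k1, k2, k3 plus decentering terms p1, p2, the same parameterization OpenCV's calibration uses); the coefficient values below are illustrative.

```python
# Brown distortion model: radial + decentering (tangential) components.
import numpy as np

def brown_distort(xy, k1, k2, k3, p1, p2):
    """Map ideal normalized image points (N,2) to distorted points."""
    x, y = xy[:, 0], xy[:, 1]
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    dx = 2 * p1 * x * y + p2 * (r2 + 2 * x**2)    # decentering component
    dy = p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return np.stack([x * radial + dx, y * radial + dy], axis=1)

pts = np.array([[0.1, 0.05], [0.3, -0.2]])
print(brown_distort(pts, k1=-0.25, k2=0.07, k3=0.0, p1=1e-3, p2=-5e-4))
```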
"Teacher in Space" Trainees - Arriflex Motion Picture Camera
1985-09-20
S85-40670 (18 Sept. 1985) --- The two teachers, Sharon Christa McAuliffe and Barbara R. Morgan (out of frame) have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985 with the STS-51L crew and learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. McAuliffe zeroes in on a test subject during a practice session with the Arriflex. Photo credit: NASA
Teacher-in-Space Trainees - Arriflex Motion Picture Camera
1985-09-20
S85-40669 (18 Sept. 1985) --- The two teachers, Sharon Christa McAuliffe (left) and Barbara R. Morgan have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985 with the STS-51L crew and learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. Morgan adjusts a lens as a studious McAuliffe looks on. Photo credit: NASA
"Teacher in Space" Trainees - Arriflex Motion Picture Camera
1985-09-20
S85-40671 (18 Sept. 1985) --- The two teachers, Barbara R. Morgan and Sharon Christa McAuliffe (out of frame) have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985 with the STS-51L crew and learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. Morgan zeroes in on a test subject during a practice session with the Arriflex. Photo credit: NASA
Color film spectral properties test experiment for target simulation
NASA Astrophysics Data System (ADS)
Liu, Xinyue; Ming, Xing; Fan, Da; Guo, Wenji
2017-04-01
In hardware-in-the-loop testing of an aviation spectral camera, the liquid crystal light valve and digital micro-mirror device cannot simulate the spectral characteristics of the landmark. A test system framework based on color film is proposed for testing the spectral camera, and the spectral characteristics of the color film are tested in this paper. The results of the experiment show that differences exist between the landmark spectrum curve and the film spectrum curve. However, the peak of the spectrum curve changes according to the color, and the curve is similar to the standard color traps. So, if the error between the landmark and the film is calibrated and compensated, the film can be utilized in hardware-in-the-loop tests of the aviation spectral camera.
Identifying People with Soft-Biometrics at Fleet Week
2013-03-01
onboard sensors. This included: Color Camera: Located in the right eye, Octavia stored 640x480 RGB images at ~4 Hz from a Point Grey Firefly camera. ... Face Detection: The Fleet Week experiments demonstrated the potential of soft biometrics for recognition, but all of the existing algorithms currently ...
Directing Performers for the Cameras.
ERIC Educational Resources Information Center
Wilson, George P., Jr.
An excellent way for an undergraduate, novice director of television and film to pick up background experience in directing performers for cameras is by participating in nonbroadcast-film activities, such as theatre, dance, and variety acts, both as performer and as director. This document describes the varieties of activities, including creative,…
Making Connections with Digital Data
ERIC Educational Resources Information Center
Leonard, William; Bassett, Rick; Clinger, Alicia; Edmondson, Elizabeth; Horton, Robert
2004-01-01
State-of-the-art digital cameras open up enormous possibilities in the science classroom, especially when used as data collectors. Because most high school students are not fully formal thinkers, the digital camera can provide a much richer learning experience than traditional observation. Data taken through digital images can make the…
The research on calibration methods of dual-CCD laser three-dimensional human face scanning system
NASA Astrophysics Data System (ADS)
Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Yang, Fengting; Shi, Shendong
2013-09-01
In this paper, building on the performance advantages of the two-step method, we combine the stereo matching of binocular stereo vision with active laser scanning to calibrate the system. First, we select a reference camera coordinate system as the world coordinate system and unify the coordinates of the two CCD cameras. We then obtain the new perspective projection matrix (PPM) of each camera after epipolar rectification, from which the corresponding epipolar equation of the two cameras can be defined. Thus, utilizing the trigonometric parallax method, we can measure a space point's position after distortion correction and achieve stereo matching calibration between two image points. Experiments verify that this method can improve accuracy while guaranteeing system stability. The stereo matching calibration has a simple, low-cost process and simplifies regular maintenance work. It can acquire 3D coordinates from planar checkerboard calibration alone, without the need to design a specific standard target or use an electronic theodolite. It was found during the experiments that two-step calibration error and lens distortion lead to stratification of the point cloud data. The proposed calibration method, combining active line laser scanning and binocular stereo vision, has the advantages of both and is more flexibly applicable. Theoretical analysis and experiments show the method is reasonable.
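The triangulation step at the heart of this pipeline can be sketched compactly. The snippet below assumes the two cameras are already expressed in the reference-camera frame via their perspective projection matrices, and intersects matched image points with cv2.triangulatePoints; the matrices are illustrative placeholders, not the calibrated PPMs.

```python
# Two-view triangulation from projection matrices (synthetic example).
import cv2
import numpy as np

K = np.array([[900.0, 0, 320], [0, 900.0, 240], [0, 0, 1]])
# reference camera at the origin; second camera offset along the baseline
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])

X_true = np.array([[0.0, 0.0, 1000.0], [50.0, -20.0, 1200.0]]).T  # 3xN, mm

def project(P, X):
    x = P @ np.vstack([X, np.ones((1, X.shape[1]))])
    return x[:2] / x[2]

x1, x2 = project(P1, X_true), project(P2, X_true)
Xh = cv2.triangulatePoints(P1, P2, x1, x2)   # homogeneous 4xN result
print((Xh[:3] / Xh[3]).T)                    # recovers X_true
```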
NASA Astrophysics Data System (ADS)
Hanel, A.; Stilla, U.
2017-05-01
Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras, and often no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.
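One plausible reading of the per-frame pose step is a PnP solve against ground control points taken from the point cloud; the paper does not name its solver, so the sketch below with cv2.solvePnP and synthetic points is an assumption, not the authors' method.

```python
# Camera pose from ground control points via PnP (synthetic stand-ins).
import cv2
import numpy as np

K = np.array([[1200.0, 0, 640], [0, 1200.0, 360], [0, 0, 1]])
gcp_3d = np.array([[0, 0, 5], [1, 0, 6], [0, 1, 7], [1, 1, 5],
                   [-1, 0.5, 6], [0.5, -1, 8]], np.float32)  # vehicle frame
rvec_true = np.array([0.1, 0.0, 0.05])
tvec_true = np.array([0.2, -0.1, 0.0])
img, _ = cv2.projectPoints(gcp_3d, rvec_true, tvec_true, K, np.zeros(5))

ok, rvec, tvec = cv2.solvePnP(gcp_3d, img, K, np.zeros(5))
print(ok, rvec.ravel(), tvec.ravel())   # recovers the true pose
```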
Construct and face validity of a virtual reality-based camera navigation curriculum.
Shetty, Shohan; Panait, Lucian; Baranoski, Jacob; Dudrick, Stanley J; Bell, Robert L; Roberts, Kurt E; Duffy, Andrew J
2012-10-01
Camera handling and navigation are essential skills in laparoscopic surgery. Surgeons rely on camera operators, usually the least experienced members of the team, for visualization of the operative field. Essential skills for camera operators include maintaining orientation, an effective horizon, appropriate zoom control, and a clean lens. Virtual reality (VR) simulation may be a useful adjunct to developing camera skills in a novice population. No standardized VR-based camera navigation curriculum is currently available. We developed and implemented a novel curriculum on the LapSim VR simulator platform for our residents and students. We hypothesize that our curriculum will demonstrate construct and face validity in our trainee population, distinguishing levels of laparoscopic experience as part of a realistic training curriculum. Overall, 41 participants with various levels of laparoscopic training completed the curriculum. Participants included medical students, surgical residents (Postgraduate Years 1-5), fellows, and attendings. We stratified subjects into three groups (novice, intermediate, and advanced) based on previous laparoscopic experience. We assessed face validity with a questionnaire. The proficiency-based curriculum consists of three modules: camera navigation, coordination, and target visualization using 0° and 30° laparoscopes. Metrics include time, target misses, drift, path length, and tissue contact. We analyzed data using analysis of variance and Student's t-test. We noted significant differences in repetitions required to complete the curriculum: 41.8 for novices, 21.2 for intermediates, and 11.7 for the advanced group (P < 0.05). In the individual modules, coordination required 13.3 attempts for novices, 4.2 for intermediates, and 1.7 for the advanced group (P < 0.05). Target visualization required 19.3 attempts for novices, 13.2 for intermediates, and 8.2 for the advanced group (P < 0.05). Participants believe that training improves camera handling skills (95%), is relevant to surgery (95%), and is a valid training tool (93%). Graphics (98%) and realism (93%) were highly regarded. The VR-based camera navigation curriculum demonstrates construct and face validity for our training population. Camera navigation simulation may be a valuable tool that can be integrated into training protocols for residents and medical students during their surgery rotations. Copyright © 2012 Elsevier Inc. All rights reserved.
High-resolution continuum observations of the Sun
NASA Technical Reports Server (NTRS)
Zirin, Harold
1987-01-01
The aim of the PFI or photometric filtergraph instrument is to observe the Sun in the continuum with as high resolution as possible and utilizing the widest range of wavelengths. Because of financial and political problems the CCD was eliminated so that the highest photometric accuracy is only obtainable by comparison with the CFS images. Presently there is a limitation to wavelengths above 2200 A due to the lack of sensitivity of untreated film below 2200 A. Therefore the experiment at present consists of a film camera with 1000 feet of film and 12 filters. The PFI experiments are outlined using only two cameras. Some further problems of the experiment are addressed.
NASA Technical Reports Server (NTRS)
Dillman, R. D.; Eav, B. B.; Baldwin, R. R.
1984-01-01
The Office of Space and Terrestrial Applications-3 payload, scheduled for flight on STS Mission 17, consists of four earth-observation experiments. The Feature Identification and Location Experiment-1 will spectrally sense and numerically classify the earth's surface into water, vegetation, bare earth, and ice/snow/cloud-cover, by means of spectra ratio techniques. The Measurement of Atmospheric Pollution from Satellite experiment will measure CO distribution in the middle and upper troposphere. The Imaging Camera-B uses side-looking SAR to create two-dimensional images of the earth's surface. The Large Format Camera/Attitude Reference System will collect metric quality color, color-IR, and black-and-white photographs for topographic mapping.
An intelligent space for mobile robot localization using a multi-camera system.
Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel
2014-08-15
This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.
An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System
Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel
2014-01-01
This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization. PMID:25196009
Optical fringe-reflection deflectometry with bundle adjustment
NASA Astrophysics Data System (ADS)
Xiao, Yong-Liang; Li, Sikun; Zhang, Qican; Zhong, Jianxin; Su, Xianyu; You, Zhisheng
2018-06-01
Liquid crystal display (LCD) screens are located outside of a camera's field of view in fringe-reflection deflectometry. Therefore, fringes that are displayed on LCD screens are obtained through specular reflection by a fixed camera. Thus, the pose calibration between the camera and LCD screen is one of the main challenges in fringe-reflection deflectometry. A markerless planar mirror is used to reflect the LCD screen more than three times, and the fringes are mapped into the fixed camera. The geometrical calibration can be accomplished by estimating the pose between the camera and the virtual image of fringes. Considering the relation between their pose, the incidence and reflection rays can be unified in the camera frame, and a forward triangulation intersection can be operated in the camera frame to measure three-dimensional (3D) coordinates of the specular surface. In the final optimization, constraint-bundle adjustment is operated to refine simultaneously the camera intrinsic parameters, including distortion coefficients, estimated geometrical pose between the LCD screen and camera, and 3D coordinates of the specular surface, with the help of the absolute phase collinear constraint. Simulation and experiment results demonstrate that the pose calibration with planar mirror reflection is simple and feasible, and the constraint-bundle adjustment can enhance the 3D coordinate measurement accuracy in fringe-reflection deflectometry.
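A much-reduced sketch of the bundle-adjustment idea is given below: pack the parameters to refine (here just one camera pose) into a vector and minimize reprojection residuals with a nonlinear least-squares solver. The full method also refines intrinsics, distortion, and the 3-D points under the absolute-phase collinearity constraint, all of which is omitted here.

```python
# Minimal reprojection-error refinement with scipy least_squares.
import cv2
import numpy as np
from scipy.optimize import least_squares

K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])
X = np.random.default_rng(4).uniform([-50, -50, 400], [50, 50, 600], (40, 3))
rvec0, tvec0 = np.array([0.05, -0.1, 0.02]), np.array([5.0, -3.0, 10.0])
obs, _ = cv2.projectPoints(X, rvec0, tvec0, K, np.zeros(5))
obs = obs.reshape(-1, 2)

def residuals(params):
    # params = [rvec (3), tvec (3)]; residual = reprojection error
    proj, _ = cv2.projectPoints(X, params[:3], params[3:], K, np.zeros(5))
    return (proj.reshape(-1, 2) - obs).ravel()

sol = least_squares(residuals, x0=np.zeros(6), method="lm")
print("recovered rvec/tvec:", sol.x)   # converges to rvec0, tvec0
```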
X-ray detectors at the Linac Coherent Light Source.
Blaj, Gabriel; Caragiulo, Pietro; Carini, Gabriella; Carron, Sebastian; Dragone, Angelo; Freytag, Dietrich; Haller, Gunther; Hart, Philip; Hasi, Jasmine; Herbst, Ryan; Herrmann, Sven; Kenney, Chris; Markovic, Bojan; Nishimura, Kurtis; Osier, Shawn; Pines, Jack; Reese, Benjamin; Segal, Julie; Tomada, Astrid; Weaver, Matt
2015-05-01
Free-electron lasers (FELs) present new challenges for camera development compared with conventional light sources. At SLAC a variety of technologies are being used to match the demands of the Linac Coherent Light Source (LCLS) and to support a wide range of scientific applications. In this paper an overview of X-ray detector design requirements at FELs is presented and the various cameras in use at SLAC are described for the benefit of users planning experiments or analysts looking at data. Features and operation of the CSPAD camera, which is currently deployed at LCLS, are discussed, and the ePix family, a new generation of cameras under development at SLAC, is introduced.
Automated Meteor Detection by All-Sky Digital Camera Systems
NASA Astrophysics Data System (ADS)
Suk, Tomáš; Šimberová, Stanislava
2017-12-01
We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.
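One plausible detection step, sketched under stated assumptions (the operational pipeline described above involves considerably more pre-processing and time-sequence logic): difference consecutive all-sky frames to suppress static stars, threshold, and look for straight bright streaks with a probabilistic Hough transform.

```python
# Candidate meteor-trail detection via frame differencing + Hough lines.
import cv2
import numpy as np

def detect_streaks(prev_frame, frame):
    """Return line segments (x1, y1, x2, y2) of candidate meteor trails."""
    diff = cv2.absdiff(frame, prev_frame)           # suppress static stars
    blur = cv2.GaussianBlur(diff, (5, 5), 0)
    _, mask = cv2.threshold(blur, 25, 255, cv2.THRESH_BINARY)
    lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=30,
                            minLineLength=40, maxLineGap=5)
    return [] if lines is None else lines.reshape(-1, 4)

# toy usage: a synthetic streak appearing in the second frame
a = np.zeros((480, 640), np.uint8)
b = a.copy()
cv2.line(b, (100, 100), (300, 220), 255, 2)
print(detect_streaks(a, b))
```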
X-ray detectors at the Linac Coherent Light Source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaj, Gabriel; Caragiulo, Pietro; Carini, Gabriella
Free-electron lasers (FELs) present new challenges for camera development compared with conventional light sources. At SLAC a variety of technologies are being used to match the demands of the Linac Coherent Light Source (LCLS) and to support a wide range of scientific applications. In this paper an overview of X-ray detector design requirements at FELs is presented and the various cameras in use at SLAC are described for the benefit of users planning experiments or analysts looking at data. Features and operation of the CSPAD camera, which is currently deployed at LCLS, are discussed, and the ePix family, a new generation of cameras under development at SLAC, is introduced.
X-ray detectors at the Linac Coherent Light Source
Blaj, Gabriel; Caragiulo, Pietro; Carini, Gabriella; ...
2015-04-21
Free-electron lasers (FELs) present new challenges for camera development compared with conventional light sources. At SLAC a variety of technologies are being used to match the demands of the Linac Coherent Light Source (LCLS) and to support a wide range of scientific applications. In this paper an overview of X-ray detector design requirements at FELs is presented and the various cameras in use at SLAC are described for the benefit of users planning experiments or analysts looking at data. Features and operation of the CSPAD camera, which is currently deployed at LCLS, are discussed, and the ePix family, a new generation of cameras under development at SLAC, is introduced.
X-ray detectors at the Linac Coherent Light Source
Blaj, Gabriel; Caragiulo, Pietro; Carini, Gabriella; Carron, Sebastian; Dragone, Angelo; Freytag, Dietrich; Haller, Gunther; Hart, Philip; Hasi, Jasmine; Herbst, Ryan; Herrmann, Sven; Kenney, Chris; Markovic, Bojan; Nishimura, Kurtis; Osier, Shawn; Pines, Jack; Reese, Benjamin; Segal, Julie; Tomada, Astrid; Weaver, Matt
2015-01-01
Free-electron lasers (FELs) present new challenges for camera development compared with conventional light sources. At SLAC a variety of technologies are being used to match the demands of the Linac Coherent Light Source (LCLS) and to support a wide range of scientific applications. In this paper an overview of X-ray detector design requirements at FELs is presented and the various cameras in use at SLAC are described for the benefit of users planning experiments or analysts looking at data. Features and operation of the CSPAD camera, which is currently deployed at LCLS, are discussed, and the ePix family, a new generation of cameras under development at SLAC, is introduced. PMID:25931071
Geometrical calibration television measuring systems with solid state photodetectors
NASA Astrophysics Data System (ADS)
Matiouchenko, V. G.; Strakhov, V. V.; Zhirkov, A. O.
2000-11-01
Various optical measuring methods for deriving information about the size and form of objects are now used in different branches: mechanical engineering, medicine, art, and criminalistics. Measuring by means of digital television systems is one of these methods. The development of this direction is promoted by the appearance on the market of small-sized television cameras and frame grabbers of various types and costs. There are many television measuring systems using expensive cameras, but the accuracy performance of low-cost cameras is also of interest to system developers. For this reason, the inexpensive mountingless camera SK1004CP (format 1/3', cost up to $40) and the Aver2000 frame grabber were used in the experiments.
Hubble Space Telescope photographed by Electronic Still Camera
1993-12-04
S61-E-008 (4 Dec 1993) --- This view of the Earth-orbiting Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view was taken during rendezvous operations. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope. Over a period of five days, four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Electronic Still Camera image of Astronaut Claude Nicollier working with RMS
1993-12-05
S61-E-006 (5 Dec 1993) --- The robot arm controlling work of Swiss scientist Claude Nicollier was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. With the mission specialist's assistance, Endeavour's crew captured the Hubble Space Telescope (HST) on December 4, 1993. Four of the seven crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Plenoptic camera based on a liquid crystal microlens array
NASA Astrophysics Data System (ADS)
Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Xie, Changsheng
2015-09-01
A type of liquid crystal microlens array (LCMLA), whose focal length is tunable by the voltage signals applied between its top and bottom electrodes, is fabricated, and its common optical focusing characteristics are tested. The relationship between the focal length and the applied voltage signals is given. The LCMLA is integrated with an image sensor and further coupled with a main lens to construct a plenoptic camera. Several raw images at different applied voltage signals are acquired and compared using the LCMLA-based plenoptic camera we constructed. Our experiments demonstrate that by utilizing an LCMLA in a plenoptic camera, the focused zone of the LCMLA-based plenoptic camera can be shifted effectively simply by changing the voltage signals loaded between the electrodes of the LCMLA, which is equivalent to extending the depth of field.
NASA Astrophysics Data System (ADS)
Liu, Yu-Che; Huang, Chung-Lin
2013-03-01
This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) imagery of the human object's face for biometric purposes, (2) optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on expected capture conditions such as the camera-subject distance, pan/tilt angles of capture, face visibility, and others. This objective function serves to effectively balance the number of captures per subject and the quality of captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
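A minimal sketch of the kind of capture-quality objective described, with weighted terms for camera-subject distance, pan/tilt deviation, and face visibility, and each pedestrian assigned to the best-scoring camera; the weights and exact terms here are illustrative assumptions, not the paper's function.

```python
# Toy capture-quality objective for PTZ camera assignment.
import numpy as np

def capture_score(dist, pan_tilt_dev, face_visible,
                  d_opt=8.0, w=(1.0, 0.5, 2.0)):
    """Higher is better; dist in meters, pan_tilt_dev in radians."""
    return (-w[0] * abs(dist - d_opt)      # prefer the optimal standoff
            - w[1] * pan_tilt_dev          # prefer small pan/tilt slews
            + w[2] * float(face_visible))  # strongly prefer visible faces

cameras = ["ptz0", "ptz1", "ptz2"]
scores = [capture_score(6.0, 0.3, True),
          capture_score(12.0, 0.1, False),
          capture_score(8.5, 0.8, True)]
print("assign to:", cameras[int(np.argmax(scores))])
```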
Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing
2017-11-15
Spatially-explicit data are essential for remote sensing of ecological phenomena. Recent innovations in mobile device platforms have led to an upsurge in on-site rapid detection. For instance, CMOS chips in smart phones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a single-lens reflex camera. Utilizing this lightweight module, as well as commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. Based on the experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.
High-Resolution Mars Camera Test Image of Moon (Infrared)
NASA Technical Reports Server (NTRS)
2005-01-01
This crescent view of Earth's Moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The mission's High Resolution Imaging Science Experiment camera took the image on Sept. 8, 2005, while at a distance of about 10 million kilometers (6 million miles) from the Moon. The dark feature on the right is Mare Crisium. From that distance, the Moon would appear as a star-like point of light to the unaided eye. The test verified the camera's focusing capability and provided an opportunity for calibration. The spacecraft's Context Camera and Optical Navigation Camera also performed as expected during the test. The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.
Micro-Imagers for Spaceborne Cell-Growth Experiments
NASA Technical Reports Server (NTRS)
Behar, Alberto; Matthews, Janet; SaintAnge, Beverly; Tanabe, Helen
2006-01-01
A document discusses selected aspects of a continuing effort to develop five micro-imagers for both still and video monitoring of cell cultures to be grown aboard the International Space Station. The approach taken in this effort is to modify and augment pre-existing electronic micro-cameras. Each such camera includes an image-detector integrated-circuit chip, signal-conditioning and image-compression circuitry, and connections for receiving power from, and exchanging data with, external electronic equipment. Four white and four multicolor light-emitting diodes are to be added to each camera for illuminating the specimens to be monitored. The lens used in the original version of each camera is to be replaced with a shorter-focal-length, more-compact singlet lens to make it possible to fit the camera into the limited space allocated to it. Initially, the lenses in the five cameras are to have different focal lengths: the focal lengths are to be 1, 1.5, 2, 2.5, and 3 cm. Once one of the focal lengths is determined to be the most nearly optimum, the remaining four cameras are to be fitted with lenses of that focal length.
Web Camera Use of Mothers and Fathers When Viewing Their Hospitalized Neonate.
Rhoads, Sarah J; Green, Angela; Gauss, C Heath; Mitchell, Anita; Pate, Barbara
2015-12-01
Mothers and fathers of neonates hospitalized in a neonatal intensive care unit (NICU) differ in their experiences related to NICU visitation. To describe the frequency and length of maternal and paternal viewing of their hospitalized neonates via a Web camera. A total of 219 mothers and 101 fathers, including 40 mother-father dyads, used the Web camera that allows 24/7 NICU viewing from September 1, 2010, to December 31, 2012. We conducted a review of the Web camera's Web site log-on records in this nonexperimental, descriptive study. Mothers and fathers had a significant difference in the mean number of log-ons to the Web camera system (P = .0293). Fathers virtually visited the NICU less often than mothers, but there was not a statistical difference between mothers and fathers in the mean total number of minutes viewing the neonate (P = .0834) or in the maximum number of minutes of viewing in one session (P = .6924). Patterns of visitation over time were not measured. Web camera technology could be a potential intervention to aid fathers in visiting their neonates. Both parents should be offered virtual visits using the Web camera and oriented regarding how to use it. These findings are important to consider when installing Web cameras in a NICU. Future research should continue to explore Web camera use in NICUs.
Microgravity combustion experiment using high altitude balloon.
NASA Astrophysics Data System (ADS)
Kan, Yuji
In JAXA, a microgravity experiment system using a high-altitude balloon was developed to provide a good microgravity environment with a short turn-around time. In this publication, I give an account of the microgravity experiment system and a combustion experiment that utilizes the system. The balloon operated vehicle (BOV), a microgravity experiment system, was developed from 2004 to 2009. Features of the BOV are: (1) the BOV has a double-capsule structure, with the outside capsule and inside capsule kept in a non-contact state by 3-axis drag-free control; (2) the payload is spherical in shape, about 300 mm in diameter; (3) it keeps a 10^-4 G level microgravity environment for about 30 seconds. However, the BOV's payload was small and could not mount a large experiment module. In this study, inheriting past results, we established a new experimental system called "iBOV" in order to accommodate a larger payload. Features of the iBOV are: (1) drag-free control is used only in the vertical direction; (2) the payload is cylindrical in shape, about 300 mm in diameter and 700 mm in height; (3) it keeps a 10^-3 to 10^-4 G level microgravity environment for about 30 seconds. We have an "Observation experiment of the flame propagation behavior of a droplet column" as an experiment using the iBOV. This experiment is the first theme selected for the technical demonstration of the iBOV. We are conducting a study to elucidate the flame propagation mechanism of a fuel droplet array placed at regular intervals, and we have conducted microgravity experiments using an ESA TEXUS rocket and a drop tower. For this microgravity combustion experiment using a high-altitude balloon, we use the Engineering Model (EM) from the TEXUS rocket experiment. The EM (this payload) consists of a combustion vessel, droplet supporter, droplet generator, fuel syringe, igniter, digital camera, and high-speed camera. The payload was improved from the EM as follows: 1. a control unit was added; 2. internal batteries for the control unit and the combustion vessel heater were added; 3. the observation cameras were updated. In this experiment, we heat the air in the combustion vessel to 500 K before microgravity. During microgravity, we (1) generate five droplets on the droplet supporter, (2) move the droplets into the combustion vessel, and (3) ignite an edge droplet of the array using the igniter. During the combustion experiment, the cameras take movies of the combustion phenomena. We plan to conduct this experiment in May 2014.
Nuclear Radiation Degradation Study on HD Camera Based on CMOS Image Sensor at Different Dose Rates.
Wang, Congzheng; Hu, Song; Gao, Chunming; Feng, Chang
2018-02-08
In this work, we irradiated a high-definition (HD) industrial camera based on a commercial-off-the-shelf (COTS) CMOS image sensor (CIS) with Cobalt-60 gamma-rays. All components of the camera under test were fabricated without radiation hardening, except for the lens. The irradiation experiments of the HD camera under biased conditions were carried out at 1.0, 10.0, 20.0, 50.0 and 100.0 Gy/h. During the experiment, we found that the tested camera showed remarkable degradation after irradiation, and the degradation differed with dose rate. With the increase of dose rate, the same target images become brighter. Under the same dose rate, the radiation effect in the bright area is lower than that in the dark area. Under different dose rates, the higher the dose rate, the worse the radiation effect in both bright and dark areas, and the standard deviations of the bright and dark areas become greater. Furthermore, through progressive degradation analysis of the captured images, the experimental results demonstrate that the attenuation of the signal-to-noise ratio (SNR) versus radiation time is not obvious at a given dose rate, and the degradation becomes more serious with increasing dose rate. Additionally, the decrease rate of the SNR at 20.0, 50.0 and 100.0 Gy/h is far greater than that at 1.0 and 10.0 Gy/h. Even so, we confirm that the HD industrial camera still works at 10.0 Gy/h during 8 h of measurements, with a moderate decrease of the SNR (5 dB). The work is valuable and can provide suggestions for camera users in the radiation field.
Nuclear Radiation Degradation Study on HD Camera Based on CMOS Image Sensor at Different Dose Rates
Wang, Congzheng; Hu, Song; Gao, Chunming; Feng, Chang
2018-01-01
In this work, we irradiated a high-definition (HD) industrial camera based on a commercial-off-the-shelf (COTS) CMOS image sensor (CIS) with Cobalt-60 gamma-rays. All components of the camera under test were fabricated without radiation hardening, except for the lens. The irradiation experiments of the HD camera under biased conditions were carried out at 1.0, 10.0, 20.0, 50.0 and 100.0 Gy/h. During the experiment, we found that the tested camera showed remarkable degradation after irradiation, and the degradation differed with dose rate. With the increase of dose rate, the same target images become brighter. Under the same dose rate, the radiation effect in the bright area is lower than that in the dark area. Under different dose rates, the higher the dose rate, the worse the radiation effect in both bright and dark areas, and the standard deviations of the bright and dark areas become greater. Furthermore, through progressive degradation analysis of the captured images, the experimental results demonstrate that the attenuation of the signal-to-noise ratio (SNR) versus radiation time is not obvious at a given dose rate, and the degradation becomes more serious with increasing dose rate. Additionally, the decrease rate of the SNR at 20.0, 50.0 and 100.0 Gy/h is far greater than that at 1.0 and 10.0 Gy/h. Even so, we confirm that the HD industrial camera still works at 10.0 Gy/h during 8 h of measurements, with a moderate decrease of the SNR (5 dB). The work is valuable and can provide suggestions for camera users in the radiation field. PMID:29419782
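The SNR figure used in these records can be reproduced in a few lines, assuming the common definition SNR = 20 log10(mean/std) over a nominally uniform image region; the paper does not spell out its formula, so this is an assumption.

```python
# Region SNR in dB for a nominally uniform image patch (assumed definition).
import numpy as np

def region_snr_db(img, y0, y1, x0, x1):
    patch = img[y0:y1, x0:x1].astype(np.float64)
    return 20.0 * np.log10(patch.mean() / patch.std())

rng = np.random.default_rng(5)
clean = np.full((480, 640), 120.0)
noisy = clean + rng.normal(scale=6.0, size=clean.shape)  # radiation-like noise
print("SNR (dB):", region_snr_db(noisy, 100, 200, 100, 200))
```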
LPT. Shield test facility test building interior (TAN646). Camera points ...
LPT. Shield test facility test building interior (TAN-646). Camera points down into interior of north pool. Equipment on wall is an electrical bus used for a post-1970 experiment. Personnel ladder at right. INEEL negative no. HD-40-9-1 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Bringing the Digital Camera to the Physics Lab
ERIC Educational Resources Information Center
Rossi, M.; Gratton, L. M.; Oss, S.
2013-01-01
We discuss how compressed images created by modern digital cameras can lead to even severe problems in the quantitative analysis of experiments based on such images. Difficulties result from the nonlinear treatment of lighting intensity values stored in compressed files. To overcome such troubles, one has to adopt noncompressed, native formats, as…
ERIC Educational Resources Information Center
Hamadneh, Iyad M.; Al-Masaeed, Aslan
2015-01-01
This study aimed at finding out mathematics teachers' attitudes towards the photo math application for solving mathematical problems using a mobile camera; it also aimed to identify significant differences in their attitudes according to their stage of teaching, educational qualifications, and teaching experience. The study used judgmental/purposive…
Web Camera Use in Developing Biology, Molecular Biology and Biochemistry Laboratories
ERIC Educational Resources Information Center
Ogren, Paul J.; Deibel, Michael; Kelly, Ian; Mulnix, Amy B.; Peck, Charlie
2004-01-01
The use of a network-ready color camera, primarily marketed as a security device, is described for experiments in developmental biology, genetics and biochemistry laboratories and in special student research projects. Acquiring, analyzing, and archiving images is very important in microscopy, electrophoresis and…
Potential for application of an acoustic camera in particle tracking velocimetry.
Wu, Fu-Chun; Shao, Yun-Chuan; Wang, Chi-Kuei; Liou, Jim
2008-11-01
We explored the potential and limitations for applying an acoustic camera as the imaging instrument of particle tracking velocimetry. The strength of the acoustic camera is its usability in low-visibility environments where conventional optical cameras are ineffective, while its applicability is limited by lower temporal and spatial resolutions. We conducted a series of experiments in which acoustic and optical cameras were used to simultaneously image the rotational motion of tracer particles, allowing for a comparison of the acoustic- and optical-based velocities. The results reveal that the greater fluctuations associated with the acoustic-based velocities are primarily attributed to the lower temporal resolution. The positive and negative biases induced by the lower spatial resolution are balanced, with the positive ones greater in magnitude but the negative ones greater in quantity. These biases reduce with the increase in the mean particle velocity and approach minimum as the mean velocity exceeds the threshold value that can be sensed by the acoustic camera.
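A hedged sketch of a basic particle-tracking step follows: match particle centroids between consecutive frames by nearest neighbour within a search radius and convert displacements to velocities. Acoustic-camera PTV as described above must additionally cope with the coarser temporal and spatial resolution that causes the biases the authors analyze.

```python
# Nearest-neighbour particle tracking between two frames.
import numpy as np
from scipy.spatial import cKDTree

def track(p0, p1, dt, max_disp):
    """p0 (N,2), p1 (M,2): centroids in consecutive frames -> velocities."""
    tree = cKDTree(p1)
    dist, idx = tree.query(p0, distance_upper_bound=max_disp)
    ok = np.isfinite(dist)             # unmatched particles get dist = inf
    return (p1[idx[ok]] - p0[ok]) / dt

p0 = np.array([[10.0, 10.0], [50.0, 40.0]])
p1 = np.array([[12.0, 11.0], [53.0, 40.5]])
print(track(p0, p1, dt=0.1, max_disp=5.0))   # pixels per second
```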
Lensless imaging for wide field of view
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Yagi, Yasushi
2015-02-01
It is desirable to engineer a small camera with a wide field of view (FOV) because of current developments in the field of wearable cameras and computing products, such as action cameras and Google Glass. However, typical approaches for achieving wide FOV, such as attaching a fisheye lens and convex mirrors, require a trade-off between optics size and the FOV. We propose camera optics that achieve a wide FOV, and are at the same time small and lightweight. The proposed optics are a completely lensless and catoptric design. They contain four mirrors, two for wide viewing, and two for focusing the image on the camera sensor. The proposed optics are simple and can be simply miniaturized, since we use only mirrors for the proposed optics and the optics are not susceptible to chromatic aberration. We have implemented the prototype optics of our lensless concept. We have attached the optics to commercial charge-coupled device/complementary metal oxide semiconductor cameras and conducted experiments to evaluate the feasibility of our proposed optics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu Feipeng; Shi Hongjian; Bai Pengxiang
In fringe projection, the CCD camera and the projector are often placed at equal height. In this paper, we will study the calibration of an unequal arrangement of the CCD camera and the projector. The principle of fringe projection with two-dimensional digital image correlation to acquire the profile of an object surface is described in detail. By formula derivation and experiment, the linear relationship between the out-of-plane calibration coefficient and the y coordinate is clearly found. To acquire the three-dimensional (3D) information of an object correctly, this paper presents an effective calibration method with linear least-squares fitting, which is very simple in principle and calibration. Experiments are implemented to validate the availability and reliability of the calibration method.
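The linear least-squares fit at the core of the calibration is simple to illustrate; the sketch below models the out-of-plane calibration coefficient as a linear function of the image y coordinate and recovers slope and intercept, with synthetic values standing in for measured data.

```python
# Linear least-squares fit of the out-of-plane coefficient vs. y coordinate.
import numpy as np

y = np.linspace(0, 1024, 20)                       # sampled image rows
c_true = 0.85 + 2.0e-4 * y                         # assumed linear relation
c_meas = c_true + np.random.default_rng(6).normal(scale=1e-3, size=y.size)

slope, intercept = np.polyfit(y, c_meas, 1)        # least-squares line
print(f"c(y) ~ {intercept:.4f} + {slope:.2e} * y")
```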
The spatial resolution of a rotating gamma camera tomographic facility.
Webb, S; Flower, M A; Ott, R J; Leach, M O; Inamdar, R
1983-12-01
An important feature determining the spatial resolution in transverse sections reconstructed by convolution and back-projection is the frequency filter corresponding to the convolution kernel. Equations have been derived giving the theoretical spatial resolution, for a perfect detector and noise-free data, using four filter functions. Experiments have shown that physical constraints will always limit the resolution that can be achieved with a given system. The experiments indicate that the region of the frequency spectrum between K_N/2 and K_N, where K_N is the Nyquist frequency, does not contribute significantly to resolution. In order to investigate the physical effect of these filter functions, the spatial resolution of reconstructed images obtained with a GE 400T rotating gamma camera has been measured. The results obtained serve as an aid to choosing appropriate reconstruction filters for use with a rotating gamma camera system.
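The filter comparison can be made concrete with a short sketch: a ramp (Ram-Lak) filter truncated at the Nyquist frequency K_N versus one truncated at K_N/2, applied to a single smooth projection profile. The filter shapes are generic textbook choices, not necessarily the four functions used in the paper.

```python
# Ramp filtering of one projection row with different frequency cutoffs.
import numpy as np

def filtered_projection(proj, cutoff_frac=1.0):
    """Apply a ramp filter with cutoff at cutoff_frac * K_N (Nyquist)."""
    n = proj.size
    freqs = np.fft.fftfreq(n)              # cycles/sample; K_N = 0.5
    ramp = np.abs(freqs)
    ramp[np.abs(freqs) > 0.5 * cutoff_frac] = 0.0
    return np.real(np.fft.ifft(np.fft.fft(proj) * ramp))

proj = np.exp(-0.5 * ((np.arange(128) - 64) / 6.0) ** 2)  # toy profile
full = filtered_projection(proj, cutoff_frac=1.0)
half = filtered_projection(proj, cutoff_frac=0.5)
# for a smooth profile the K_N/2..K_N band contributes little:
print(np.abs(full - half).max())
```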
Geometric and Optic Characterization of a Hemispherical Dome Port for Underwater Photogrammetry
Menna, Fabio; Nocerino, Erica; Fassi, Francesco; Remondino, Fabio
2016-01-01
The popularity of automatic photogrammetric techniques has promoted many experiments in underwater scenarios leading to quite impressive visual results, even by non-experts. Despite these achievements, a deep understanding of camera and lens behaviors as well as optical phenomena involved in underwater operations is fundamental to better plan field campaigns and anticipate the achievable results. The paper presents a geometric investigation of a consumer grade underwater camera housing, manufactured by NiMAR and equipped with a 7′′ dome port. After a review of flat and dome ports, the work analyzes, using simulations and real experiments, the main optical phenomena involved when operating a camera underwater. Specific aspects which deal with photogrammetric acquisitions are considered with some tests in laboratory and in a swimming pool. Results and considerations are shown and commented. PMID:26729133
NASA Astrophysics Data System (ADS)
Maddison, R. J.
1985-02-01
The investigation of certain areas of nuclear reactor safety involves the study of high speed phenomena with timescales ranging from microseconds to a few hundreds of milliseconds. Examples which have been extensively studied at Winfrith are firstly, the thermal interaction of molten fuel and reactor coolant which can generate high pressures on the 100 msec timescale, and which involves phenomena such as vapour film collapse which takes place on the microsecond timescale. Secondly, there is the response of reactor structures to such pressures, and finally there is the response of structural materials such as metals and concrete to the impulsive loading arising from the impact of heavy, high velocity missiles. A wide range of experimental techniques is used in these studies, many of which have been developed specially for this type of work which ranges from small laboratory scale to large field scale experiments. There are two important features which characterise many of these experiments: (i) a long period of meticulous preparation of very heavily instrumented, short-duration experiments; and (ii) the destructive nature of the experiments. Various forms of high speed photography are included in the inventory of experimental techniques. These include the use of single and double exposure, short duration, spark photography; the use of an Image Convertor Camera (IMACON 790); and a number of rotating prism cine cameras. High speed photography is used both in a primary experimental role in the studies, and in a supportive role for other instrumentation. Because of the sometimes violent nature of these experiments, cameras are often heavily protected and operated remotely; lighting systems are sometimes destroyed. This has led to the development of unconventional techniques for camera operation and subject lighting. This paper will describe some of the experiments and the way in which high speed photography has been applied as an essential experimental tool. It will be illustrated with cine film taken during the experiments.
Vanlaar, Ward; Robertson, Robyn; Marcoux, Kyla
2014-01-01
The objective of this study was to evaluate the impact of Winnipeg's photo enforcement safety program on speeding, i.e., "speed on green", and red-light running behavior at intersections as well as on crashes resulting from these behaviors. ARIMA time series analyses regarding crashes related to red-light running (right-angle crashes and rear-end crashes) and crashes related to speeding (injury crashes and property-damage-only crashes) occurring at intersections were conducted using monthly crash counts from 1994 to 2008. A quasi-experimental intersection camera experiment was also conducted using roadside data on speeding and red-light running behavior at intersections. These data were analyzed using logistic regression analysis. The time series analyses showed that for crashes related to red-light running, there had been a 46% decrease in right-angle crashes at camera intersections, but that there had also been an initial 42% increase in rear-end crashes. For crashes related to speeding, analyses revealed that the installation of cameras was not associated with increases or decreases in crashes. Results of the intersection camera experiment show that there were significantly fewer red-light running violations at intersections after installation of cameras and that photo enforcement had a protective effect on speeding behavior at intersections. However, the data also suggest photo enforcement may be less effective in preventing serious speeding violations at intersections. Overall, Winnipeg's photo enforcement safety program had a positive net effect on traffic safety. Results from both the ARIMA time series and the quasi-experimental design corroborate one another. However, the protective effect of photo enforcement is not equally pronounced across different conditions, so further monitoring is required to improve the delivery of this measure. Results from this study as well as its limitations are discussed.
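A hedged sketch of the kind of intervention time-series model described above, using statsmodels' SARIMAX with a step dummy for camera installation. The file name, column name, installation date, and ARIMA orders below are illustrative assumptions, not the study's actual specification.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly right-angle crash counts, 1994-2008; the file,
# column, installation date, and model orders are illustrative only.
crashes = pd.read_csv("crashes.csv", index_col="month", parse_dates=True)
cameras_on = (crashes.index >= "2003-01-01").astype(float)  # step dummy

model = sm.tsa.SARIMAX(crashes["right_angle"], exog=cameras_on,
                       order=(1, 0, 1), seasonal_order=(1, 0, 0, 12))
fit = model.fit(disp=False)
print(fit.summary())  # the exog coefficient estimates the level shift
```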
NASA Astrophysics Data System (ADS)
Ogawa, Kazunori; Shirai, Kei; Sawada, Hirotaka; Arakawa, Masahiko; Honda, Rie; Wada, Koji; Ishibashi, Ko; Iijima, Yu-ichi; Sakatani, Naoya; Nakazawa, Satoru; Hayakawa, Hajime
2017-07-01
An artificial impact experiment is scheduled for 2018-2019 in which an impactor will collide with asteroid 162173 Ryugu (1999 JU3) during the asteroid rendezvous phase of the Hayabusa2 spacecraft. The small carry-on impactor (SCI) will shoot a 2-kg projectile at 2 km/s to create a crater 1-10 m in diameter with an expected subsequent ejecta curtain of a 100-m scale on an ideal sandy surface. A miniaturized deployable camera (DCAM3) unit will separate from the spacecraft at about 1 km from the impact and simultaneously conduct optical observations of the experiment. We designed and developed a camera system (DCAM3-D) in the DCAM3, specialized for scientific observations of impact phenomena, in order to clarify the subsurface structure, construct theories of impact applicable in a microgravity environment, and identify the impact point on the asteroid. The DCAM3-D system consists of a miniaturized camera with wide-angle and high-focusing performance, high-speed radio communication devices, and control units with large data storage on both the DCAM3 unit and the spacecraft. These components were successfully developed under severe constraints of size, mass and power, and the whole DCAM3-D system has passed all tests verifying functions, performance, and environmental tolerance. Results indicated sufficient potential to conduct the scientific observations during the SCI impact experiment. An operation plan was carefully considered along with the configuration and a time schedule of the impact experiment, and pre-programmed into the control unit before the launch. In this paper, we describe details of the system design concept, specifications, and the operating plan of the DCAM3-D system, focusing on the feasibility of scientific observations.
NASA Astrophysics Data System (ADS)
Mori, Koji; Nishioka, Yusuke; Ohura, Satoshi; Koura, Yoshiaki; Yamauchi, Makoto; Nakajima, Hiroshi; Ueda, Shutaro; Kan, Hiroaki; Anabuki, Naohisa; Nagino, Ryo; Hayashida, Kiyoshi; Tsunemi, Hiroshi; Kohmura, Takayoshi; Ikeda, Shoma; Murakami, Hiroshi; Ozaki, Masanobu; Dotani, Tadayasu; Maeda, Yukie; Sagara, Kenshi
2013-12-01
We report on a proton radiation damage experiment on a P-channel CCD newly developed for an X-ray CCD camera onboard the ASTRO-H satellite. The device was exposed to up to 10^9 protons cm^-2 at 6.7 MeV. The charge transfer inefficiency (CTI) was measured as a function of radiation dose. In comparison with the CTI measured over 6 years in the CCD camera onboard the Suzaku satellite, we confirmed that the new type of P-channel CCD is radiation tolerant enough for space use. We also confirmed that a charge-injection technique and lowering the operating temperature work efficiently to reduce the CTI for our device. A comparison with other P-channel CCD experiments is also discussed.
Effective Replays and Summarization of Virtual Experiences
Ponto, Kevin; Kohlmann, Joe; Gleicher, Michael
2012-01-01
Direct replays of the experience of a user in a virtual environment are difficult for others to watch due to unnatural camera motions. We present methods for replaying and summarizing these egocentric experiences that effectively communicate the user's observations while reducing unwanted camera movements. Our approach summarizes the viewpoint path as a concise sequence of viewpoints that cover the same parts of the scene. The core of our approach is a novel content-dependent metric that can be used to identify similarities between viewpoints. This enables viewpoints to be grouped by similar contextual view information and provides a means to generate novel viewpoints that can encapsulate a series of views. These resulting encapsulated viewpoints are used to synthesize new camera paths that convey the content of the original viewer's experience. Projecting the initial movement of the user back onto the scene can be used to convey the details of their observations, and the extracted viewpoints can serve as bookmarks for control or analysis. Finally we present performance analysis along with two forms of validation to test whether the extracted viewpoints are representative of the viewer's original observations and to test for the overall effectiveness of the presented replay methods. PMID:22402688
The HRSC on Mars Express: Mert Davies' Involvement in a Novel Planetary Cartography Experiment
NASA Astrophysics Data System (ADS)
Oberst, J.; Waehlisch, M.; Giese, B.; Scholten, F.; Hoffmann, H.; Jaumann, R.; Neukum, G.
2002-12-01
Mert Davies was a team member of the HRSC (High Resolution Stereo Camera) imaging experiment (PI: Gerhard Neukum) on ESA's Mars Express mission. This pushbroom camera is equipped with 9 forward- and backward-looking CCD lines, 5184 samples each, mounted in parallel, perpendicular to the spacecraft velocity vector. Flight image data with resolutions of up to 10 m/pixel (from an altitude of 250 km) will be acquired line by line as the spacecraft moves. This acquisition strategy will result in 9 separate, almost completely overlapping image strips, each of them typically having more than 27,000 image lines. [HRSC is also equipped with a super-resolution channel for imaging of selected targets at up to 2.3 m/pixel]. The combined operation of the nadir and off-nadir CCD lines (+18.9°, 0°, -18.9°) gives HRSC a triple-stereo capability for precision mapping of surface topography and for modelling of spacecraft orbit and camera pointing errors. The goals of the camera are to obtain accurate control point networks, Digital Elevation Models (DEMs) in Mars-fixed coordinates, and color orthoimages at global (100% of the surface will be covered with resolutions better than 30 m/pixel) and local scales. With his long experience in all aspects of planetary geodesy and cartography, Mert Davies was involved in the preparations of this novel Mars imaging experiment, which included: (a) development of a ground data system for the analysis of triple-stereo images, (b) camera testing during airborne imaging campaigns, (c) re-analysis of the Mars control point network and generation of global topographic orthoimage maps on the basis of MOC images and MOLA data, (d) definition of the quadrangle scheme for a new topographic image map series at 1:200K, (e) simulation of synthetic HRSC imaging sequences and their photogrammetric analysis. Mars Express is scheduled for launch in May of 2003. We miss Mert very much!
Daytime Aspect Camera for Balloon Altitudes
NASA Technical Reports Server (NTRS)
Dietz, Kurt L.; Ramsey, Brian D.; Alexander, Cheryl D.; Apple, Jeff A.; Ghosh, Kajal K.; Swift, Wesley R.
2002-01-01
We have designed, built, and flight-tested a new star camera for daytime guiding of pointed balloon-borne experiments at altitudes around 40 km. The camera and lens are commercially available, off-the-shelf components, but require a custom-built baffle to reduce stray light, especially near the sunlit limb of the balloon. This new camera, which operates in the 600- to 1000-nm region of the spectrum, successfully provides daytime aspect information of approx. 10 arcsec resolution for two distinct star fields near the galactic plane. The detected scattered-light backgrounds show good agreement with the Air Force MODTRAN models used to design the camera, but the daytime stellar magnitude limit was lower than expected due to longitudinal chromatic aberration in the lens. Replacing the commercial lens with a custom-built lens should allow the system to track stars in any arbitrary area of the sky during the daytime.
A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.
Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi
2016-08-30
This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during a landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flight demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for UAV automatic accurate landing in Global Positioning System (GPS)-denied environments.
An electrically tunable plenoptic camera using a liquid crystal microlens array.
Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng
2015-05-01
Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF.
An electrically tunable plenoptic camera using a liquid crystal microlens array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Yu; School of Automation, Huazhong University of Science and Technology, Wuhan 430074; Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074
2015-05-15
Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF.
Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.
Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua
2017-05-01
In this paper, we overcome the limited dynamic range of the conventional digital camera and propose a method of realizing high dynamic range imaging (HDRI) with a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, enabling the camera pixels to always receive a reasonable exposure through DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate differing light intensities and recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement HDRI on different objects.
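The per-pixel feedback idea can be sketched as follows; this is our own illustrative loop, not the authors' algorithm. Each DMD mirror's effective transmittance is raised for underexposed pixels and lowered for saturated ones, and the HDR radiance estimate divides each reading by its per-pixel exposure. The thresholds and gain are assumed values.

```python
import numpy as np

def update_mask(reading, mask, low=0.1, high=0.9, gain=2.0):
    """One step of illustrative per-pixel exposure feedback.

    reading: normalized sensor image in [0, 1] captured through `mask`
    mask:    current DMD transmittance per pixel in (0, 1]
    """
    mask = mask.copy()
    mask[reading > high] /= gain                          # saturated: attenuate
    dim = reading < low
    mask[dim] = np.minimum(mask[dim] * gain, 1.0)         # dark: open up
    return mask

def recover_radiance(reading, mask):
    # HDR estimate: divide out the per-pixel exposure modulation.
    return reading / mask
```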
An electrically tunable plenoptic camera using a liquid crystal microlens array
NASA Astrophysics Data System (ADS)
Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng
2015-05-01
Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF.
Photogrammetry of a 5m Inflatable Space Antenna With Consumer Digital Cameras
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Giersch, Louis R.; Quagliaroli, Jessica M.
2000-01-01
This paper discusses photogrammetric measurements of a 5m-diameter inflatable space antenna using four Kodak DC290 (2.1 megapixel) digital cameras. The study had two objectives: 1) Determine the photogrammetric measurement precision obtained using multiple consumer-grade digital cameras and 2) Gain experience with new commercial photogrammetry software packages, specifically PhotoModeler Pro from Eos Systems, Inc. The paper covers the eight steps required using this hardware/software combination. The baseline data set contained four images of the structure taken from various viewing directions. Each image came from a separate camera. This approach simulated the situation of using multiple time-synchronized cameras, which will be required in future tests of vibrating or deploying ultra-lightweight space structures. With four images, the average measurement precision for more than 500 points on the antenna surface was less than 0.020 inches in-plane and approximately 0.050 inches out-of-plane.
Wilderness experience in Rocky Mountain National Park 2002: Report to RMNP
Schuster, Elke; Johnson, S. Shea; Taylor, Jonathan G.
2004-01-01
The social science technique of Visitor Employed Photography [VEP] was used to obtain information from visitors about wilderness experiences. Visitors were selected at random from Park-designated wilderness trails, in proportion to their use, and asked to participate in the survey. Respondents were given single-use, 10-exposure cameras and photo-log diaries to record experiences. A total of 293 cameras were distributed, with a response rate of 87%. Following the development of the photos, a copy of the photos, two pertinent pages from the photo-log, and a follow-up survey were mailed to respondents. Fifty-six percent of the follow-up surveys were returned. Findings from the two surveys were analyzed and compared.
Study of plant phototropic responses to different LEDs illumination in microgravity
NASA Astrophysics Data System (ADS)
Zyablova, Natalya; Berkovich, Yuliy A.; Skripnikov, Alexander; Nikitin, Vladimir
2012-07-01
The purpose of the experiment planned for the Russian BION-M No. 1 biosatellite (2012) is to research the phototropic responses of Physcomitrella patens (Hedw.) B.S.G. to different light stimuli in microgravity. The moss was chosen as a small-size higher plant. The experimental design involves five lightproof culture flasks with moss gametophores fixed inside a cylindrical container (diameter 120 mm; height 240 mm). The plants in each flask are illuminated laterally by one of the following LEDs: white, blue (475 nm), red (625 nm), far red (730 nm), or infrared (950 nm). The gametophore growth and bending are captured periodically by means of five analogue video cameras and a recorder. The programmable command module controls the power supply of each camera and each light source, the commutation of the cameras, and the functioning of the video recorder. Every 20 minutes the recorder sequentially connects to one of the cameras. This results in a clip containing 5 sets of frames in a row. After landing, time-lapse films are automatically created. As a result we will have five time-lapse films covering transformations in each of the five culture flasks. Ground experiments demonstrated that white light induced stronger gametophore phototropic bending than red and blue stimuli. The comparison of time-lapse recordings in the experiments will provide useful information to optimize lighting assemblies for space plant growth facilities.
Self-Willed Learning: Experiments in Wild Pedagogy
ERIC Educational Resources Information Center
Jickling, Bob
2015-01-01
This paper is composed of written text and photographs of wild experiences that relive a series of ontological experiments. The text represents reflections on these experiences. The photographs, artistic expressions of the same experiences, have been made with a homemade pinhole camera--without a lens and viewfinder--thus demanding special…
NASA Technical Reports Server (NTRS)
Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.;
2012-01-01
We present the concept for the GISMO-2 bolometer camera, which we build for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 x 40 TES-based Backshort Under Grid (BUG) bolometer array, the 2 mm channel operates with a 16 x 16 BUG array. The camera utilizes almost the entire full field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.
NASA Astrophysics Data System (ADS)
Wang, Xiaoyong; Guo, Chongling; Hu, Yongli; He, Hongyan
2017-11-01
The primary and secondary mirrors of an on-axis three-mirror anastigmatic (TMA) space camera are connected and supported by its front mirror-body structure, which affects both the imaging performance and the stability of the camera. In this paper, a carbon fiber reinforced plastic (CFRP) thin-walled cylinder and titanium alloy connecting rods have been used for the front mirror-body opto-mechanical structure of a long-focus on-axis TMA space camera optical system. The front mirror-body component structure has then been optimized by finite element analysis (FEA). The performance of the front mirror-body structure has been tested by mechanical and vacuum experiments in order to verify the validity of the structural engineering design.
Note: Simple hysteresis parameter inspector for camera module with liquid lens
NASA Astrophysics Data System (ADS)
Chen, Po-Jui; Liao, Tai-Shan; Hwang, Chi-Hung
2010-05-01
A method to inspect the hysteresis parameter is presented in this article. The hysteresis of a whole camera module with a liquid lens can be measured, rather than merely that of a single lens. Because the variation in focal length influences image quality, we propose utilizing the sharpness of images captured by the camera module for hysteresis evaluation. Experiments reveal that the profile of the sharpness hysteresis corresponds to the contact-angle characteristic of the liquid lens. Therefore, it can be inferred that the hysteresis of the camera module is induced by the contact angle of the liquid lens. An inspection process takes only 20 s to complete. Thus, compared with other instruments, this inspection method is more suitable for integration into mass production lines for online quality assurance.
HST High Gain Antennae photographed by Electronic Still Camera
1993-12-04
S61-E-021 (7 Dec 1993) --- This close-up view of one of two High Gain Antennae (HGA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC) and downlinked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members have been working in alternating pairs outside Endeavour's shirt-sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Hubble Space Telescope photographed by Electronic Still Camera
1993-12-04
S61-E-001 (4 Dec 1993) --- This medium close-up view of the top portion of the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC) and downlinked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
HST Solar Arrays photographed by Electronic Still Camera
1993-12-07
S61-E-020 (7 Dec 1993) --- This close-up view of one of two Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC) and downlinked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993, in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Influence of camera parameters on the quality of mobile 3D capture
NASA Astrophysics Data System (ADS)
Georgiev, Mihail; Boev, Atanas; Gotchev, Atanas; Hannuksela, Miska
2010-01-01
We investigate the effect of camera de-calibration on the quality of depth estimation. A dense depth map is a format particularly suitable for mobile 3D capture (scalable and screen independent). However, in a real-world scenario cameras might move (vibrations, temperature-induced bending) from their designated positions. For the experiments, we created a test framework, described in the paper. We investigate how mechanical changes affect four different stereo-matching algorithms. We also assess how different geometric corrections (none, motion-compensation-like, full rectification) affect the estimation quality (how much offset can still be compensated with a "crop" over a larger CCD). Finally, we show how the estimated camera pose change (E) relates to stereo-matching quality, which can be used as a "rectification quality" measure.
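As a hedged illustration of the kind of evaluation involved, the sketch below computes a disparity map with OpenCV's semi-global block matcher and a crude proxy for "rectification quality" (the fraction of invalid matches, which rises when the pair is poorly rectified). File names and matcher parameters are assumptions, not the paper's setup.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                blockSize=7)
# compute() returns fixed-point disparities scaled by 16
disp = matcher.compute(left, right).astype(np.float32) / 16.0

# Crude proxy for rectification quality: fraction of failed matches
# (negative disparity); de-calibrated pairs score noticeably worse.
invalid = float(np.mean(disp < 0))
print(f"invalid-match fraction: {invalid:.3f}")
```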
Using a High-Speed Camera to Measure the Speed of Sound
ERIC Educational Resources Information Center
Hack, William Nathan; Baird, William H.
2012-01-01
The speed of sound is a physical property that can be measured easily in the lab. However, finding an inexpensive and intuitive way for students to determine this speed has been more involved. The introduction of affordable consumer-grade high-speed cameras (such as the Exilim EX-FC100) makes conceptually simple experiments feasible. Since the…
Still-Video Photography: Tomorrow's Electronic Cameras in the Hands of Today's Photojournalists.
ERIC Educational Resources Information Center
Foss, Kurt; Kahan, Robert S.
This paper examines the still-video camera and its potential impact by looking at recent experiments and by gathering information from some of the few people knowledgeable about the new technology. The paper briefly traces the evolution of the tools and processes of still-video photography, examining how photographers and their work have been…
Bringing the Digital Camera to the Physics Lab
NASA Astrophysics Data System (ADS)
Rossi, M.; Gratton, L. M.; Oss, S.
2013-03-01
We discuss how compressed images created by modern digital cameras can lead to severe problems in the quantitative analysis of experiments based on such images. The difficulties result from the nonlinear treatment of light intensity values stored in compressed files. To overcome such troubles, one has to adopt uncompressed, native formats, as we examine in this work.
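A minimal sketch of the linearization step such an analysis requires, assuming the standard sRGB transfer curve; real camera JPEG pipelines typically add further tone curves, which is exactly the paper's point, so this is only a first-order correction.

```python
import numpy as np

def srgb_to_linear(v):
    """Invert the standard sRGB transfer curve for values v in [0, 1].

    Camera JPEG pipelines usually apply tone curves beyond plain sRGB,
    so this is only approximate; raw/native formats avoid the issue.
    """
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)
```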
2D Measurements of the Balmer Series in Proto-MPEX using a Fast Visible Camera Setup
NASA Astrophysics Data System (ADS)
Lindquist, Elizabeth G.; Biewer, Theodore M.; Ray, Holly B.
2017-10-01
The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device with densities up to 10^20 m^-3 and temperatures up to 20 eV. Broadband spectral measurements show the visible emission spectra are solely due to the Balmer lines of deuterium. Monochromatic and RGB color Sanstreak SC1 Edgertronic fast visible cameras capture high-speed video of plasmas in Proto-MPEX. The color camera is equipped with a 450 nm long-pass filter and an internal Bayer filter to view the Dα line at 656 nm on the red channel and the Dβ line at 486 nm on the blue channel. The monochromatic camera has a 434 nm narrow bandpass filter to view the Dγ intensity. In the setup, a 50/50 beam splitter is used so both cameras image the same region of the plasma discharge. Camera images were aligned to each other by viewing a grid, ensuring 1-pixel registration between the two cameras. A uniform-intensity calibrated white light source was used to perform a pixel-to-pixel relative and an absolute intensity calibration for both cameras. Python scripts combined the dual-camera data, rendering the Dα, Dβ, and Dγ intensity ratios. Observations from Proto-MPEX discharges will be presented. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
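The ratio computation attributed above to Python scripts might look roughly like the following sketch; the function and calibration-array names are ours, and the channel-to-line mapping simply restates the filter arrangement described in the abstract.

```python
import numpy as np

def line_ratios(rgb_frame, mono_frame, cal_rgb, cal_mono):
    """Compute D_alpha/D_beta and D_beta/D_gamma ratio maps (illustrative).

    rgb_frame:  HxWx3 color-camera frame; red channel ~ D_alpha (656 nm),
                blue channel ~ D_beta (486 nm) behind the 450 nm long-pass
    mono_frame: HxW monochrome frame behind the 434 nm D_gamma filter
    cal_*:      pixel-to-pixel relative/absolute intensity calibrations
    """
    d_alpha = rgb_frame[..., 0] * cal_rgb[..., 0]
    d_beta = rgb_frame[..., 2] * cal_rgb[..., 2]
    d_gamma = mono_frame * cal_mono
    eps = 1e-12                      # avoid division by zero off-plasma
    return d_alpha / (d_beta + eps), d_beta / (d_gamma + eps)
```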
3-D Flow Visualization with a Light-field Camera
NASA Astrophysics Data System (ADS)
Thurow, B.
2012-12-01
Light-field cameras have received attention recently due to their ability to acquire photographs that can be computationally refocused after they have been acquired. In this work, we describe the development of a light-field camera system for 3D visualization of turbulent flows. The camera developed in our lab, also known as a plenoptic camera, uses an array of microlenses mounted next to an image sensor to resolve both the position and angle of light rays incident upon the camera. For flow visualization, the flow field is seeded with small particles that follow the fluid's motion and are imaged using the camera and a pulsed light source. The tomographic MART algorithm is then applied to the light-field data in order to reconstruct a 3D volume of the instantaneous particle field. 3D, 3C velocity vectors are then determined from a pair of 3D particle fields using conventional cross-correlation algorithms. As an illustration of the concept, 3D/3C velocity measurements of a turbulent boundary layer produced on the wall of a conventional wind tunnel are presented. Future experiments are planned to use the camera to study the influence of wall permeability on the 3-D structure of the turbulent boundary layer. [Figure captions: (1) Schematic illustrating the concept of a plenoptic camera, where each pixel represents both the position and angle of light rays entering the camera; this information can be used to computationally refocus an image after it has been acquired. (2) Instantaneous 3D velocity field of a turbulent boundary layer determined using light-field data captured by a plenoptic camera.]
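The core of the conventional cross-correlation step mentioned above can be sketched as an FFT-based correlation between two interrogation windows (2D) or volumes (3D); this is a generic PIV illustration under our own naming, not the authors' code.

```python
import numpy as np

def displacement(window_a, window_b):
    """Mean particle displacement of window_b relative to window_a,
    from the peak of their FFT cross-correlation (works in 2D or 3D)."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    corr = np.fft.ifftn(np.conj(np.fft.fftn(a)) * np.fft.fftn(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Convert circular FFT indices to signed shifts about zero.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```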
Menéndez, Cammie Chaumont; Amandus, Harlan; Damadi, Parisa; Wu, Nan; Konda, Srinivas; Hendricks, Scott
2014-05-01
Driving a taxicab remains one of the most dangerous occupations in the United States, with one of the highest homicide rates. Although safety equipment designed to reduce robberies exists, it is not clear what effect it has on reducing taxicab driver homicides. Taxicab driver homicide crime reports for 1996 through 2010 were collected from 20 of the largest cities (population >200,000) in the United States: 7 cities with cameras installed in cabs, 6 cities with partitions installed, and 7 cities with neither cameras nor partitions. Poisson regression modeling using generalized estimating equations provided city taxicab driver homicide rates while accounting for serial correlation and clustering of data within cities. Two separate models were constructed to compare (1) cities with cameras installed in taxicabs versus cities with neither cameras nor partitions and (2) cities with partitions installed in taxicabs versus cities with neither cameras nor partitions. Cities with cameras installed in cabs experienced a significant reduction in homicides after cameras were installed (adjRR = 0.11, CL 0.06-0.24) and compared to cities with neither cameras nor partitions (adjRR = 0.32, CL 0.15-0.67). Cities with partitions installed in taxicabs experienced a reduction in homicides (adjRR = 0.78, CL 0.41-1.47) compared to cities with neither cameras nor partitions, but it was not statistically significant. The findings suggest cameras installed in taxicabs are highly effective in reducing homicides among taxicab drivers. Although not statistically significant, the findings suggest partitions installed in taxicabs may be effective.
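A hedged sketch of a Poisson GEE of the type described, using the statsmodels formula interface for one of the two comparisons (camera cities versus cities with neither cameras nor partitions). The data file, column names, and exposure offset are illustrative assumptions, not the study's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical city-month panel: homicide counts, camera indicator,
# and an exposure measure -- all names are illustrative.
df = pd.read_csv("taxicab_homicides.csv")

model = smf.gee("homicides ~ cameras", groups="city", data=df,
                family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable(),
                offset=np.log(df["driver_years"]))
result = model.fit()
print(np.exp(result.params["cameras"]))  # adjusted rate ratio (adjRR)
```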
Temperature resolution enhancing of commercially available IR camera using computer processing
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2015-09-01
As is well known, the passive THz camera is a very promising tool for security applications. It allows seeing a concealed object without contact with a person, and the camera is not dangerous to the person. Using such a THz camera, one can see a temperature difference on the human skin if this difference is caused by different temperatures inside the body. Because the passive THz camera is very expensive, we try to use an IR camera for observing this phenomenon. We use a computer code that is available for the treatment of images captured by a commercially available IR camera manufactured by Flir Corp. Using this code, we clearly demonstrate the change in human body skin temperature induced by drinking water. Nevertheless, in some cases additional computer processing is necessary to show the change in human body temperature clearly. One such approach has been developed by us. We believe that we increase the temperature resolution of the camera tenfold or more. The experiments carried out can be used for solving counter-terrorism and medical problems. The phenomenon shown is very important for the detection of forbidden objects and substances concealed inside the human body using non-destructive control without X-ray application. Earlier, we demonstrated this possibility using THz radiation.
NASA Astrophysics Data System (ADS)
Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling
2018-06-01
Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or narrow overlapping FOVs in many applications, which poses a huge challenge to global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras’ coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution for the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
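A minimal sketch of the optimization step, assuming a calibrated camera-2 projection with known intrinsics: the cam1-to-cam2 rotation (as a rotation vector) and translation are refined by Levenberg-Marquardt on the reprojection residuals via SciPy. All function and variable names, and the pinhole intrinsics, are our assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(pts, fx=1000.0, fy=1000.0, cx=640.0, cy=512.0):
    """Simple pinhole projection for camera 2 (intrinsics assumed known)."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    return np.stack([fx * x / z + cx, fy * y / z + cy], axis=1)

def residuals(params, pts_cam1, uv_obs):
    """Reprojection residuals of target feature points under the
    cam1 -> cam2 transform (rotation vector rvec, translation t)."""
    rvec, t = params[:3], params[3:]
    pts_cam2 = Rotation.from_rotvec(rvec).apply(pts_cam1) + t
    return (project(pts_cam2) - uv_obs).ravel()

# Given Nx3 points in camera-1 coordinates and Nx2 observations:
# sol = least_squares(residuals, np.zeros(6), method="lm",
#                     args=(pts_cam1, uv_obs))   # Levenberg-Marquardt
```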
Graafland, Maurits; Bok, Kiki; Schreuder, Henk W R; Schijven, Marlies P
2014-06-01
Untrained laparoscopic camera assistants in minimally invasive surgery (MIS) may cause suboptimal view of the operating field, thereby increasing risk for errors. Camera navigation is often performed by the least experienced member of the operating team, such as inexperienced surgical residents, operating room nurses, and medical students. The operating room nurses and medical students are currently not included as key user groups in structured laparoscopic training programs. A new virtual reality laparoscopic camera navigation (LCN) module was specifically developed for these key user groups. This multicenter prospective cohort study assesses face validity and construct validity of the LCN module on the Simendo virtual reality simulator. Face validity was assessed through a questionnaire on resemblance to reality and perceived usability of the instrument among experts and trainees. Construct validity was assessed by comparing scores of groups with different levels of experience on outcome parameters of speed and movement proficiency. The results obtained show uniform and positive evaluation of the LCN module among expert users and trainees, signifying face validity. Experts and intermediate experience groups performed significantly better in task time and camera stability during three repetitions, compared to the less experienced user groups (P < .007). Comparison of learning curves showed significant improvement of proficiency in time and camera stability for all groups during three repetitions (P < .007). The results of this study show face validity and construct validity of the LCN module. The module is suitable for use in training curricula for operating room nurses and novice surgical trainees, aimed at improving team performance in minimally invasive surgery.
García-Salgado, Gonzalo; Rebollo, Salvador; Pérez-Camacho, Lorenzo; Martínez-Hesterkamp, Sara; Navarro, Alberto; Fernández-Pereira, José-Manuel
2015-01-01
Diet studies present numerous methodological challenges. We evaluated the usefulness of commercially available trail-cameras for analyzing the diet of Northern Goshawks (Accipiter gentilis) as a model for nesting raptors during the period 2007–2011. We compared diet estimates obtained by direct camera monitoring of 80 nests with four indirect analyses of prey remains collected from the nests and surroundings (pellets, bones, feather-and-hair remains, and feather-hair-and-bone remains combined). In addition, we evaluated the performance of the trail-cameras and whether camera monitoring affected Goshawk behavior. The sensitivity of each diet-analysis method depended on prey size and taxonomic group, with no method providing unbiased estimates for all prey sizes and types. The cameras registered the greatest number of prey items and were probably the least biased method for estimating diet composition. Nevertheless, this direct method yielded the largest proportion of prey unidentified to species level, and it underestimated small prey. Our trail-camera system was able to operate without maintenance for longer periods than what has been reported in previous studies with other types of cameras. Initially, Goshawks showed distrust toward the cameras, but they usually became habituated to their presence within 1–2 days. The habituation period was shorter for breeding pairs that had previous experience with cameras. Using trail-cameras to monitor prey provisioning to nests is an effective tool for studying the diet of nesting raptors. However, the technique is limited by technical failures and difficulties in identifying certain prey types. Our study also shows that cameras can alter adult Goshawk behavior, an aspect that must be controlled to minimize potential negative impacts. PMID:25992956
García-Salgado, Gonzalo; Rebollo, Salvador; Pérez-Camacho, Lorenzo; Martínez-Hesterkamp, Sara; Navarro, Alberto; Fernández-Pereira, José-Manuel
2015-01-01
Diet studies present numerous methodological challenges. We evaluated the usefulness of commercially available trail-cameras for analyzing the diet of Northern Goshawks (Accipiter gentilis) as a model for nesting raptors during the period 2007-2011. We compared diet estimates obtained by direct camera monitoring of 80 nests with four indirect analyses of prey remains collected from the nests and surroundings (pellets, bones, feather-and-hair remains, and feather-hair-and-bone remains combined). In addition, we evaluated the performance of the trail-cameras and whether camera monitoring affected Goshawk behavior. The sensitivity of each diet-analysis method depended on prey size and taxonomic group, with no method providing unbiased estimates for all prey sizes and types. The cameras registered the greatest number of prey items and were probably the least biased method for estimating diet composition. Nevertheless, this direct method yielded the largest proportion of prey unidentified to species level, and it underestimated small prey. Our trail-camera system was able to operate without maintenance for longer periods than what has been reported in previous studies with other types of cameras. Initially, Goshawks showed distrust toward the cameras, but they usually became habituated to their presence within 1-2 days. The habituation period was shorter for breeding pairs that had previous experience with cameras. Using trail-cameras to monitor prey provisioning to nests is an effective tool for studying the diet of nesting raptors. However, the technique is limited by technical failures and difficulties in identifying certain prey types. Our study also shows that cameras can alter adult Goshawk behavior, an aspect that must be controlled to minimize potential negative impacts.
Characterization results from several commercial soft X-ray streak cameras
NASA Astrophysics Data System (ADS)
Stradling, G. L.; Studebaker, J. K.; Cavailler, C.; Launspach, J.; Planes, J.
The spatio-temporal performance of four soft X-ray streak cameras has been characterized. The objective in evaluating the performance capability of these instruments is to enable us to optimize experiment designs, to encourage quantitative analysis of streak data, and to educate the ultra-high-speed photography and photonics community about the X-ray detector performance which is available. These measurements have been made collaboratively over the space of two years at the Forge pulsed X-ray source at Los Alamos and at the Ketjak laser facility at CEA Limeil-Valenton. The X-ray pulse lengths used for these measurements at these facilities were 150 psec and 50 psec, respectively. The results are presented as dynamically measured modulation transfer functions. Limiting temporal resolution values were also calculated. Emphasis is placed upon shot-noise statistical limitations in the analysis of the data. Space-charge repulsion in the streak tube limits the peak flux at ultra-short experiment durations. This limit results in a reduction of total signal and a decrease in signal-to-noise ratio in the streak image. The four cameras perform well, with 20 lp/mm resolution discernible in data from the French C650X, the Hadland X-Chron 540, and the Hamamatsu C1936X streak cameras. The Kentech X-ray streak camera has lower modulation and does not resolve below 10 lp/mm, but has a longer photocathode.
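For reference, the modulation measurement underlying a single MTF point can be written in a few lines; this is the generic Michelson definition, not the authors' reduction pipeline (which must additionally account for the shot-noise statistics emphasized above).

```python
import numpy as np

def michelson_modulation(profile):
    """Michelson modulation of a line profile: (Imax - Imin)/(Imax + Imin)."""
    i_max, i_min = float(np.max(profile)), float(np.min(profile))
    return (i_max - i_min) / (i_max + i_min)

# One MTF point at spatial frequency f is then the measured modulation
# normalized by the modulation of the test target at that frequency:
#   mtf_f = michelson_modulation(image_profile) / michelson_modulation(target_profile)
```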
Tofte, Josef N; Westerlind, Brian O; Martin, Kevin D; Guetschow, Brian L; Uribe-Echevarria, Bastián; Rungprai, Chamnanni; Phisitkul, Phinit
2017-03-01
To validate the knee, shoulder, and virtual Fundamentals of Arthroscopic Training (FAST) modules on a virtual arthroscopy simulator via correlations with arthroscopy case experience and postgraduate year. Orthopaedic residents and faculty from one institution performed a standardized sequence of knee, shoulder, and FAST modules to evaluate baseline arthroscopy skills. Total operation time, camera path length, and composite total score (metric derived from multiple simulator measurements) were compared with case experience and postgraduate level. Values reported are Pearson r; alpha = 0.05. 35 orthopaedic residents (6 per postgraduate year), 2 fellows, and 3 faculty members (2 sports, 1 foot and ankle), including 30 male and 5 female residents, were voluntarily enrolled March to June 2015. Knee: training year correlated significantly with year-averaged knee composite score, r = 0.92, P = .004, 95% confidence interval (CI) = 0.84, 0.96; operation time, r = -0.92, P = .004, 95% CI = -0.96, -0.84; and camera path length, r = -0.97, P = .0004, 95% CI = -0.98, -0.93. Knee arthroscopy case experience correlated significantly with composite score, r = 0.58, P = .0008, 95% CI = 0.27, 0.77; operation time, r = -0.54, P = .002, 95% CI = -0.75, -0.22; and camera path length, r = -0.62, P = .0003, 95% CI = -0.8, -0.33. Shoulder: training year correlated strongly with average shoulder composite score, r = 0.90, P = .006, 95% CI = 0.81, 0.95; operation time, r = -0.94, P = .001, 95% CI = -0.97, -0.89; and camera path length, r = -0.89, P = .007, 95% CI = -0.95, -0.80. Shoulder arthroscopy case experience correlated significantly with average composite score, r = 0.52, P = .003, 95% CI = 0.2, 0.74; strongly with operation time, r = -0.62, P = .0002, 95% CI = -0.8, -0.33; and camera path length, r = -0.37, P = .044, 95% CI = -0.64, -0.01, by training year. FAST: training year correlated significantly with 3 combined FAST activity average composite scores, r = 0.81, P = .0279, 95% CI = 0.65, 0.90; operation times, r = -0.86, P = .012, 95% CI = -0.93, -0.74; and camera path lengths, r = -0.85, P = .015, 95% CI = -0.92, -0.72. Total arthroscopy cases performed did not correlate significantly with overall FAST performance. We found significant correlations between both training year and knee and shoulder arthroscopy experience when compared with performance as measured by composite score, camera path length, and operation time during a simulated diagnostic knee and shoulder arthroscopy, respectively. Three FAST activities demonstrated significant correlations with training year but not arthroscopy case experience as measured by composite score, camera path length, and operation time. We attempt to validate an arthroscopy simulator that could be used to supplement arthroscopy skills training for orthopaedic residents.
Computer-aided analysis for the Mechanics of Granular Materials (MGM) experiment, part 2
NASA Technical Reports Server (NTRS)
Parker, Joey K.
1987-01-01
Computer vision based analysis for the MGM experiment is continued and expanded into new areas. Volumetric strains of granular material triaxial test specimens have been measured from digitized images. A computer-assisted procedure is used to identify the edges of the specimen, and the edges are used in a 3-D model to estimate specimen volume. The results of this technique compare favorably to conventional measurements. A simplified model of the magnification caused by diffraction of light within the water of the test apparatus was also developed. This model yields good results when the distance between the camera and the test specimen is large compared to the specimen height. An algorithm for a more accurate 3-D magnification correction is also presented. The use of composite and RGB (red-green-blue) color cameras is discussed and potentially significant benefits from using an RGB camera are presented.
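The edge-based volume estimate described above can be illustrated with a solid-of-revolution sketch: assuming the triaxial specimen is approximately axisymmetric and that left/right edge pixel coordinates have already been extracted for each image row, the volume is the stacked sum of thin disks. This is our simplification, not the report's exact 3-D model, and the magnification factor stands in for the report's water-magnification correction.

```python
import numpy as np

def volume_from_edges(left_px, right_px, px_to_mm, mag=1.0):
    """Estimate specimen volume by stacking one disk per image row.

    left_px, right_px: arrays of edge x-coordinates (pixels) per row
    px_to_mm:          image scale (mm per pixel)
    mag:               optional magnification correction (assumed)
    """
    radii = 0.5 * (right_px - left_px) * px_to_mm / mag  # disk radii, mm
    dh = px_to_mm                                        # row height, mm
    return float(np.sum(np.pi * radii**2 * dh))          # volume, mm^3
```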
The spacecraft control laboratory experiment optical attitude measurement system
NASA Technical Reports Server (NTRS)
Welch, Sharon S.; Montgomery, Raymond C.; Barsky, Michael F.
1991-01-01
A stereo camera tracking system was developed to provide a near real-time measure of the position and attitude of the Spacecraft COntrol Laboratory Experiment (SCOLE). The SCOLE is a mockup of a shuttle-like vehicle with an attached flexible mast and (simulated) antenna, and was designed to provide a laboratory environment for the verification and testing of control laws for large flexible spacecraft. Actuators and sensors located on the shuttle and antenna sense the states of the spacecraft and allow the position and attitude to be controlled. The stereo camera tracking system which was developed consists of two position-sensitive detector cameras which sense the locations of small infrared LEDs attached to the surface of the shuttle. Information on shuttle position and attitude is provided in six degrees of freedom. The design of this optical system, its calibration, and the tracking algorithm are described. The performance of the system is evaluated for yaw only.
Investigating plasma viscosity with fast framing photography in the ZaP-HD Flow Z-Pinch experiment
NASA Astrophysics Data System (ADS)
Weed, Jonathan Robert
The ZaP-HD Flow Z-Pinch experiment investigates the stabilizing effect of sheared axial flows while scaling toward a high-energy-density laboratory plasma (HEDLP > 100 GPa). Stabilizing flows may persist until viscous forces dissipate a sheared flow profile. Plasma viscosity is investigated by measuring scale lengths in turbulence intentionally introduced in the plasma flow. A boron nitride turbulence-tripping probe excites small scale length turbulence in the plasma, and fast framing optical cameras are used to study time-evolved turbulent structures and viscous dissipation. A Hadland Imacon 790 fast framing camera is modified for digital image capture, but features insufficient resolution to study turbulent structures. A Shimadzu HPV-X camera captures the evolution of turbulent structures with great spatial and temporal resolution, but is unable to resolve the anticipated Kolmogorov scale in ZaP-HD as predicted by a simplified pinch model.
Automated camera-phone experience with the frequency of imaging necessary to capture diet.
Arab, Lenore; Winter, Ashley
2010-08-01
Camera-enabled cell phones provide an opportunity to strengthen dietary recall through automated imaging of foods eaten during a specified period. To explore the frequency of imaging needed to capture all foods eaten, we examined the number of images of individual foods consumed in a pilot study of automated imaging using camera phones set to an image-capture frequency of one snapshot every 10 seconds. Food images were tallied from 10 young adult subjects who wore the phone continuously during the work day and consented to share their images. Based on the number of images received for each eating experience, the pilot data suggest that automated capturing of images at a frequency of once every 10 seconds is adequate for recording foods consumed during regular meals, whereas a greater frequency of imaging is necessary to capture snacks and beverages eaten quickly.
Compensation for positioning error of industrial robot for flexible vision measuring system
NASA Astrophysics Data System (ADS)
Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui
2013-01-01
The positioning error of the robot is a main factor limiting the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Present compensation methods for positioning error based on a kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for the positioning error of the robot based on vision measuring techniques is presented. One approach is setting global control points in the measured field and attaching an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is setting control points on the vision sensor and two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of spatial positioning is 3.422 mm with the single-camera method and 0.031 mm with the dual-camera method. We conclude that the algorithm of the single-camera method needs to be improved for higher accuracy, while the accuracy of the dual-camera method is applicable.
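The coordinate-system conversion both approaches rely on can be illustrated by the standard least-squares rigid-transform estimate (Kabsch/SVD) from matched control points; this is a generic sketch, not the paper's algorithm.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t
    (Kabsch/SVD), e.g. mapping sensor coordinates of measured control
    points into the global coordinate system. src, dst are Nx3 arrays."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```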
Baum, S.; Sillem, M.; Ney, J. T.; Baum, A.; Friedrich, M.; Radosa, J.; Kramer, K. M.; Gronwald, B.; Gottschling, S.; Solomayer, E. F.; Rody, A.; Joukhadar, R.
2017-01-01
Introduction Minimally invasive operative techniques are being used increasingly in gynaecological surgery. The expansion of the laparoscopic operation spectrum is in part the result of improved imaging. This study investigates the practical advantages of using 3D cameras in routine surgical practice. Materials and Methods Two different 3-dimensional camera systems were compared with a 2-dimensional HD system; the operating surgeon's experiences were documented immediately postoperatively using a questionnaire. Results Significant advantages were reported for suturing and cutting of anatomical structures when using the 3D compared to 2D camera systems. There was only a slight advantage for coagulating. The use of 3D cameras significantly improved the general operative visibility and in particular the representation of spatial depth compared to 2-dimensional images. There was not a significant advantage for image width. Depiction of adhesions and retroperitoneal neural structures was significantly improved by the stereoscopic cameras, though this did not apply to blood vessels, ureter, uterus or ovaries. Conclusion 3-dimensional cameras were particularly advantageous for the depiction of fine anatomical structures due to improved spatial depth representation compared to 2D systems. 3D cameras provide the operating surgeon with a monitor image that more closely resembles actual anatomy, thus simplifying laparoscopic procedures. PMID:28190888
A novel simultaneous streak and framing camera without principle errors
NASA Astrophysics Data System (ADS)
Jingzhen, L.; Fengshan, S.; Ningwen, L.; Xiangdong, G.; Bin, H.; Qingyang, W.; Hongyi, C.; Yi, C.; Xiaowei, L.
2018-02-01
A novel simultaneous streak and framing camera with continuous access has been developed; the complete information it provides is far more important for the exact interpretation and precise evaluation of many detonation events and shockwave phenomena. The camera, with a maximum imaging frequency of 2 × 10^6 fps and a maximum scanning velocity of 16.3 mm/μs, has fine imaging properties: an eigen resolution of over 40 lp/mm in the temporal direction and over 60 lp/mm in the spatial direction with zero framing-frequency principle error for framing records, and a maximum time resolving power of 8 ns with a scanning velocity nonuniformity of 0.136%~-0.277% for streak records. The test data have verified the performance of the camera quantitatively. This camera, which simultaneously gains frames and a streak with parallax-free and identical time bases, is characterized by a plane optical system at oblique incidence (different from a space system), an innovative camera obscura without principle errors, and a high-velocity motor-driven beryllium-like rotating mirror made of high-strength aluminum alloy with a cellular lateral structure. Experiments demonstrate that the camera is very useful and reliable for taking high-quality pictures of detonation events.
Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System
NASA Astrophysics Data System (ADS)
Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki
In this paper, we present an automatic vision-based traffic sign recognition system which can detect and classify traffic signs at long distance under different lighting conditions. To realize this purpose, the traffic sign recognition is developed in an originally proposed dual-focal active camera system. In this system, a telephoto camera is equipped as an assistant to a wide-angle camera. The telephoto camera can capture a high-resolution image of an object of interest in the view field of the wide-angle camera. The image from the telephoto camera provides enough information for recognition when the resolution of the traffic sign from the wide-angle camera is low. In the proposed system, traffic sign detection and classification are processed separately for the different images from the wide-angle camera and the telephoto camera. In addition, in order to detect traffic signs against complex backgrounds in different lighting conditions, we propose a type of color transformation which is invariant to lighting changes. This color transformation is conducted to highlight the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide-angle camera. After detection, the system actively captures a high-resolution image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on information from the wide-angle camera. Moreover, in classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high-resolution image from the telephoto camera. Finally, based on the proposed system, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution in different lighting conditions.
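One common lighting-invariant color transformation (not necessarily the one proposed in the paper) is normalized chromaticity, which cancels any common intensity scaling of the R, G, B channels; a minimal sketch:

```python
import numpy as np

def normalized_rg(image):
    """Normalized red/green chromaticity of an HxWx3 image.

    Scaling R, G, B by a common illumination factor leaves r and g
    unchanged, so the transform is largely invariant to light intensity.
    """
    rgb = image.astype(float)
    s = rgb.sum(axis=-1, keepdims=True) + 1e-6   # avoid division by zero
    r = rgb[..., 0:1] / s
    g = rgb[..., 1:2] / s
    return np.concatenate([r, g], axis=-1)
```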
The research of adaptive-exposure on spot-detecting camera in ATP system
NASA Astrophysics Data System (ADS)
Qian, Feng; Jia, Jian-jun; Zhang, Liang; Wang, Jian-Yu
2013-08-01
A high-precision acquisition, tracking, and pointing (ATP) system is one of the key technologies of laser communication. The spot-detecting camera is used to detect the direction of the beacon in the laser communication link, so that the ATP system can obtain the position information of the communication terminal. The positioning accuracy of the camera directly determines the capability of the laser communication system, so the spot-detecting camera in a satellite-to-earth laser communication ATP system needs high precision in target detection: the positioning accuracy should be better than ±1 μrad. Spot-detecting cameras usually adopt a centroid algorithm to obtain the position of the light spot on the detector. When the intensity of the beacon is moderate, the centroid calculation is precise; but the intensity of the beacon changes greatly during communication owing to distance, atmospheric scintillation, weather, etc. The output signal of the detector is insufficient when the camera underexposes the beacon because of low light intensity; conversely, the output signal saturates when the camera overexposes the beacon because of high light intensity. The accuracy of the centroid calculation degrades if the spot-detecting camera underexposes or overexposes, and the positioning accuracy of the camera is then reduced markedly. To maintain accuracy, space-based cameras should regulate exposure time in real time according to the light intensity. The adaptive-exposure algorithm for a spot-detecting camera based on a complementary metal-oxide-semiconductor (CMOS) detector is analyzed. Based on the analytic results, a CMOS camera for a space-based laser communication system is described which uses the adaptive-exposure algorithm to adjust exposure time. Test results from an imaging experiment system verify the design and prove that it restrains the loss of positioning accuracy under changing light intensity, so the camera keeps a stable and high positioning accuracy during communication.
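As a concrete illustration of the two pieces the abstract describes, a gray-weighted centroid and a simple exposure update rule are sketched below; the control law (proportional scaling toward a mid-range peak level) and the 12-bit full scale are assumptions, not the paper's design.

    import numpy as np

    def spot_centroid(frame: np.ndarray, threshold: float) -> tuple[float, float]:
        """Gray-weighted centroid of the beacon spot above a background threshold."""
        w = np.where(frame > threshold, frame - threshold, 0.0)
        total = w.sum()
        if total == 0.0:
            raise ValueError("no spot above threshold")
        ys, xs = np.indices(frame.shape)
        return float((xs * w).sum() / total), float((ys * w).sum() / total)

    def adapt_exposure(t_exp: float, peak: float, full_scale: float = 4095.0) -> float:
        """Rescale exposure time so the spot peak stays in a mid-range window,
        avoiding both underexposure and saturation (assumed control rule)."""
        if peak < 0.4 * full_scale or peak > 0.8 * full_scale:
            t_exp *= 0.6 * full_scale / max(peak, 1.0)
        return t_exp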
In-Space Structural Validation Plan for a Stretched-Lens Solar Array Flight Experiment
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Woods-Vedeler, Jessica A.; Jones, Thomas W.
2001-01-01
This paper summarizes in-space structural validation plans for a proposed Space Shuttle-based flight experiment. The test article is an innovative, lightweight solar array concept that uses pop-up, refractive stretched-lens concentrators to achieve a power/mass density of at least 175 W/kg, which is more than three times greater than current capabilities. The flight experiment will validate this new technology to retire the risk associated with its first use in space. The experiment includes structural diagnostic instrumentation to measure the deployment dynamics, static shape, and modes of vibration of the 8-meter-long solar array and several of its lenses. These data will be obtained by photogrammetry using the Shuttle payload-bay video cameras and miniature video cameras on the array. Six accelerometers are also included in the experiment to measure base excitations and small-amplitude tip motions.
A rotorcraft flight database for validation of vision-based ranging algorithms
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1992-01-01
A helicopter flight test experiment was conducted at the NASA Ames Research Center to obtain a database consisting of video imagery and accurate measurements of camera motion, camera calibration parameters, and true range information. The database was developed to allow verification of monocular passive range estimation algorithms for use in the autonomous navigation of rotorcraft during low altitude flight. The helicopter flight experiment is briefly described. Four data sets representative of the different helicopter maneuvers and the visual scenery encountered during the flight test are presented. These data sets will be made available to researchers in the computer vision community.
Demonstration of in-vivo Multi-Probe Tracker Based on a Si/CdTe Semiconductor Compton Camera
NASA Astrophysics Data System (ADS)
Takeda, Shin'ichiro; Odaka, Hirokazu; Ishikawa, Shin-nosuke; Watanabe, Shin; Aono, Hiroyuki; Takahashi, Tadayuki; Kanayama, Yousuke; Hiromura, Makoto; Enomoto, Shuichi
2012-02-01
By using a prototype Compton camera consisting of silicon (Si) and cadmium telluride (CdTe) semiconductor detectors, originally developed for the ASTRO-H satellite mission, an experiment involving imaging multiple radiopharmaceuticals injected into a living mouse was conducted to study its feasibility for medical imaging. The accumulation of both iodinated (131I) methylnorcholestenol and 85Sr into the mouse's organs was simultaneously imaged by the prototype. This result implies that the Compton camera is expected to become a multi-probe tracker available in nuclear medicine and small animal imaging.
A method and results of color calibration for the Chang'e-3 terrain camera and panoramic camera
NASA Astrophysics Data System (ADS)
Ren, Xin; Li, Chun-Lai; Liu, Jian-Jun; Wang, Fen-Fei; Yang, Jian-Feng; Liu, En-Hai; Xue, Bin; Zhao, Ru-Jin
2014-12-01
The terrain camera (TCAM) and panoramic camera (PCAM) are two of the major scientific payloads installed on the lander and rover of the Chang'e 3 mission, respectively. Both use a CMOS sensor covered by a Bayer color filter array to capture color images of the Moon's surface. The RGB values of the original images are specific to these two kinds of cameras, and there is an obvious color difference compared with human visual perception. This paper follows standards published by the International Commission on Illumination to establish a color correction model, designs the ground calibration experiment, and obtains the color correction coefficients. The image quality is significantly improved, and there is no obvious color difference in the corrected images. Ground experimental results show that: (1) compared with uncorrected images, the average color difference of TCAM is 4.30, a reduction of 62.1%; (2) the average color differences of the left and right cameras in PCAM are 4.14 and 4.16, reductions of 68.3% and 67.6% respectively.
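The CIE-based correction model itself is not reproduced in the abstract; as a hedged sketch, a linear 3×3 correction matrix fitted by least squares over color-chart patches captures the basic idea of mapping device RGB toward reference (perception-matched) RGB. The function names and 8-bit clipping are assumptions.

    import numpy as np

    def fit_color_correction(raw_rgb: np.ndarray, ref_rgb: np.ndarray) -> np.ndarray:
        """Fit a 3x3 matrix M minimizing ||raw_rgb @ M.T - ref_rgb||^2.
        raw_rgb, ref_rgb: (N, 3) camera and reference values for N patches."""
        X, *_ = np.linalg.lstsq(raw_rgb, ref_rgb, rcond=None)   # X is (3, 3)
        return X.T

    def apply_correction(img: np.ndarray, M: np.ndarray) -> np.ndarray:
        """Apply M to an (H, W, 3) image and clip to the 8-bit range."""
        flat = img.reshape(-1, 3).astype(np.float64) @ M.T
        return np.clip(flat, 0, 255).reshape(img.shape)

The paper's reported color differences (e.g., the average of 4.30 for TCAM) would then be evaluated in a CIE color-difference metric between corrected and reference values.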
Watching elderly and disabled person's physical condition by remotely controlled monorail robot
NASA Astrophysics Data System (ADS)
Nagasaka, Yasunori; Matsumoto, Yoshinori; Fukaya, Yasutoshi; Takahashi, Tomoichi; Takeshita, Toru
2001-10-01
We are developing a nursing system using robots and cameras. The cameras are mounted on a remote-controlled monorail robot that moves inside a room and watches the elderly. Elderly people at home or in nursing homes must be attended to at all times, which places a constant demand on staff; the purpose of our system is to help those staff. A host computer directs the monorail robot to a position in front of the elderly person using images taken by cameras on the ceiling. A CCD camera mounted on the monorail robot takes pictures of the person's facial expression and movements, and the robot sends the images to the host computer, which checks whether something unusual has happened. We propose a simple calibration method for positioning the monorail robot to track the person's movements and keep the face at the center of the camera view. We built a small experimental system and evaluated our camera calibration method and image processing algorithm.
Development of the radial neutron camera system for the HL-2A tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Y. P., E-mail: zhangyp@swip.ac.cn; Yang, J. W.; Liu, Yi
2016-06-15
A new radial neutron camera system has been developed and operated recently on the HL-2A tokamak to measure the spatially and temporally resolved 2.5 MeV D-D fusion neutrons, enhancing the understanding of energetic-ion physics. The camera mainly consists of a multichannel collimator, liquid-scintillation detectors, shielding systems, and a data acquisition system. Measurements of the D-D fusion neutrons using the camera were successfully performed during the 2015 HL-2A experiment campaign. The measurements show that the distribution of fusion neutrons in the HL-2A plasma has a peaked profile, suggesting that the neutral beam injection beam ions in the plasma have a peaked distribution and that the neutrons are primarily produced by beam-target reactions in the plasma core region. The measurement results from the neutron camera agree well with the results of both a standard 235U fission chamber and NUBEAM neutron calculations. In this paper, the new radial neutron camera system on HL-2A and the first experimental results are described.
Accuracy evaluation of optical distortion calibration by digital image correlation
NASA Astrophysics Data System (ADS)
Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan
2017-11-01
Due to its convenience of operation, the camera calibration algorithm based on a plane template is widely used in image measurement, computer vision, and other fields. How to select a suitable distortion model remains an open problem, so there is an urgent need for experimental evaluation of the accuracy of camera distortion calibrations. This paper presents an experimental method for evaluating camera distortion calibration accuracy which is easy to implement, has high precision, and is suitable for a variety of commonly used lenses. First, we use the digital image correlation method to calculate the in-plane rigid-body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we use a calibration board to calibrate the camera and obtain calibration parameters, which are used to correct the calculation points of the images before and after deformation. The displacement fields before and after correction are compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses for four commonly used distortion models.
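For readers unfamiliar with how a fitted distortion model is used to correct the DIC calculation points, the fragment below applies a two-term radial (Brown-type) model; the parameter names and the single-step inversion are illustrative assumptions, not the paper's specific models.

    import numpy as np

    def undistort_points(pts: np.ndarray, k1: float, k2: float,
                         cx: float, cy: float, f: float) -> np.ndarray:
        """Approximately invert a two-term radial distortion at pixel points.
        pts: (N, 2). One correction step is shown; strong distortion would
        require iterating the update to convergence."""
        x = (pts[:, 0] - cx) / f
        y = (pts[:, 1] - cy) / f
        r2 = x * x + y * y
        s = 1.0 + k1 * r2 + k2 * r2 * r2
        return np.stack([(x / s) * f + cx, (y / s) * f + cy], axis=1)

Comparing the DIC displacement field computed from such corrected points with the known rigid-body translation then quantifies the residual of each candidate distortion model.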
ETR-CF, TRA-654, INTERIOR. CAMERA IS ON MAIN FLOOR. NOTE CRANE ...
ETR-CF, TRA-654, INTERIOR. CAMERA IS ON MAIN FLOOR. NOTE CRANE HOOKS. ELECTRICAL EQUIPMENT IS PART OF PAST EXPERIMENT. DOOR AT LEFT EDGE OF VIEW LEADS TO REACTOR SERVICE BUILDING, TRA-635. INL NEGATIVE NO. HD24-1-2. Mike Crane, Photographer, ca. 2003 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
NGEE Arctic Webcam Photographs, Barrow Environmental Observatory, Barrow, Alaska
Bob Busey; Larry Hinzman
2012-04-01
The NGEE Arctic Webcam (PTZ Camera) captures two views of seasonal transitions from its generally south-facing position on a tower located at the Barrow Environmental Observatory near Barrow, Alaska. Images are captured every 30 minutes. Historical images are available for download. The camera is operated by the U.S. DOE sponsored Next Generation Ecosystem Experiments - Arctic (NGEE Arctic) project.
NASA Astrophysics Data System (ADS)
Nishidate, Izumi; Hoshi, Akira; Aoki, Yuta; Nakano, Kazuya; Niizeki, Kyuichi; Aizu, Yoshihisa
2016-03-01
A non-contact imaging method with a digital RGB camera is proposed to evaluate the plethysmogram and spontaneous low-frequency oscillation. In vivo experiments with human skin during mental stress induced by the Stroop color-word test demonstrated the feasibility of the method for evaluating the activities of the autonomic nervous system.
Astronauts Cooper and Conrad prepare cameras during visual acuity tests
NASA Technical Reports Server (NTRS)
1965-01-01
Astronauts L. Gordon Cooper Jr. (left), command pilot, and Charles Conrad Jr., pilot, the prime crew of the Gemini 5 space flight, prepare their cameras while aboard a C-130 aircraft flying near Laredo. The two astronauts are taking part in a series of visual acuity experiments to aid them in learning to identify known terrestrial features under controlled conditions.
NASA Astrophysics Data System (ADS)
Dayton, M.; Datte, P.; Carpenter, A.; Eckart, M.; Manuel, A.; Khater, H.; Hargrove, D.; Bell, P.
2017-08-01
The National Ignition Facility's (NIF) harsh radiation environment can cause electronics to malfunction during high-yield DT shots. Until now there has been little experience fielding electronic-based cameras in the target chamber under these conditions; hence, the performance of electronic components in NIF's radiation environment was unknown. It is possible to purchase radiation-tolerant devices; however, they are usually qualified for radiation environments different from NIF's, such as space flight or nuclear reactors. This paper presents the results from a series of online experiments that used two different prototype camera systems built from non-radiation-hardened components and one commercially available camera that permanently failed at a relatively low total integrated dose. The custom design built at Livermore endured a 5 × 10¹⁵ neutron shot without upset, while the other custom design upset at 2 × 10¹⁴ neutrons. These results agreed with offline testing done with a flash x-ray source and a 14 MeV neutron source, which suggested a methodology for developing and qualifying electronic systems for NIF. Further work will likely lead to the use of embedded electronic systems in the target chamber during high-yield shots.
Processing the Viking lander camera data
NASA Technical Reports Server (NTRS)
Levinthal, E. C.; Tucker, R.; Green, W.; Jones, K. L.
1977-01-01
Over 1000 camera events were returned from the two Viking landers during the Primary Mission. A system was devised for processing camera data as they were received, in real time, from the Deep Space Network. This system provided a flexible choice of parameters for three computer-enhanced versions of the data for display or hard-copy generation. Software systems allowed all but 0.3% of the imagery scan lines received on earth to be placed correctly in the camera data record. A second-order processing system was developed which allowed extensive interactive image processing including computer-assisted photogrammetry, a variety of geometric and photometric transformations, mosaicking, and color balancing using six different filtered images of a common scene. These results have been completely cataloged and documented to produce an Experiment Data Record.
A goggle navigation system for cancer resection surgery
NASA Astrophysics Data System (ADS)
Xu, Junbin; Shao, Pengfei; Yue, Ting; Zhang, Shiwu; Ding, Houzhu; Wang, Jinkun; Xu, Ronald
2014-02-01
We describe a portable fluorescence goggle navigation system for cancer margin assessment during oncologic surgeries. The system consists of a computer, a head mount display (HMD) device, a near infrared (NIR) CCD camera, a miniature CMOS camera, and a 780 nm laser diode excitation light source. The fluorescence and the background images of the surgical scene are acquired by the CCD camera and the CMOS camera respectively, co-registered, and displayed on the HMD device in real-time. The spatial resolution and the co-registration deviation of the goggle navigation system are evaluated quantitatively. The technical feasibility of the proposed goggle system is tested in an ex vivo tumor model. Our experiments demonstrate the feasibility of using a goggle navigation system for intraoperative margin detection and surgical guidance.
A method of camera calibration with adaptive thresholding
NASA Astrophysics Data System (ADS)
Gao, Lei; Yan, Shu-hua; Wang, Guo-chao; Zhou, Chun-lei
2009-07-01
In order to calculate the parameters of the camera correctly, we must determine the accurate coordinates of certain points in the image plane. Corners are important features in 2D images; generally speaking, they are points of high curvature lying at the junction of image regions of different brightness, so corner detection is already widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When using the SUSAN algorithm, we propose an approach that retrieves the gray-difference threshold adaptively, which makes it possible to pick out the correct chessboard inner corners under all kinds of gray contrast. Experimental results based on this method proved it to be feasible.
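A compact version of the SUSAN corner response with an adaptively derived gray-difference threshold is sketched below; tying the threshold to a fraction of the image's gray-level standard deviation is an assumed rule standing in for the paper's adaptive scheme.

    import numpy as np

    def susan_corner_response(img: np.ndarray, radius: int = 3) -> np.ndarray:
        """SUSAN-style corner response on a grayscale float image."""
        t = max(0.1 * img.std(), 1.0)        # adaptive threshold (assumed rule)
        ys, xs = np.indices((2 * radius + 1, 2 * radius + 1)) - radius
        mask = xs ** 2 + ys ** 2 <= radius ** 2
        g = 0.5 * mask.sum()                 # geometric threshold: half max USAN
        h, w = img.shape
        resp = np.zeros((h, w))
        for y in range(radius, h - radius):
            for x in range(radius, w - radius):
                patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
                usan = np.exp(-(((patch - img[y, x]) / t) ** 6))[mask].sum()
                if usan < g:
                    resp[y, x] = g - usan    # corners give small USAN areas
        return resp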
Astronauts Thornton & Akers on HST photographed by Electronic Still Camera
1993-12-05
S61-E-012 (5 Dec 1993) --- This view of astronauts Kathryn C. Thornton (top) and Thomas D. Akers working on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Thornton, anchored to the end of the Remote Manipulator System (RMS) arm, is teaming with Akers to install the +V2 Solar Array Panel as a replacement for the original one removed earlier. Akers uses tethers and a foot restraint to remain in position for the task. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Latch of HST aft shroud photographed by Electronic Still Camera
1993-12-04
S61-E-010 (4 Dec 1993) --- This close-up view of a latch on the minus V3 aft shroud door of the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Astronauts Thornton & Akers on HST photographed by Electronic Still Camera
1993-12-05
S61-E-014 (5 Dec 1993) --- This view of astronauts Kathryn C. Thornton (bottom) and Thomas D. Akers working on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Thornton, anchored to the end of the Remote Manipulator System (RMS) arm, is teaming with Akers to install the +V2 Solar Array Panel as a replacement for the original one removed earlier. Akers uses tethers and a foot restraint to remain in position for the task. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Latch of HST aft shroud photographed by Electronic Still Camera
1993-12-04
S61-E-005 (4 Dec 1993) --- This close-up view of a latch on the minus V3 aft shroud door of the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope. Over a period of five days, four of the seven crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Latch of HST aft shroud photographed by Electronic Still Camera
1993-12-04
S61-E-004 (4 Dec 1993) --- This close-up view of a latch on the minus V3 aft shroud door of the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope. Over a period of five days, four of the seven crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Time-resolved spectra of dense plasma focus using spectrometer, streak camera, and CCD combination.
Goldin, F J; Meehan, B T; Hagen, E C; Wilkins, P R
2010-10-01
A time-resolving spectrographic instrument has been assembled with the primary components of a spectrometer, image-converting streak camera, and CCD recording camera, for the primary purpose of diagnosing highly dynamic plasmas. A collection lens defines the sampled region and couples light from the plasma into a step index, multimode fiber which leads to the spectrometer. The output spectrum is focused onto the photocathode of the streak camera, the output of which is proximity-coupled to the CCD. The spectrometer configuration is essentially Czerny-Turner, but off-the-shelf Nikon refraction lenses, rather than mirrors, are used for practicality and flexibility. Only recently assembled, the instrument requires significant refinement, but has now taken data on both bridge wire and dense plasma focus experiments.
High-definition television evaluation for remote handling task performance
NASA Astrophysics Data System (ADS)
Fujita, Y.; Omori, E.; Hayashi, S.; Draper, J. V.; Herndon, J. N.
Described are experiments designed to evaluate the impact of HDTV (High-Definition Television) on the performance of typical remote tasks. The experiments described in this paper compared the performance of four operators using HDTV with their performance while using other television systems. The experiments included four television systems: (1) high-definition color television, (2) high-definition monochromatic television, (3) standard-resolution monochromatic television, and (4) standard-resolution stereoscopic monochromatic television. The stereo system accomplished stereoscopy by displaying two cross-polarized images, one reflected by a half-silvered mirror and one seen through the mirror. Observers wore spectacles with cross-polarized lenses so that the left eye received only the view from the left camera and the right eye received only the view from the right camera.
Comparison of parameters of modern cooled and uncooled thermal cameras
NASA Astrophysics Data System (ADS)
Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał
2017-10-01
During the design of a system employing thermal cameras one always faces the problem of choosing the camera types best suited for the task. In many cases such a choice is far from optimal, and there are several reasons for that: system designers often favor tried and tested solutions they are used to, they do not follow the latest developments in the field of infrared technology, and sometimes their choices are based on prejudice rather than facts. The paper presents the results of measurements of basic parameters of MWIR and LWIR thermal cameras, carried out in a specialized testing laboratory. The measured parameters are decisive for the image quality generated by thermal cameras. All measurements were conducted according to current procedures and standards. However, the camera settings were not optimized for specific test conditions or parameter measurements; instead, the real settings used in normal camera operation were applied, to obtain realistic camera performance figures. For example, there were significant differences between measured values of noise parameters and the catalogue data provided by manufacturers, due to the application of edge detection filters to increase detection and recognition ranges. The purpose of this paper is to help in choosing the optimal thermal camera for a particular application, answering the question of whether to opt for a cheaper microbolometer device or a slightly better (in terms of specifications) yet more expensive cooled unit. Measurements and analysis were performed by qualified personnel with several dozen years of experience in both designing and testing of thermal camera systems with both cooled and uncooled focal plane arrays. Cameras of similar array sizes and optics were compared, and for each tested group the best performing devices were selected.
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Shestakov, Ivan L.; Blednov, Roman G.
2016-09-01
One of the urgent security problems is the detection of objects hidden inside the human body. Obviously, for safety reasons X-rays cannot be used widely and often for such object detection. Three years ago, we demonstrated the principal possibility of seeing a temperature trace on the skin of the human body, induced by eating food or drinking water, using a passive THz camera. However, such a camera is very expensive, so in practice it would be very convenient to use an IR camera for this purpose instead. In contrast to a passive THz camera, an IR camera does not allow one to see an object under clothing if the image it produces is used directly; this is a big disadvantage for security solutions based on IR cameras. To overcome this disadvantage, we develop a novel approach to the computer processing of IR camera images. It allows us to increase the temperature resolution of the IR camera, as well as the effective sensitivity of the human eye viewing its images; as a consequence, it becomes possible to see changes of temperature inside the human body through clothing. We analyze IR images of a person who drinks water and eats chocolate, following the temperature trace on the skin caused by the changing temperature inside the body. Some experiments included measurements of body temperature under a T-shirt. The results shown are very important for the detection of forbidden objects concealed inside the human body by non-destructive inspection without the use of X-rays.
A telephoto camera system with shooting direction control by gaze detection
NASA Astrophysics Data System (ADS)
Teraya, Daiki; Hachisu, Takumi; Yendo, Tomohiro
2015-05-01
For safe driving, it is important for the driver to check traffic conditions, such as traffic lights or traffic signs, as early as possible. If an on-vehicle camera takes images of the distant objects needed to understand traffic conditions and shows them to the driver, the driver can grasp traffic conditions earlier. To image distant objects clearly, the focal length of the camera must be long; but with a long focal length, an on-vehicle camera does not have a wide enough field of view to check traffic conditions. Therefore, to obtain the necessary images from long distance, the camera must combine a long focal length with controllability of its shooting direction. In a previous study, the driver indicated the shooting direction on a displayed image taken by a wide-angle camera, and a direction-controllable camera took a telescopic image and displayed it to the driver. However, that study used a touch panel to indicate the shooting direction, which disturbs driving. We therefore propose a telephoto camera system for driving support whose shooting direction is controlled by the driver's gaze, avoiding this disturbance. The proposed system is composed of a gaze detector and an active telephoto camera whose shooting direction is controlled. We adopt a non-wearable detection method to avoid hindering driving: the gaze detector measures the driver's gaze by image processing. The shooting direction of the active telephoto camera is controlled by galvanometer scanners and can be switched within a few milliseconds. Experiments confirmed that the proposed system takes images of the point straight ahead at which the subject is gazing.
Electronic cameras for low-light microscopy.
Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith
2013-01-01
This chapter introduces electronic cameras, discusses the various parameters considered in evaluating their performance, and describes some of the key features of different camera formats. It also presents a basic understanding of how electronic cameras function and how their properties can be exploited to optimize image quality under low-light conditions. Although many types of cameras are available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of signal-to-noise ratio and spatial resolution; slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video-rate imaging is required, one need not evaluate slow-scan CCD cameras: a very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high-intensity illumination. When video-rate imaging is required for very dim specimens, the electron-multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. Variable-integration-time video cameras are attractive options if one needs to acquire images at video rate as well as with longer integration times for dimmer samples; this flexibility can facilitate many diverse applications with highly varied light levels. Copyright © 2007 Elsevier Inc. All rights reserved.
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization
NASA Astrophysics Data System (ADS)
Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj
2015-03-01
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points, whether partial or complete, in the image of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application to our augmented reality visualization system for laparoscopic surgery.
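rdCalib's single-image algorithm is proprietary and is not reconstructed here; the sketch below only shows the OpenCV baseline that the paper compares against, together with a pixel-level reprojection score. The paper's TRE is a millimeter-level 3D measure obtained with the optical tracker, which this fragment does not reproduce.

    import numpy as np
    import cv2

    def calibrate_and_score(obj_pts_list, img_pts_list, image_size):
        """Multi-image OpenCV calibration (e.g., 30 views of a checkerboard).
        obj_pts_list: list of (N, 3) float32 board-frame corner coordinates.
        img_pts_list: list of (N, 1, 2) float32 detected image corners.
        image_size:   (width, height) in pixels."""
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts_list, img_pts_list, image_size, None, None)
        errs = []
        for obj, img, r, t in zip(obj_pts_list, img_pts_list, rvecs, tvecs):
            proj, _ = cv2.projectPoints(obj, r, t, K, dist)
            errs.append(np.linalg.norm(proj - img, axis=2).mean())
        return K, dist, float(np.mean(errs))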
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization
Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj
2017-01-01
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points, whether partial or complete, in the image of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application to our augmented reality visualization system for laparoscopic surgery. PMID:28943703
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.
Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj
2015-03-01
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points, whether partial or complete, in the image of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application to our augmented reality visualization system for laparoscopic surgery.
Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng
2017-06-20
The linear-array push-broom imaging mode is widely used for high-resolution optical satellites (HROS). Using double cameras attached to a high-rigidity support, together with push-broom imaging, is one method to enlarge the field of view while ensuring high resolution. High-accuracy image mosaicking is the key factor in the geometric quality of the complete stitched satellite imagery. This paper proposes a high-accuracy image mosaicking approach based on the big virtual camera (BVC) for the double-camera system on the GaoFen2 (GF2) optical remote sensing satellite. A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected onto the virtual detector of the big-virtual-camera coordinate system, using forward projection and backward projection, to obtain the corresponding single virtual image. After on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinates on the big virtual detector image plane. The paper uses the concept of the big virtual camera to obtain a stitched image and the corresponding high-accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method achieves seamless mosaicking while maintaining geometric accuracy.
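The rigorous push-broom model with orbit and attitude data is beyond an abstract-level sketch, but the two-step re-projection (backward-project a real-CCD pixel to a viewing ray, then project the ray into the big virtual camera) can be illustrated with pinhole stand-ins; all matrices here are assumed placeholders, not the satellite's actual models.

    import numpy as np

    def reproject_to_virtual(pix, K_real, R_real, K_virt, R_virt):
        """Map a pixel from one real detector to the big-virtual-camera plane.
        K_*: 3x3 intrinsics; R_*: 3x3 camera-to-platform rotations (pinhole
        stand-ins for the rigorous single-camera imaging models)."""
        ray = R_real @ np.linalg.inv(K_real) @ np.array([pix[0], pix[1], 1.0])
        v = K_virt @ R_virt.T @ ray          # far-field scene assumed
        return v[:2] / v[2]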
Can We Use Low-Cost 360 Degree Cameras to Create Accurate 3D Models?
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Previtali, M.; Roncoroni, F.
2018-05-01
360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.
In-Situ Cameras for Radiometric Correction of Remotely Sensed Data
NASA Astrophysics Data System (ADS)
Kautz, Jess S.
The atmosphere distorts the spectrum of remotely sensed data, negatively affecting all forms of investigation of Earth's surface, so it is vital that atmospheric corrections are accurate. The current state of the field of atmospheric correction does not account well for the benefits and costs of different correction algorithms, and ground spectral data are required to evaluate these algorithms better. This dissertation explores using cameras as radiometers as a means of gathering ground spectral data. I introduce techniques to implement a camera system for atmospheric correction using off-the-shelf parts. To aid the design of future camera systems for radiometric correction, methods for estimating the system error prior to construction, and for calibration and testing of the resulting camera system, are explored. Simulations are used to investigate the relationship between the reflectance accuracy of the camera system and the quality of atmospheric correction. In the design phase, read noise and filter choice are found to be the strongest sources of system error. I explain the calibration methods for the camera system, showing the problems of pixel-to-angle calibration and of adapting the web camera for scientific work. The camera system is tested in the field to estimate its ability to recover directional reflectance from BRF data. I estimate the error in the system due to the experimental setup, then explore how the system error changes with different cameras, environmental setups, and inversions. With these experiments, I learn about the importance of the dynamic range of the camera and the input ranges used for the PROSAIL inversion. Evidence that the camera can perform within the specification set for ELM correction in this dissertation is evaluated. The analysis concludes by simulating an ELM correction of a scene using various numbers of calibration targets and levels of system error, to find the number of cameras needed for a full-scale implementation.
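The ELM correction that the camera system is meant to support reduces, per band, to a linear fit between image digital numbers and ground reflectance over the calibration targets; a minimal sketch (variable names assumed) is:

    import numpy as np

    def empirical_line_fit(dn: np.ndarray, reflectance: np.ndarray):
        """Per-band gain/offset of the Empirical Line Method:
        reflectance ~= gain * DN + offset, fit by least squares.
        dn, reflectance: (n_targets, n_bands) arrays."""
        gains, offsets = [], []
        for b in range(dn.shape[1]):
            A = np.stack([dn[:, b], np.ones(dn.shape[0])], axis=1)
            (g, o), *_ = np.linalg.lstsq(A, reflectance[:, b], rcond=None)
            gains.append(g)
            offsets.append(o)
        return np.array(gains), np.array(offsets)

The camera-derived target reflectances feed the right-hand side, and the dissertation's system-error analysis translates into uncertainty on the fitted gains and offsets.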
Overt vs. covert speed cameras in combination with delayed vs. immediate feedback to the offender.
Marciano, Hadas; Setter, Pe'erly; Norman, Joel
2015-06-01
Speeding is a major problem in road safety because it increases both the probability of accidents and the severity of injuries if an accident occurs. Speed cameras are one of the most common speed enforcement tools, and most speed cameras around the world are overt; but there is evidence that overt cameras can cause a "kangaroo effect" in driving patterns. One suggested alternative to prevent this kangaroo effect is the use of covert cameras. Another issue relevant to the effect of enforcement countermeasures on speeding is the timing of the fine. There is general agreement on the importance of the immediacy of punishment; however, in the context of speed limit enforcement, implementing immediate punishment is difficult. Immediate feedback that bridges the delay between the speed violation and receiving a ticket is one possible solution. This study examines combinations of concealment and fine timing in operating speed cameras, in order to evaluate the most effective combination for enforcing speed limits. Using a driving simulator, the driving performance of the following four experimental groups was tested: (1) overt cameras with delayed feedback, (2) overt cameras with immediate feedback, (3) covert cameras with delayed feedback, and (4) covert cameras with immediate feedback. Each of the 58 participants drove the same scenario on three different days. The results showed that both median speed and speed variance were higher with overt than with covert cameras. Moreover, implementing a covert camera system along with immediate feedback was more conducive to drivers maintaining steady speeds at the permitted levels from the very beginning. Finally, both "overt camera" groups exhibited a kangaroo effect throughout the entire experiment. It can be concluded that an implementation strategy consisting of covert speed cameras combined with immediate feedback to the offender is potentially an optimal way to motivate drivers to maintain speeds at the speed limit. Copyright © 2015 Elsevier Ltd. All rights reserved.
A didactic experiment showing the Compton scattering by means of a clinical gamma camera.
Amato, Ernesto; Auditore, Lucrezia; Campennì, Alfredo; Minutoli, Fabio; Cucinotta, Mariapaola; Sindoni, Alessandro; Baldari, Sergio
2017-06-01
We describe a didactic approach aimed at explaining the effect of Compton scattering in nuclear medicine imaging, exploiting the comparison of a didactic experiment with a gamma camera with the outcomes of a Monte Carlo simulation of the same experimental apparatus. We employed a 99mTc source emitting 140.5 keV photons, collimated in the upper direction through two pinholes, shielded by 6 mm of lead. An aluminium cylinder was placed on the source at a distance of 50 mm. The energy of the scattered photons was measured from the spectra acquired by the gamma camera. We observed that the gamma-ray energy measured at each rotation step gradually decreased from the characteristic energy of 140.5 keV at 0° to 102.5 keV at 120°. A comparison between the obtained data and the results expected from the Compton formula and from the Monte Carlo simulation revealed full agreement within the experimental error (relative errors between -0.56% and 1.19%) given by the energy resolution of the gamma camera. The electron rest mass was also evaluated satisfactorily. The experiment was found useful in explaining to nuclear medicine residents the phenomenology of Compton scattering and its importance in nuclear medicine imaging, and it can profitably be proposed during the training of medical physics residents as well. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
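The comparison rests on the standard Compton formula for the scattered photon energy (a textbook relation, not specific to this paper); for a scattering angle of 120° and E = 140.5 keV it gives about 99.5 keV, with the exact angle realized at each rotation step set by the source-scatterer-camera geometry:

    E'(\theta) = \frac{E}{1 + \dfrac{E}{m_e c^2}\left(1 - \cos\theta\right)},
    \qquad
    E'(120^\circ) = \frac{140.5\ \mathrm{keV}}{1 + \frac{140.5}{511}\,(1 - \cos 120^\circ)} \approx 99.5\ \mathrm{keV},

where m_e c^2 = 511 keV is the electron rest energy that the experiment recovers from the measured energies.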
International Space Station: Expedition 2000
NASA Technical Reports Server (NTRS)
2000-01-01
Live footage of the International Space Station (ISS) presents an inside look at the groundwork and assembly of the ISS. Footage includes both animation and live shots of a Space Shuttle liftoff. Phil West, Engineer; Dr. Catherine Clark, Chief Scientist ISS; and Joe Edwards, Astronaut, narrate the video. The first topic of discussion is People and Communications: good communication is a key component of the ISS endeavor. Dr. Catherine Clark uses two soup cans attached by a string to demonstrate communication. Bill Nye the Science Guy talks briefly about science aboard the ISS. Charlie Spencer, Manager of Space Station Simulators, talks about communication aboard the ISS. The second topic of discussion is Engineering. Bonnie Dunbar, Astronaut at NASA's Johnson Space Center, gives a tour of the Japanese Experiment Module (JEM). She takes us inside Node 2 and the U.S. Lab Destiny, and shows where protein crystal growth experiments are performed. Audio terminal units are used for communication in the JEM. A demonstration of solar arrays and how they are tested is shown. Alan Bell, Project Manager of the Mobile Remote Manipulator Development Facility (MRMDF), describes the robot arm used on the ISS and how it maneuvers the Space Station. The third topic of discussion is Science and Technology. Dr. Catherine Clark demonstrates microgravity by dropping a balloon attached to a weight and observing the balloon burst. Sherri Dunnette, Imaging Technologist, describes the various cameras used in space: 1) 35 mm cameras, 2) medium-format cameras, 3) large-format cameras, 4) video cameras, and 5) the DV camera. Kumar Krishen, Chief Technologist ISS, explains Inframetrics infrared vision cameras and how they perform. The Short Arm Centrifuge is shown by Dr. Millard Reske, Senior Life Scientist, to subject astronauts to forces greater than 1 g; Reske is interested in the physiological effects on the eyes and the muscular system after exposure to forces greater than 1 g.
Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking
NASA Technical Reports Server (NTRS)
Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.
2005-01-01
This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to the Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues arise: mast camera frames are in general not parallel to the masthead base frame, and the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we derived non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera-pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on the high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on the lower-resolution Hazcam for Navcam-to-Hazcam handoff.
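Ignoring the two complications the paper solves exactly (camera frames not parallel to the masthead base frame, and an optical axis that misses the image center), the basic pan/tilt pointing solution is the familiar closed form below; the axis convention is an assumption for illustration.

    import numpy as np

    def pan_tilt_to_target(target: np.ndarray) -> tuple[float, float]:
        """Pan/tilt angles (rad) that aim the boresight at a target direction
        expressed in the masthead base frame (x forward, y left, z up)."""
        x, y, z = target
        pan = np.arctan2(y, x)
        tilt = np.arctan2(z, np.hypot(x, y))
        return float(pan), float(tilt)

The paper's contribution is that the pointing solutions remain closed-form and exact even with the frame and optical-axis offsets included.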
Localization and Mapping Using a Non-Central Catadioptric Camera System
NASA Astrophysics Data System (ADS)
Khurana, M.; Armenakis, C.
2018-05-01
This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find use in navigation and mapping of robotic platforms owing to their wide field of view: having a potential 360° field of view allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low-cost system consisting of a mirror and a camera; any perspective camera can be used. A platform was constructed to combine the mirror and a camera into a catadioptric system, and a calibration method was developed to obtain the relative position and orientation between the two components so that they can be treated as one monolithic system. The mathematical model for localizing the system was derived using conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achieved localization and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.
ERIC Educational Resources Information Center
Physics Education, 1985
1985-01-01
Describes: (1) two experiments using a laser (resonant cavity for light and pinhole camera effect with a hologram); (2) optical diffraction patterns displayed by microcomputer; (3) automating the Hall effect (with comments on apparatus needed and computer program used); and (4) an elegant experiment in mechanical equilibrium. (JN)
STS-31 MS Sullivan and Pilot Bolden monitor SE 82-16 Ion Arc on OV-103 middeck
NASA Technical Reports Server (NTRS)
1990-01-01
STS-31 Mission Specialist (MS) Kathryn D. Sullivan monitors and advises ground controllers of the activity inside the Student Experiment (SE) 82-16, Ion arc - studies of the effects of microgravity and a magnetic field on an electric arc, mounted in front of the middeck lockers aboard Discovery, Orbiter Vehicle (OV) 103. Pilot Charles F. Bolden uses a video camera and an ARRIFLEX motion picture camera to record the activity inside the special chamber. A sign in front of the experiment reads 'SSIP 82-16 Greg's Experiment Happy Graduation from STS-31.' SSIP stands for Shuttle Student Involvement Program. Gregory S. Peterson who developed the experiment (Greg's Experiment) is a student at Utah State University and monitored the experiment's operation from JSC's Mission Control Center (MCC) during the flight. Decals displayed in the background on the orbiter galley represent the Hubble Space Telescope (HST), the United States (U.S.) Naval Reserve, Navy Oceanographers, U.S. Navy, and Univer
Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie
2010-10-10
The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance-theorem-based model demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space.
Li, Jin; Liu, Zilong; Liu, Si
2017-02-20
In on-board photographing processes of satellite cameras, platform vibration can generate image motion, distortion, and smear, which seriously affect the image quality and image positioning. In this paper, we create a mathematical model of the vibration modulation transfer function (VMTF) for a remote-sensing camera. The total MTF of the camera is reduced by the VMTF, which means the image quality is degraded. In order to avoid this degradation of the total MTF caused by vibration, we use an Mn-20Cu-5Ni-2Fe (M2052) manganese-copper alloy to fabricate a vibration-isolation mechanism (VIM). The VIM transforms platform vibration energy into irreversible thermal energy through its internal twin-crystal structure. Our experiments show that the M2052 manganese-copper alloy suppresses image motion well below 125 Hz, the vibration frequency of the satellite platform. The camera optical system has a higher MTF after vibration suppression with the M2052 material than before.
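The abstract does not reproduce the VMTF expression; for orientation, the standard image-plane motion MTFs (as found in electro-optical imaging texts, not taken from this paper) for linear image motion of extent d during integration and for high-frequency sinusoidal vibration of amplitude D are

    \mathrm{MTF}_{\text{linear}}(f) = \left|\frac{\sin(\pi f d)}{\pi f d}\right|,
    \qquad
    \mathrm{MTF}_{\text{sine}}(f) = \left|J_0(2\pi f D)\right|,

where f is spatial frequency in the image plane and J_0 is the zeroth-order Bessel function. The camera's total MTF is the product of such a vibration term with the static optics and detector MTFs, which is how platform vibration below 125 Hz degrades image quality unless the VIM damps it.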
Preface: The Chang'e-3 lander and rover mission to the Moon
NASA Astrophysics Data System (ADS)
Ip, Wing-Huen; Yan, Jun; Li, Chun-Lai; Ouyang, Zi-Yuan
2014-12-01
The Chang'e-3 (CE-3) lander and rover mission to the Moon was an intermediate step in China's lunar exploration program, which will be followed by a sample return mission. The lander was equipped with a number of remote-sensing instruments including a pair of cameras (Landing Camera and Terrain Camera) for recording the landing process and surveying terrain, an extreme ultraviolet camera for monitoring activities in the Earth's plasmasphere, and a first-ever Moon-based ultraviolet telescope for astronomical observations. The Yutu rover successfully carried out close-up observations with the Panoramic Camera, mineralogical investigations with the VIS-NIR Imaging Spectrometer, study of elemental abundances with the Active Particle-induced X-ray Spectrometer, and pioneering measurements of the lunar subsurface with Lunar Penetrating Radar. This special issue provides a collection of key information on the instrumental designs, calibration methods and data processing procedures used by these experiments with a perspective of facilitating further analyses of scientific data from CE-3 in preparation for future missions.
ePix100 camera: Use and applications at LCLS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carini, G. A., E-mail: carini@slac.stanford.edu; Alonso-Mori, R.; Blaj, G.
2016-07-27
The ePix100 x-ray camera is a new system designed and built at SLAC for experiments at the Linac Coherent Light Source (LCLS). The camera is the first member of a family of detectors built around a single hardware and software platform, supporting a variety of front-end chips. With a readout speed of 120 Hz, matching the LCLS repetition rate, a noise lower than 80 e- rms, and pixels of 50 µm × 50 µm, this camera offers a viable alternative to fast-readout, direct-conversion scientific CCDs in imaging mode. The detector, designed for applications such as X-ray Photon Correlation Spectroscopy (XPCS) and wavelength-dispersive X-ray Emission Spectroscopy (XES) in the energy range from 2 to 10 keV and above, comprises up to 0.5 Mpixels in a very compact form factor. In this paper, we report the performance of the camera during its first use at LCLS.
ERIC Educational Resources Information Center
Cook, Tina; Hess, Else
2007-01-01
This article draws on the experience of three research projects where photography was used with children as a data collection method and presentation tool. It was used as a way of trying to enhance opportunities for adults to hear about topics from the perspective of children. The projects were not designed to investigate the use of cameras as a…
Remote Attitude Measurement Techniques.
1982-12-01
television camera). The incident illumination produces a non-uniformity on the scanned side of the sensitive material which can be modeled as an...to compute the probabilistic attitude matrix. Fourth, the experiment will be conducted with the television camera mounted on a machinist's table, such... the optical axis does not necessarily pass through the center of the lens assembly and impact the center pixel in the active region of
PNIC - A near infrared camera for testing focal plane arrays
NASA Astrophysics Data System (ADS)
Hereld, Mark; Harper, D. A.; Pernic, R. J.; Rauscher, Bernard J.
1990-07-01
This paper describes the design and the performance of the Astrophysical Research Consortium prototype near-infrared camera (pNIC) designed to test focal plane arrays both on and off the telescope. Special attention is given to the detector in pNIC, the mechanical and optical designs, the electronics, and the instrument interface. Experiments performed to illustrate the most salient aspects of pNIC are described.
ERIC Educational Resources Information Center
Green, Carie
2016-01-01
This article explores the use of wearable cameras with children as a data collection means to engage young children as active researchers in recording their experiences in natural environments. This method captures children's unique perspectives of being-in-the-world, depicting what they see, hear, say, touch, and their interactions with others.…
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2017-08-01
Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrated the effectiveness and accuracy of the proposed technique.
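The channel-separation step lends itself to a compact illustration. The sketch below assumes a linear 2×2 crosstalk model between the red and blue channels; the coefficients c_rb and c_br are hypothetical calibration values, and the authors' actual correction method may differ.

```python
import numpy as np

def separate_channels(red_meas, blue_meas, c_rb=0.05, c_br=0.04):
    """Undo linear crosstalk between the two optical paths:
        red_meas  = red_true  + c_rb * blue_true
        blue_meas = blue_true + c_br * red_true
    c_rb, c_br are hypothetical pre-calibrated coefficients."""
    Cinv = np.linalg.inv(np.array([[1.0, c_rb], [c_br, 1.0]]))
    stacked = np.stack([red_meas.ravel(), blue_meas.ravel()])  # 2 x N
    red_t, blue_t = Cinv @ stacked
    return red_t.reshape(red_meas.shape), blue_t.reshape(blue_meas.shape)
```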
a Cost-Effective Method for Crack Detection and Measurement on Concrete Surface
NASA Astrophysics Data System (ADS)
Sarker, M. M.; Ali, T. A.; Abdelfatah, A.; Yehia, S.; Elaksher, A.
2017-11-01
Crack detection and measurement in the surface of concrete structures is currently carried out manually or through Non-Destructive Testing (NDT) such as imaging or scanning. Recent developments in depth (stereo) cameras have presented an opportunity for cost-effective, reliable crack detection and measurement. This study aimed at evaluating the feasibility of the new inexpensive depth camera (ZED) for crack detection and measurement. This depth camera, with its lightweight and portable nature, produces a 3D data file of the imaged surface. The ZED camera was utilized to image a concrete surface, and the 3D file was processed to detect and analyse cracks. This article describes the outcome of the experiment carried out with the ZED camera as well as the processing tools used for crack detection and analysis. The crack properties of interest were length, orientation, and width. The use of the ZED camera allowed for distinction between surface and concrete cracks. The ZED high-resolution capability and point cloud capture technology helped in generating dense 3D data in low-lighting conditions. The results showed the ability of the ZED camera to capture the depth changes that distinguish surface (render) cracks from cracks that form in the concrete itself.
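As a loose illustration of how a depth map from such a camera could be screened for cracks, the sketch below flags pixels that sit measurably below a median-filtered local surface. The window size and threshold are invented placeholders; the article's actual processing pipeline is not reproduced.

```python
import numpy as np
from scipy.ndimage import median_filter

def crack_mask(depth_mm, window=15, thresh_mm=0.5):
    """Flag pixels recessed below the local surface by more than thresh_mm.
    The local surface is approximated by a median filter; cracks appear as
    pixels farther from the camera than their surroundings."""
    background = median_filter(depth_mm, size=window)
    return (depth_mm - background) > thresh_mm
```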
Project Physics Handbook 4, Light and Electromagnetism.
ERIC Educational Resources Information Center
Harvard Univ., Cambridge, MA. Harvard Project Physics.
Seven experiments and 40 activities are presented in this handbook. The experiments are related to Young's experiment, electric forces, forces on currents, electron-beam tubes, and wave modulation and communication. The activities are primarily concerned with aspects of scattered and polarized light, colors, image formation, lenses, cameras,…
Characterization of a thinned back illuminated MIMOSA V sensor as a visible light camera
NASA Astrophysics Data System (ADS)
Bulgheroni, Antonio; Bianda, Michele; Caccia, Massimo; Cappellini, Chiara; Mozzanica, Aldo; Ramelli, Renzo; Risigo, Fabio
2006-09-01
This paper reports the measurements that have been performed both in the Silicon Detector Laboratory at the University of Insubria (Como, Italy) and at the Istituto Ricerche Solari Locarno (IRSOL) to characterize a CMOS pixel particle detector as a visible light camera. The CMOS sensor has been studied in terms of Quantum Efficiency in the visible spectrum, image blooming, and reset inefficiency in saturation conditions. The main goal of these measurements is to prove that this kind of particle detector can also be used as an ultra-fast, 100% fill factor visible light camera in solar physics experiments.
Low, slow, small target recognition based on spatial vision network
NASA Astrophysics Data System (ADS)
Cheng, Zhao; Guo, Pei; Qi, Xin
2018-03-01
Traditional photoelectric monitoring uses a large number of identical cameras. To ensure full coverage of the monitoring area, this approach requires many cameras, which leads to large overlapping and repeated coverage areas and higher costs, resulting in waste. To reduce the monitoring cost and address the difficult problem of finding, identifying, and tracking low-altitude, slow-speed, small targets, this paper presents a spatial vision network for low-slow-small target recognition. Based on the camera imaging principle and a monitoring model, the spatial vision network is modeled and optimized. Simulation experiment results demonstrate that the proposed method performs well.
A new compact, high sensitivity neutron imaging system
NASA Astrophysics Data System (ADS)
Caillaud, T.; Landoas, O.; Briat, M.; Rossé, B.; Thfoin, I.; Philippe, F.; Casner, A.; Bourgade, J. L.; Disdier, L.; Glebov, V. Yu.; Marshall, F. J.; Sangster, T. C.; Park, H. S.; Robey, H. F.; Amendt, P.
2012-10-01
We have developed a new small neutron imaging system (SNIS) diagnostic for the OMEGA laser facility. The SNIS uses a penumbral coded aperture and has been designed to record images from low yield (10⁹-10¹⁰ neutrons) implosions such as those using deuterium as the fuel. This camera was tested at OMEGA in 2009 on a rugby hohlraum energetics experiment where it recorded an image at a yield of 1.4 × 10¹⁰. The resolution of this image was 54 μm and the camera was located only 4 meters from target chamber centre. We recently improved the instrument by adding a cooled CCD camera. The sensitivity of the new camera has been fully characterized using a linear accelerator and a ⁶⁰Co γ-ray source. The calibration showed that the signal-to-noise ratio could be improved by using raw binning detection.
NASA Technical Reports Server (NTRS)
Staguhn, Johannes G.; Benford, Dominic J.; Dwek, Eli; Hilton, Gene; Fixsen, Dale J.; Irwin, Kent; Jhabvala, Christine; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.;
2014-01-01
We present the main design features for the GISMO-2 bolometer camera, which we built for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 and 2 mm atmospheric windows. The 1 mm channel uses a 32 × 40 TES-based backshort-under-grid (BUG) bolometer array; the 2 mm channel operates with a 16 × 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.
HST High Gain Antennae photographed by Electronic Still Camera
1993-12-04
S61-E-009 (4 Dec 1993) --- This view of one of two High Gain Antennae (HGA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC). The scene was downlinked to ground controllers soon after the Space Shuttle Endeavour caught up to the orbiting telescope 320 miles above Earth. Shown here before grapple, the HST was captured on December 4, 1993 in order to service the telescope. Over a period of five days, four of the seven STS-61 crew members will work in alternating pairs outside Endeavour's shirt sleeve environment. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
HST Solar Arrays photographed by Electronic Still Camera
1993-12-04
S61-E-002 (4 Dec 1993) --- This view, backdropped against the blackness of space, shows one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST). The scene was photographed from inside Endeavour's cabin with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view features the minus V-2 panel. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
HST Solar Arrays photographed by Electronic Still Camera
1993-12-04
S61-E-003 (4 Dec 1993) --- This medium close-up view of one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view shows the cell side of the minus V-2 panel. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
SWIR, VIS and LWIR observer performance against handheld objects: a comparison
NASA Astrophysics Data System (ADS)
Adomeit, Uwe
2016-10-01
The short-wave infrared (SWIR) spectral range has attracted interest for day- and night-time military and security applications in recent years. This necessitates performance assessment of SWIR imaging equipment in comparison to equipment operating in the visual (VIS) and thermal infrared (LWIR) spectral ranges. In the military context, (nominal) range is the main performance criterion. Discriminating friend from foe is one of the main tasks in today's asymmetric scenarios, and so personnel, human activities, and handheld objects are used as targets to estimate ranges. The latter were also used for an experiment at Fraunhofer IOSB to get a first impression of how the SWIR performs compared to VIS and LWIR. A human consecutively carrying one of nine different civil or military objects was recorded from five different ranges in the three spectral ranges. For the visual spectral range a 3-chip color camera was used, the SWIR range was covered by an InGaAs camera, and the LWIR by an uncooled bolometer. It was ascertained that the nominal spatial resolution of the three cameras was of the same magnitude in order to enable an unbiased assessment. Daytime conditions were selected for data acquisition to separate the observer performance from illumination conditions and, to some extent, camera performance. From the recorded data, a perception experiment was prepared. It was conducted as a nine-alternative forced choice, unlimited observation time test with 15 observers participating. Before the experiment, the observers were trained on close-range target data. The outcome of the experiment was the average probability of identification versus range between camera and target. The comparison of the range performance achieved in the three spectral bands gave a mixed result. On one hand, a ranking VIS / SWIR / LWIR in decreasing order can be seen in the data, but on the other hand, only the difference between VIS and the other bands is statistically significant. Additionally, it was not possible to explain the outcome with typical contrast metrics. Probably form is more important than contrast here, as long as the contrast is generally high enough. These results were unexpected and need further exploration.
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee
2015-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affects the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera, causing a keyhole effect. The keyhole effect reduces situation awareness, which may manifest in navigation issues such as a higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects of including or excluding the robot chassis, along with superimposing a simple arrow overlay onto the video feed, on operator task performance during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only and a combined (egocentric plus exocentric camera) view. Camera view parameters that are found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.
NASA Technical Reports Server (NTRS)
Voellmer, George M.; Jackson, Michael L.; Shirron, Peter J.; Tuttle, James G.
2002-01-01
The High Resolution Airborne Wideband Camera (HAWC) and the Submillimeter And Far Infrared Experiment (SAFIRE) will use identical Adiabatic Demagnetization Refrigerators (ADR) to cool their detectors to 200mK and 100mK, respectively. In order to minimize thermal loads on the salt pill, a Kevlar suspension system is used to hold it in place. An innovative, kinematic suspension system is presented. The suspension system is unique in that it consists of two parts that can be assembled and tensioned offline, and later bolted onto the salt pill.
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Chuang, Sherry L.
1993-01-01
Current plans indicate that there will be a large number of life science experiments carried out during the thirty-year-long mission of the Biological Flight Research Laboratory (BFRL) on board Space Station Freedom (SSF). Non-human life science experiments will be performed in the BFRL. Two distinct types of activities have already been identified for this facility: (1) collect, store, distribute, analyze and manage engineering and science data from the Habitats, Glovebox and Centrifuge, and (2) perform a broad range of remote science activities in the Glovebox and Habitat chambers in conjunction with the remotely located principal investigator (PI). These activities require extensive video coverage, viewing and/or recording and distribution to video displays on board SSF and to the ground. This paper concentrates mainly on the second type of activity. Each of the two BFRL habitat racks is designed to be configurable for either six rodent habitats per rack, four plant habitats per rack, or a combination of the above. Two video cameras will be installed in each habitat with a spare attachment for a third camera when needed. Therefore, a video system that can accommodate up to 12-18 camera inputs per habitat rack must be considered.
Development of the SEASIS instrument for SEDSAT
NASA Technical Reports Server (NTRS)
Maier, Mark W.
1996-01-01
Two SEASIS experiment objectives are key: take images that allow three-axis attitude determination and take multi-spectral images of the earth. During the tether mission it is also desirable to capture images of the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all its imagery taken during the tether mission until the earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs earth observation with a telephoto lens camera. Camera video is digitized, compressed, and stored in solid state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360 deg. azimuthal field of view by a +45 degree vertical field measured from a plane normal to the lens boresight axis. It has been shown in Mr. Mark Steadham's UAH M.S. thesis that this camera can determine three-axis attitude anytime the earth and one other recognizable celestial object (for example, the sun) is in the field of view. This will be essentially all the time during tether deployment. (2) A second camera system using a telephoto lens and filter wheel. The camera is a black and white standard video camera. The filters are chosen to cover the visible spectral bands of remote sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating point Digital Signal Processors. This processor module was supplied under ARPA contract by the Space Computer Corporation to demonstrate its use in space.
X-rays only when you want them: optimized pump–probe experiments using pseudo-single-bunch operation
Hertlein, M. P.; Scholl, A.; Cordones, A. A.; Lee, J. H.; Engelhorn, K.; Glover, T. E.; Barbrel, B.; Sun, C.; Steier, C.; Portmann, G.; Robin, D. S.
2015-01-01
Laser pump–X-ray probe experiments require control over the X-ray pulse pattern and timing. Here, the first use of pseudo-single-bunch mode at the Advanced Light Source in picosecond time-resolved X-ray absorption experiments on solutions and solids is reported. In this mode the X-ray repetition rate is fully adjustable from single shot to 500 kHz, allowing it to be matched to typical laser excitation pulse rates. Suppressing undesired X-ray pulses considerably reduces detector noise and improves signal to noise in time-resolved experiments. In addition, dose-induced sample damage is considerably reduced, easing experimental setup and allowing the investigation of less robust samples. Single-shot X-ray exposures of a streak camera detector using a conventional non-gated charge-coupled device (CCD) camera are also demonstrated. PMID:25931090
X-rays only when you want them: Optimized pump–probe experiments using pseudo-single-bunch operation
Hertlein, M. P.; Scholl, A.; Cordones, A. A.; ...
2015-04-02
Laser pump–X-ray probe experiments require control over the X-ray pulse pattern and timing. Here, the first use of pseudo-single-bunch mode at the Advanced Light Source in picosecond time-resolved X-ray absorption experiments on solutions and solids is reported. In this mode the X-ray repetition rate is fully adjustable from single shot to 500 kHz, allowing it to be matched to typical laser excitation pulse rates. Suppressing undesired X-ray pulses considerably reduces detector noise and improves signal to noise in time-resolved experiments. In addition, dose-induced sample damage is considerably reduced, easing experimental setup and allowing the investigation of less robust samples. Single-shot X-ray exposures of a streak camera detector using a conventional non-gated charge-coupled device (CCD) camera are also demonstrated.
Schlossberg, David J.; Bodner, Grant M.; Bongard, Michael W.; ...
2016-09-16
Here, a novel, cost-effective, multi-point Thomson scattering system has been designed, implemented, and operated on the Pegasus Toroidal Experiment. Leveraging advances in Nd:YAG lasers, high-efficiency volume phase holographic transmission gratings, and increased quantum-efficiency Generation 3 image-intensified charge coupled device (ICCD) cameras, the system provides Thomson spectra at eight spatial locations for a single grating/camera pair. The on-board digitization of the ICCD camera enables easy modular expansion, evidenced by recent extension from 4 to 12 plasma/background spatial location pairs. Stray light is rejected using time-of-flight methods suited to gated ICCDs, and background light is blocked during detector readout by a fast shutter. This ~10³ reduction in background light enables further expansion to up to 24 spatial locations. The implementation now provides single-shot Te(R) for ne > 5 × 10¹⁸ m⁻³.
REVIEW OF DEVELOPMENTS IN SPACE REMOTE SENSING FOR MONITORING RESOURCES.
Watkins, Allen H.; Lauer, D.T.; Bailey, G.B.; Moore, D.G.; Rohde, W.G.
1984-01-01
Space remote sensing systems are compared for suitability in assessing and monitoring the Earth's renewable resources. Systems reviewed include the Landsat Thematic Mapper (TM), the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR), the French Systeme Probatoire d'Observation de la Terre (SPOT), the German Shuttle Pallet Satellite (SPAS) Modular Optoelectronic Multispectral Scanner (MOMS), the European Space Agency (ESA) Spacelab Metric Camera, the National Aeronautics and Space Administration (NASA) Large Format Camera (LFC) and Shuttle Imaging Radar (SIR-A and -B), the Russian Meteor satellite BIK-E and fragment experiments and MKF-6M and KATE-140 camera systems, the ESA Earth Resources Satellite (ERS-1), the Japanese Marine Observation Satellite (MOS-1) and Earth Resources Satellite (JERS-1), the Canadian Radarsat, the Indian Resources Satellite (IRS), and systems proposed or planned by China, Brazil, Indonesia, and others. Also reviewed are the concepts for a 6-channel Shuttle Imaging Spectroradiometer, a 128-channel Shuttle Imaging Spectrometer Experiment (SISEX), and the U. S. Mapsat.
Measuring the spatial resolution of an optical system in an undergraduate optics laboratory
NASA Astrophysics Data System (ADS)
Leung, Calvin; Donnelly, T. D.
2017-06-01
Two methods of quantifying the spatial resolution of a camera are described, performed, and compared, with the objective of designing an imaging-system experiment for students in an undergraduate optics laboratory. With the goal of characterizing the resolution of a typical digital single-lens reflex (DSLR) camera, we motivate, introduce, and show agreement between traditional test-target contrast measurements and the technique of using Fourier analysis to obtain the modulation transfer function (MTF). The advantages and drawbacks of each method are compared. Finally, we explore the rich optical physics at work in the camera system by calculating the MTF as a function of wavelength and f-number. For example, we find that the Canon 40D demonstrates better spatial resolution at short wavelengths, in accordance with scalar diffraction theory, but is not diffraction-limited, being significantly affected by spherical aberration. The experiment and data analysis routines described here can be built and written in an undergraduate optics lab setting.
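The Fourier route to the MTF described above can be condensed into a few lines. The sketch below follows the standard edge-gradient method: differentiate an edge-spread function into a line-spread function, window it, and normalize the magnitude of its Fourier transform. Details such as slanted-edge oversampling are omitted, and the windowing choice is a generic assumption.

```python
import numpy as np

def mtf_from_edge(esf, dx=1.0):
    """Estimate the MTF from a sampled edge-spread function (ESF):
    differentiate to get the line-spread function (LSF), taper it, and
    take the normalized magnitude of its Fourier transform."""
    lsf = np.gradient(np.asarray(esf, dtype=float), dx)
    lsf = lsf * np.hanning(lsf.size)          # suppress noise at the ends
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=dx)   # cycles per unit of dx
    return freqs, mtf / mtf[0]
```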
Development of Automated Tracking System with Active Cameras for Figure Skating
NASA Astrophysics Data System (ADS)
Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi
This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. In the video images of figure skating, irregular trajectories, various postures, rapid movements, and various costume colors are included. Therefore, it is difficult to determine some features useful for image tracking. On the other hand, an ice rink has a limited area and uniform high intensity, and skating is always performed on ice. In the proposed system, an ice rink region is first extracted from a video image by the region growing method, and then, a skater region is extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed for 10 training scenes show that the skater extraction rate is approximately 98%. Thus, it was concluded that tracking with camera control was successful for almost all the cases considered in the study.
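A toy version of the camera-control loop described above might look like the following proportional controller; the gains, target fill ratio, and zoom limits are invented placeholders, and the paper's region-growing skater extraction is not reproduced here.

```python
import numpy as np

def ptz_command(bbox, frame_shape, gain=0.1, target_fill=0.25):
    """Proportional pan/tilt/zoom step that re-centers the tracked skater
    and keeps the skater region at a target fraction of the frame."""
    x, y, w, h = bbox                         # skater region in pixels
    rows, cols = frame_shape[:2]
    pan = gain * (x + w / 2 - cols / 2) / (cols / 2)   # >0 pans right
    tilt = gain * (y + h / 2 - rows / 2) / (rows / 2)  # >0 tilts down
    zoom = target_fill / max(w * h / (cols * rows), 1e-6)
    return pan, tilt, float(np.clip(zoom, 0.5, 2.0))   # relative zoom factor
```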
Visible camera imaging of plasmas in Proto-MPEX
NASA Astrophysics Data System (ADS)
Mosby, R.; Skeen, C.; Biewer, T. M.; Renfro, R.; Ray, H.; Shaw, G. C.
2015-11-01
The prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). This machine plans to study plasma-material interaction (PMI) physics relevant to future fusion reactors. Measurements of plasma light emission will be made on Proto-MPEX using fast, visible framing cameras. The cameras utilize a global shutter, which allows a full frame image of the plasma to be captured and compared at multiple times during the plasma discharge. Typical exposure times are ~10-100 microseconds. The cameras are capable of capturing images at up to 18,000 frames per second (fps). However, the frame rate is strongly dependent on the size of the "region of interest" that is sampled. The maximum ROI corresponds to the full detector area of ~1000x1000 pixels. The cameras have an internal gain, which controls the sensitivity of the 10-bit detector. The detector includes a Bayer filter for "true-color" imaging of the plasma emission. This presentation will examine the optimized camera settings for use on Proto-MPEX. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
A calibration method based on virtual large planar target for cameras with large FOV
NASA Astrophysics Data System (ADS)
Yu, Lei; Han, Yangyang; Nie, Hong; Ou, Qiaofeng; Xiong, Bangshu
2018-02-01
In order to obtain high precision in camera calibration, a target should be large enough to cover the whole field of view (FOV). For cameras with a large FOV, using a small target seriously reduces the precision of calibration. However, using a large target causes many difficulties in making, carrying, and employing it. To solve this problem, a calibration method based on a virtual large planar target (VLPT), which is virtually constructed from multiple small targets (STs), is proposed for cameras with large FOV. In the VLPT-based calibration method, first, the positions and directions of the STs are changed several times to obtain a number of calibration images. Secondly, the VLPT of each calibration image is created by finding the virtual points corresponding to the feature points of the STs. Finally, the intrinsic and extrinsic parameters of the camera are calculated using the VLPTs. Experiment results show that the proposed method not only achieves calibration precision similar to that obtained with a large target, but also has good stability over the whole measurement area. Thus, the difficulties of accurately calibrating cameras with large FOV can be effectively avoided by the proposed method, which retains good operability.
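Once the virtual point correspondences have been assembled, the final parameter estimation is a conventional planar-target calibration. The sketch below shows that last step with OpenCV's calibrateCamera; the construction of the VLPT itself (finding the virtual points that merge the small targets into one plane) is the paper's contribution and is only assumed here as precomputed input.

```python
import numpy as np
import cv2

def calibrate_from_vlpt(vlpt_points, image_points, image_size):
    """Standard calibration once the virtual large planar target is built.
    vlpt_points: per-image (N, 3) float32 arrays of virtual-target
    coordinates in a common world frame; image_points: matching (N, 1, 2)
    float32 pixel coordinates; image_size: (width, height)."""
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        vlpt_points, image_points, image_size, None, None)
    return rms, K, dist
```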
Robust estimation of simulated urinary volume from camera images under bathroom illumination.
Honda, Chizuru; Bhuiyan, Md Shoaib; Kawanaka, Haruki; Watanabe, Eiichi; Oguri, Koji
2016-08-01
General uroflowmetry methods involve the risk of nosocomial infection or the time and effort of manual recording. Medical institutions therefore need a simple and hygienic way to measure voided volume. A multiple cylindrical model that can estimate the fluid flow rate from images photographed by a camera was proposed in an earlier study. This study implemented flow rate estimation using a general-purpose camera system (Raspberry Pi Camera Module) and the multiple cylindrical model. However, large amounts of noise are generated in the extraction of the liquid region by variations in illumination when measuring in a bathroom, so the estimation error becomes very large. In other words, the previous study's camera specifications regarding shutter type and frame rate were too strict. In this study, we relax those specifications to achieve flow rate estimation with a general-purpose camera. In order to determine an appropriate approximate curve, we propose a binarization method using background subtraction at each scanning row and a curve approximation method using RANSAC. Finally, by evaluating the estimation accuracy of our experiment and comparing it with the earlier study's results, we show the effectiveness of our proposed method for flow rate estimation.
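The row-wise binarization feeds a robust curve fit; the sketch below shows a generic RANSAC parabola fit of the kind the abstract describes. The iteration count, inlier tolerance, and quadratic model are illustrative assumptions rather than the authors' tuned settings.

```python
import numpy as np

def ransac_parabola(x, y, n_iter=500, tol=2.0, rng=None):
    """Robustly fit y = a*x**2 + b*x + c, ignoring outlier pixels caused
    by illumination noise. Minimal generic RANSAC, not the authors' variant."""
    rng = rng or np.random.default_rng(0)
    x, y = np.asarray(x, float), np.asarray(y, float)
    best = None
    for _ in range(n_iter):
        idx = rng.choice(x.size, 3, replace=False)  # minimal sample
        coef = np.polyfit(x[idx], y[idx], 2)
        inliers = np.abs(np.polyval(coef, x) - y) < tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return np.polyfit(x[best], y[best], 2)          # refit on the inlier set
```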
NASA Astrophysics Data System (ADS)
Haase, I.; Oberst, J.; Scholten, F.; Wählisch, M.; Gläser, P.; Karachevtseva, I.; Robinson, M. S.
2012-05-01
Newly acquired high resolution Lunar Reconnaissance Orbiter Camera (LROC) images allow accurate determination of the coordinates of Apollo hardware, sampling stations, and photographic viewpoints. In particular, the positions from where the Apollo 17 astronauts recorded panoramic image series, at the so-called “traverse stations”, were precisely determined for traverse path reconstruction. We analyzed observations made in Apollo surface photography as well as orthorectified orbital images (0.5 m/pixel) and Digital Terrain Models (DTMs) (1.5 m/pixel and 100 m/pixel) derived from LROC Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images. Key features captured in the Apollo panoramic sequences were identified in LROC NAC orthoimages. Angular directions of these features were measured in the panoramic images and fitted to the NAC orthoimage by applying least squares techniques. As a result, we obtained the surface panoramic camera positions to within 50 cm. At the same time, the camera orientations, North azimuth angles and distances to nearby features of interest were also determined. Here, initial results are shown for traverse station 1 (northwest of Steno Crater) as well as the Apollo Lunar Surface Experiment Package (ALSEP) area.
ERIC Educational Resources Information Center
South Carolina Univ., Columbia. Dept. of Physics.
This book contains 65 physics experiments. The experiments are for a college-level physics course for music and art majors. The initial experiments are devoted to the general concept of vibration and cover vibrating strings, air columns, reflection, and interference. Later experiments explore light, color perception, cameras, mirrors and symmetry,…
Low-Cost Accelerometers for Physics Experiments
ERIC Educational Resources Information Center
Vannoni, Maurizio; Straulino, Samuele
2007-01-01
The implementation of a modern game-console controller as a data acquisition interface for physics experiments is discussed. The investigated controller is equipped with three perpendicular accelerometers and a built-in infrared camera to evaluate its own relative position. A pendulum experiment is realized as a demonstration of the proposed…
NASA Astrophysics Data System (ADS)
Ishibashi, K.; Shirai, K.; Ogawa, K.; Wada, K.; Honda, R.; Arakawa, M.; Sakatani, N.; Ikeda, Y.
2017-07-01
Deployable Camera 3-D (DCAM3-D) is a small high-resolution camera mounted on Deployable Camera 3 (DCAM3), one of the Hayabusa2 instruments. Hayabusa2 will explore asteroid 162173 Ryugu (1999 JU3) and conduct an impact experiment using a liner-shooting device called the Small Carry-on Impactor (SCI). DCAM3 will be detached from the Hayabusa2 spacecraft to observe the impact experiment. The purposes of the observation are to determine the impact conditions, to estimate the surface structure of asteroid Ryugu, and to understand the physics of impact phenomena on low-gravity bodies. DCAM3-D requires high imaging performance because it has to image and detect multiple targets of different scale and radiance, i.e., the faint SCI before the shot from a distance of 1 km, the bright ejecta generated by the impact, and the asteroid itself. In this paper we report the evaluation of the performance of the CMOS imaging sensor and the optical system of DCAM3-D. We also describe the calibration of DCAM3-D. We confirmed that the imaging performance of DCAM3-D satisfies the values required to achieve the purposes of the observation.
Diagnostics for Z-pinch implosion experiments on PTS
NASA Astrophysics Data System (ADS)
Ren, X. D.; Huang, X. B.; Zhou, S. T.; Zhang, S. Q.; Dan, J. K.; Li, J.; Cai, H. C.; Wang, K. L.; Ouyang, K.; Xu, Q.; Duan, S. C.; Chen, G. H.; Wang, M.; Feng, S. P.; Yang, L. B.; Xie, W. P.; Deng, J. J.
2014-12-01
Preliminary experiments on wire array implosion were performed on PTS, a 10 MA z-pinch driver with a 70 ns rise time. A set of diagnostics has been developed and fielded on PTS to study pinch physics and the implosion dynamics of wire arrays. Radiated power measurement for soft x-rays was performed by a multichannel filtered x-ray diode array and a flat-spectral-response x-ray diode detector. Total x-ray yield was measured by a calibrated, unfiltered nickel bolometer, which was also used to obtain pinch power. Multiple time-gated pinhole cameras were used to produce spatially resolved images of x-ray self-emission from the plasma. Two time-integrated pinhole cameras were used, respectively with a 20-μm Be filter and with multilayer mirrors, to record images produced by >1-keV and 277±5 eV self-emission. An optical streak camera was used to record radial implosion trajectories, and an x-ray streak camera paired with a horizontal slit was used to record a continuous time history of emission with one-dimensional spatial resolution. A frequency-doubled Nd:YAG laser (532 nm) was used to produce four-frame laser shadowgraph images with 6 ns time intervals. We briefly describe each of these diagnostics and present some typical results from them.
Deployable Camera (DCAM3) System for Observation of Hayabusa2 Impact Experiment
NASA Astrophysics Data System (ADS)
Sawada, Hirotaka; Ogawa, Kazunori; Shirai, Kei; Kimura, Shinichi; Hiromori, Yuichi; Mimasu, Yuya
2017-07-01
An asteroid exploration probe, "Hayabusa2", developed by the Japan Aerospace Exploration Agency (JAXA), was launched on December 3rd, 2014 to carry out complicated and accurate operations during the mission phase around the C-type asteroid 162173 Ryugu (1999 JU3) (Tsuda et al. in Acta Astron. 91:356-362, 2013). An impact experiment on the surface of the asteroid will be conducted using the Small Carry-on Impactor (SCI) system, the world's first artificial crater creation experiment on an asteroid (Saiki et al. in Proc. International Astronautical Congress, IAC-12.A3.4.8, 2012, Acta Astron. 84:227-236, 2013a; Proc. International Symposium on Space Technology and Science, 2013b). We developed a new micro Deployable CAMera (DCAM3) system for remote observation of the impact phenomenon, applying our conventional DCAM technology, one of the smallest probes in space missions, which achieved great success in the past Japanese IKAROS (Interplanetary Kite-craft Accelerated by Radiation Of the Sun) mission. DCAM3 is a miniaturized separable unit that contains two cameras and radio communication devices for transmitting image data to the mothership "Hayabusa2", and it observes the impact experiment from an unsafe region where "Hayabusa2" cannot remain because of the risk of being hit by exploding and impacting debris. In this paper, we report details of the DCAM3 system and development results, as well as our mission plan for the DCAM3 observation during the SCI experiment.
Experiment on Uav Photogrammetry and Terrestrial Laser Scanning for Ict-Integrated Construction
NASA Astrophysics Data System (ADS)
Takahashi, N.; Wakutsu, R.; Kato, T.; Wakaizumi, T.; Ooishi, T.; Matsuoka, R.
2017-08-01
In the 2016 fiscal year the Ministry of Land, Infrastructure, Transport and Tourism of Japan started a program integrating construction and ICT in earthwork and concrete placing. The new program named "i-Construction" focusing on productivity improvement adopts such new technologies as UAV photogrammetry and TLS. We report a field experiment to investigate whether the procedures of UAV photogrammetry and TLS following the standards for "i-Construction" are feasible or not. In the experiment we measured an embankment of about 80 metres by 160 metres immediately after earthwork was done on the embankment. We used two sets of UAV and camera in the experiment. One is a larger UAV enRoute Zion QC730 and its onboard camera Sony α6000. The other is a smaller UAV DJI Phantom 4 and its dedicated onboard camera. Moreover, we used a terrestrial laser scanner FARO Focus3D X330 based on the phase shift principle. The experiment results indicate that the procedures of UAV photogrammetry using a QC730 with an α6000 and TLS using a Focus3D X330 following the standards for "i-Construction" would be feasible. Furthermore, the experiment results show that UAV photogrammetry using a lower price UAV Phantom 4 was unable to satisfy the accuracy requirement for "i-Construction." The cause of the low accuracy by Phantom 4 is under investigation. We also found that the difference of image resolution on the ground would not have a great influence on the measurement accuracy in UAV photogrammetry.
Li, Jin; Liu, Zilong
2017-07-24
Remote sensing cameras in the visible/near-infrared range are essential tools in Earth observation, deep-space exploration, and celestial navigation. Their imaging performance, i.e., image quality, directly determines the target-observation performance of a spacecraft, and even the successful completion of a space mission. Unfortunately, the camera itself, including its optical system, image sensor, and electronics, limits the on-orbit imaging performance. Here, we demonstrate an on-orbit high-resolution imaging method based on the invariable modulation transfer function (IMTF) of cameras. The IMTF, which depends only on the camera itself and is stable and invariant to changes in ground targets, atmosphere, and environment on orbit or on the ground, is extracted using a pixel optical focal plane (PFP). The PFP produces multiple spatial frequency targets, which are used to calculate the IMTF at different frequencies. The extracted IMTF, in combination with a constrained least-squares filter, compensates for the IMTF, which amounts to removing the imaging effects imposed by the camera itself. This method is experimentally confirmed. Experiments on an on-orbit panchromatic camera indicate that the proposed method increases the average gradient by a factor of 6.5, the edge intensity by a factor of 3.3, and the MTF value by a factor of 1.56 compared to the case when the IMTF is not used. This opens a door to pushing past the limitations of the camera itself, enabling high-resolution on-orbit optical imaging.
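For reference, a constrained least-squares restoration of the kind mentioned above can be written in a few lines in the frequency domain. The sketch assumes the extracted IMTF has already been sampled onto the image's full FFT grid as `otf`, and the regularization weight `alpha` is a placeholder, not a value from the paper.

```python
import numpy as np

def cls_restore(image, otf, alpha=0.01):
    """Constrained least-squares restoration: F = H* G / (|H|^2 + a |P|^2),
    where H is the camera OTF/MTF on the image's FFT grid and P is the FFT
    of a Laplacian kernel acting as a smoothness constraint."""
    lap = np.zeros(image.shape)
    lap[:3, :3] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]
    P = np.fft.fft2(np.roll(lap, (-1, -1), axis=(0, 1)))  # center at (0, 0)
    G = np.fft.fft2(image)
    F = np.conj(otf) * G / (np.abs(otf) ** 2 + alpha * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F))
```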
Zero gravity tissue-culture laboratory
NASA Technical Reports Server (NTRS)
Cook, J. E.; Montgomery, P. O., Jr.; Paul, J. S.
1972-01-01
Hardware was developed for performing experiments to detect the effects that zero gravity may have on living human cells. The hardware is composed of a time-lapse camera that photographs the activity of cell specimens and an experiment module in which a variety of living-cell experiments can be performed using interchangeable modules. The experiment is scheduled for the first manned Skylab mission.
ERIC Educational Resources Information Center
School Science Review, 1983
1983-01-01
Presented are physics experiments, laboratory procedures, demonstrations, and classroom materials/activities. Experiments include: speed of sound in carbon dioxide; inverse square law; superluminal velocities; and others. Equipment includes: current switch; electronic switch; and pinhole camera. Discussion of mechanics of walking is also included.…
The sequence measurement system of the IR camera
NASA Astrophysics Data System (ADS)
Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo
2011-08-01
Currently, IR cameras are broadly used in optic-electronic tracking, optic-electronic measuring, fire control, and optic-electronic countermeasure fields, but the output timing sequences of most IR cameras applied in practice are complex, and the timing documents supplied by the manufacturer are not detailed. Because continuous image transmission and image processing systems need the detailed timing of the IR camera, a sequence measurement system for IR cameras was designed, and a detailed measurement procedure for the applied IR camera is presented. FPGA programming combined with SignalTap online observation is applied in the sequence measurement system; the precise timing of the IR camera's output signal is obtained, and detailed documentation is supplied to downstream image transmission and image processing systems. The sequence measurement system comprises a CameraLink input interface, an LVDS input interface, the FPGA, and a CameraLink output interface, of which the FPGA is the key component. Both CameraLink-style and LVDS-style video signals can be accepted, and because image processing and image memory cards usually take CameraLink as their input, the output of the sequence measurement system is also designed as a CameraLink interface. The system thus performs both timing measurement and, for some cameras, interface conversion. Inside the FPGA, the sequence measurement program, pixel clock modification, SignalTap file configuration, and SignalTap online observation are integrated to realize precise measurement of the IR camera. The measurement program, written in Verilog and combined with SignalTap online observation, counts the number of lines in one frame and the number of pixels in one line, and computes the line and row offsets of the image. Aimed at the complex timing of IR camera output signals, the system accurately measures the timing of project-applied cameras, supplies detailed documentation to downstream image processing and image transmission systems, and gives the concrete parameters fval, lval, pixclk, line offset, and row offset. Experiments show that the sequence measurement system obtains precise results and works stably, laying a foundation for the downstream systems.
A Daytime Aspect Camera for Balloon Altitudes
NASA Technical Reports Server (NTRS)
Dietz, Kurt L.; Ramsey, Brian D.; Alexander, Cheryl D.; Apple, Jeff A.; Ghosh, Kajal K.; Swift, Wesley R.; Six, N. Frank (Technical Monitor)
2001-01-01
We have designed, built, and flight-tested a new star camera for daytime guiding of pointed balloon-borne experiments at altitudes around 40 km. The camera and lens are commercially available, off-the-shelf components, but require a custom-built baffle to reduce stray light, especially near the sunlit limb of the balloon. This new camera, which operates in the 600-1000 nm region of the spectrum, successfully provided daytime aspect information of approximately 10 arcsecond resolution for two distinct star fields near the galactic plane. The detected scattered-light backgrounds show good agreement with the Air Force MODTRAN models, but the daytime stellar magnitude limit was lower than expected due to dispersion of red light by the lens. Replacing the commercial lens with a custom-built lens should allow the system to track stars in any arbitrary area of the sky during the daytime.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oldenbuerger, S.; Brandt, C.; Brochard, F.
2010-06-15
Fast visible imaging is used on a cylindrical magnetized argon plasma produced by thermionic discharge in the Mirabelle device. To link the information collected with the camera to a physical quantity, fast camera movies of plasma structures are compared to Langmuir probe measurements. High correlation is found between light fluctuations and plasma density fluctuations. Contributions from neutral argon and ionized argon to the overall light intensity are separated by using interference filters and a light intensifier. Light emitting transitions are shown to involve a metastable neutral argon state that can be excited by thermal plasma electrons, thus explaining the good correlation between light and density fluctuations. The propagation velocity of plasma structures is calculated by adapting velocimetry methods to the fast camera movies. The resulting estimates of instantaneous propagation velocity are in agreement with former experiments. The computation of mean velocities is discussed.
Depth profile measurement with lenslet images of the plenoptic camera
NASA Astrophysics Data System (ADS)
Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei
2018-03-01
An approach for carrying out depth profile measurement of an object with the plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. To begin with, these images are processed directly with a refocusing technique to obtain the depth map, which does not need to align and decode the plenoptic image. Then, a linear depth calibration is applied based on the optical structure of the plenoptic camera for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map. Unlike the traditional method, our resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement, with the results indicating that the proposed approach is both efficient and accurate.
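The linear depth calibration step reduces to fitting two coefficients. The sketch below shows a least-squares fit of z = a·d + b from a few known-depth targets; the sample numbers are invented placeholders, and the refocusing-based raw depth computation itself is not reproduced.

```python
import numpy as np

def fit_linear_depth(raw_d, true_z):
    """Least-squares fit of the linear calibration z = a*d + b relating the
    refocus-derived raw depth value d to metric depth z at known targets."""
    A = np.vstack([raw_d, np.ones_like(raw_d)]).T
    (a, b), *_ = np.linalg.lstsq(A, true_z, rcond=None)
    return a, b

# Hypothetical calibration samples (raw depth value vs. measured distance, m):
a, b = fit_linear_depth(np.array([1.2, 1.8, 2.5]), np.array([0.30, 0.45, 0.62]))
```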
Characterization and optimization for detector systems of IGRINS
NASA Astrophysics Data System (ADS)
Jeong, Ueejeong; Chun, Moo-Young; Oh, Jae Sok; Park, Chan; Yuk, In-Soo; Oh, Heeyoung; Kim, Kang-Min; Ko, Kyeong Yeon; Pavel, Michael D.; Yu, Young Sam; Jaffe, Daniel T.
2014-07-01
IGRINS (Immersion GRating INfrared Spectrometer) is a high resolution wide-band infrared spectrograph developed by the Korea Astronomy and Space Science Institute (KASI) and the University of Texas at Austin (UT). This spectrograph has H-band and K-band science cameras and a slit viewing camera, all three of which use Teledyne's λc~2.5μm 2k×2k HgCdTe HAWAII-2RG CMOS detectors. The two spectrograph cameras employ science grade detectors, while the slit viewing camera includes an engineering grade detector. Teledyne's cryogenic SIDECAR ASIC boards and JADE2 USB interface cards were installed to control those detectors. We performed experiments to characterize and optimize the detector systems in the IGRINS cryostat. We present measurements and optimization of noise, dark current, and reference-level stability obtained under dark conditions. We also discuss well depth, linearity and conversion gain measurements obtained using an external light source.
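Conversion gain is typically obtained with the photon-transfer (mean-variance) method, which the sketch below implements in a generic form; it assumes offset-subtracted, shot-noise-limited flat-field frame pairs, and is not necessarily the exact procedure used for IGRINS.

```python
import numpy as np

def conversion_gain(frame_pairs):
    """Photon-transfer estimate of conversion gain (e-/DN) from pairs of
    offset-subtracted flat-field frames at several light levels.
    Differencing each pair removes fixed-pattern noise; under shot-noise-
    limited conditions the slope of variance vs. mean equals 1/gain."""
    means, variances = [], []
    for f1, f2 in frame_pairs:
        f1, f2 = f1.astype(float), f2.astype(float)
        means.append(0.5 * (f1.mean() + f2.mean()))
        variances.append(np.var(f1 - f2) / 2.0)  # temporal variance per frame
    slope, _ = np.polyfit(means, variances, 1)
    return 1.0 / slope
```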
NASA Astrophysics Data System (ADS)
Oldenbürger, S.; Brandt, C.; Brochard, F.; Lemoine, N.; Bonhomme, G.
2010-06-01
Fast visible imaging is used on a cylindrical magnetized argon plasma produced by thermionic discharge in the Mirabelle device. To link the information collected with the camera to a physical quantity, fast camera movies of plasma structures are compared to Langmuir probe measurements. High correlation is found between light fluctuations and plasma density fluctuations. Contributions from neutral argon and ionized argon to the overall light intensity are separated by using interference filters and a light intensifier. Light emitting transitions are shown to involve a metastable neutral argon state that can be excited by thermal plasma electrons, thus explaining the good correlation between light and density fluctuations. The propagation velocity of plasma structures is calculated by adapting velocimetry methods to the fast camera movies. The resulting estimates of instantaneous propagation velocity are in agreement with former experiments. The computation of mean velocities is discussed.
A quasi-dense matching approach and its calibration application with Internet photos.
Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei
2015-03-01
This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level in multiple views with our neighboring view selection strategy. A two-layer iteration algorithm is proposed to optimize 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process of the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.
Simulation-based camera navigation training in laparoscopy-a randomized trial.
Nilsson, Cecilia; Sorensen, Jette Led; Konge, Lars; Westen, Mikkel; Stadeager, Morten; Ottesen, Bent; Bjerrum, Flemming
2017-05-01
Inexperienced operating assistants are often tasked with the important role of handling camera navigation during laparoscopic surgery. Incorrect handling can lead to poor visualization, increased operating time, and frustration for the operating surgeon-all of which can compromise patient safety. The objectives of this trial were to examine how to train laparoscopic camera navigation and to explore the transfer of skills to the operating room. A randomized, single-center superiority trial with three groups: The first group practiced simulation-based camera navigation tasks (camera group), the second group practiced performing a simulation-based cholecystectomy (procedure group), and the third group received no training (control group). Participants were surgical novices without prior laparoscopic experience. The primary outcome was assessment of camera navigation skills during a laparoscopic cholecystectomy. The secondary outcome was technical skills after training, using a previously developed model for testing camera navigational skills. The exploratory outcome measured participants' motivation toward the task as an operating assistant. Thirty-six participants were randomized. No significant difference was found in the primary outcome between the three groups (p = 0.279). The secondary outcome showed no significant difference between the intervention groups, with total times of 167 s (95% CI, 118-217) and 194 s (95% CI, 152-236) for the camera group and the procedure group, respectively (p = 0.369). Both intervention groups were significantly faster than the control group, 307 s (95% CI, 202-412), p = 0.018 and p = 0.045, respectively. On the exploratory outcome, the control group scored higher on two dimensions: interest/enjoyment (p = 0.030) and perceived choice (p = 0.033). Simulation-based training improves the technical skills required for camera navigation, regardless of whether participants practice camera navigation or the procedure itself. Transfer to the clinical setting could, however, not be demonstrated. The control group demonstrated higher interest/enjoyment and perceived choice than the camera group.
Computing camera heading: A study
NASA Astrophysics Data System (ADS)
Zhang, John Jiaxiang
2000-08-01
An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computational trouble spots beforehand, and designing reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
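The rotation invariance that the approach rests on is easy to verify numerically: a rotation R maps each projection ray r to Rr without changing the angle between any two rays. A minimal sketch, with K standing for an assumed 3×3 intrinsic matrix:

```python
import numpy as np

def visual_angle(p1, p2, K):
    """Angle between the projection rays of two image points (in pixels).
    Rays are K^-1 [u, v, 1]^T; a camera rotation rotates both rays by the
    same R and leaves this angle unchanged, so angle disparities over time
    respond to translation only."""
    Kinv = np.linalg.inv(K)
    r1 = Kinv @ np.array([p1[0], p1[1], 1.0])
    r2 = Kinv @ np.array([p2[0], p2[1], 1.0])
    cosang = r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return np.arccos(np.clip(cosang, -1.0, 1.0))
```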
Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test.
Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno
2008-11-17
The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces.
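As a rough illustration of the two corrections discussed, the sketch below divides out a radially symmetric vignetting model and computes a normalized vegetation index, whose ratio form is what cancels uniform scene illumination. The polynomial falloff model, its coefficients, and the band assignments are assumptions for illustration, not the calibration actually used in the study:

```python
import numpy as np

def correct_vignetting(img, k1, k2):
    """Divide out a radial falloff model I(r) = 1 + k1*r^2 + k2*r^4."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r2 = ((yy - cy) ** 2 + (xx - cx) ** 2) / (cy ** 2 + cx ** 2)  # normalized radius^2
    falloff = 1.0 + k1 * r2 + k2 * r2 ** 2
    return img / falloff

def ndvi(nir, red):
    """Normalized difference vegetation index; the ratio cancels uniform illumination."""
    return (nir - red) / (nir + red + 1e-9)

# Hypothetical single-band frames from the modified (NIR) and original (red) cameras.
nir = correct_vignetting(np.random.rand(480, 640) * 0.8 + 0.1, k1=-0.3, k2=0.05)
red = correct_vignetting(np.random.rand(480, 640) * 0.5 + 0.1, k1=-0.25, k2=0.04)
print(ndvi(nir, red).mean())
```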
Experience with the UKIRT InSb array camera
NASA Technical Reports Server (NTRS)
Mclean, Ian S.; Casali, Mark M.; Wright, Gillian S.; Aspin, Colin
1989-01-01
The cryogenic infrared camera, IRCAM, has been operating routinely on the 3.8 m UK Infrared Telescope on Mauna Kea, Hawaii for over two years. The camera, which uses a 62x58 element Indium Antimonide array from Santa Barbara Research Center, was designed and built at the Royal Observatory, Edinburgh which operates UKIRT on behalf of the UK Science and Engineering Research Council. Over the past two years at least 60% of the available time on UKIRT has been allocated for IRCAM observations. Described here are some of the properties of this instrument and its detector which influence astronomical performance. Observational techniques and the power of IR arrays with some recent astronomical results are discussed.
STARS: a software application for the EBEX autonomous daytime star cameras
NASA Astrophysics Data System (ADS)
Chapman, Daniel; Didier, Joy; Hanany, Shaul; Hillbrand, Seth; Limon, Michele; Miller, Amber; Reichborn-Kjennerud, Britt; Tucker, Greg; Vinokurov, Yury
2014-07-01
The E and B Experiment (EBEX) is a balloon-borne telescope designed to probe polarization signals in the CMB resulting from primordial gravitational waves, gravitational lensing, and Galactic dust emission. EBEX completed an 11-day flight over Antarctica in January 2013 and data analysis is underway. EBEX employs two star cameras to achieve its real-time and post-flight pointing requirements. We wrote a software application called STARS to operate, command, and collect data from each of the star cameras, and to interface them with the main flight computer. We paid special attention to making the software robust against potential in-flight failures. We report on the implementation, testing, and successful in-flight performance of STARS.
Visible-infrared achromatic imaging by wavefront coding with wide-angle automobile camera
NASA Astrophysics Data System (ADS)
Ohta, Mitsuhiko; Sakita, Koichi; Shimano, Takeshi; Sugiyama, Takashi; Shibasaki, Susumu
2016-09-01
We perform an experiment on achromatic imaging with wavefront coding (WFC) using a wide-angle automobile lens. Our original annular phase mask for WFC was inserted into the lens, for which the difference between the focal positions at 400 nm and at 950 nm is 0.10 mm. We acquired images of objects using a WFC camera with this lens under visible and infrared light. As a result, we confirmed that the WFC system removes the chromatic aberration. Moreover, we fabricated a demonstration set assuming the use of a night-vision camera in an automobile and showed the effect of the WFC system.
A reaction-diffusion-based coding rate control mechanism for camera sensor networks.
Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki
2010-01-01
A wireless camera sensor network is useful for surveillance and monitoring thanks to its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by the considerable volume of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rates. Through simulation and practical experiments, we verify the effectiveness of our proposal.
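For illustration only, a discrete activator-inhibitor update of the general kind the abstract alludes to can be sketched as follows. The specific reaction terms (FitzHugh-Nagumo-like), the parameters, and the mapping from activator level to coding rate are assumptions, not the authors' model:

```python
import numpy as np

def laplacian(u):
    """Discrete diffusion via 4-neighbour exchange between grid nodes (wrap-around)."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)

def step(u, v, stimulus, dt=0.05, du=1.0, dv=2.0, a=0.1, b=0.9):
    """One reaction-diffusion step: u = activator (drives coding rate), v = inhibitor.
    'stimulus' is high at nodes currently observing a target object."""
    u_new = u + dt * (du * laplacian(u) + u - u ** 3 - v + stimulus)
    v_new = v + dt * (dv * laplacian(v) + b * (u - a * v))
    return u_new, v_new

# 16x16 camera nodes; a target near the centre excites nearby sensors.
u = np.zeros((16, 16)); v = np.zeros((16, 16))
stim = np.zeros((16, 16)); stim[7:9, 7:9] = 1.0
for _ in range(200):
    u, v = step(u, v, stim)

# Map activator level to a video coding rate between 64 kb/s and 2 Mb/s.
rate_kbps = 64 + (2000 - 64) * (u - u.min()) / (u.max() - u.min() + 1e-9)
print(rate_kbps[8, 8], rate_kbps[0, 0])  # high near the target, low far away
```

The appeal of such a scheme for sensor networks is that each node only exchanges state with its neighbours, so the rate allocation pattern emerges without a central controller.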
Source Camera Identification and Blind Tamper Detections for Images
2007-04-24
...measures and image quality measures in the camera identification problem were studied in conjunction with a KNN classifier to identify the feature sets... shots varying from nature scenes to close-ups of people. We experimented with the KNN classifier (K = 5) as well as an SVM algorithm...
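The classification setup this fragmentary record describes, a KNN classifier with K = 5 and an SVM over per-image feature vectors, can be sketched as follows. The synthetic features and class structure below are stand-ins, since the report's actual feature definitions are not recoverable here:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in: each row is a feature vector (e.g. image-quality and
# wavelet-statistics measures) extracted from one photo; labels are source cameras.
rng = np.random.default_rng(0)
n_per_camera, n_features = 100, 12
offsets = rng.normal(0, 1.5, size=(3, n_features))      # per-camera "fingerprint"
X = np.vstack([off + rng.normal(0, 1, (n_per_camera, n_features)) for off in offsets])
y = np.repeat([0, 1, 2], n_per_camera)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)   # K = 5 as in the report
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("KNN accuracy:", knn.score(X_te, y_te))
print("SVM accuracy:", svm.score(X_te, y_te))
```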
Visual Positioning Indoors: Human Eyes vs. Smartphone Cameras
Wu, Dewen; Chen, Ruizhi; Chen, Liang
2017-01-01
Artificial Intelligence (AI) technologies and their related applications are now developing at a rapid pace. Indoor positioning will be one of the core technologies that enable AI applications because people spend 80% of their time indoors. Humans can locate themselves relative to a visually well-defined object, e.g., a door, based on their visual observations. Can a smartphone camera do a similar job when it points to an object? In this paper, a visual positioning solution was developed based on a single image captured from a smartphone camera pointing to a well-defined object. The smartphone camera simulates the process of the human eyes for the purpose of locating itself relative to a well-defined object. Extensive experiments were conducted with five types of smartphones in three different indoor settings, including a meeting room, a library, and a reading room. Experimental results showed that the average positioning accuracy of the solution based on five smartphone cameras is 30.6 cm, while that for the human-observed solution with 300 samples from 10 different people is 73.1 cm. PMID:29144420
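The abstract does not state the positioning algorithm, but one standard way to locate a camera from a single image of an object with known dimensions is a perspective-n-point (PnP) solve. The sketch below uses OpenCV with hypothetical door dimensions, pixel measurements, and intrinsics; it illustrates the geometry, not necessarily the paper's method:

```python
import numpy as np
import cv2

# Known 3D corner coordinates of a well-defined object (a door), in metres,
# in the object's own frame; the dimensions are hypothetical.
object_pts = np.array([[0, 0, 0], [0.9, 0, 0], [0.9, 2.0, 0], [0, 2.0, 0]],
                      dtype=np.float64)

# Pixel coordinates of the same corners in the smartphone image (hypothetical).
image_pts = np.array([[410, 820], [700, 805], [690, 160], [420, 180]],
                     dtype=np.float64)

# Intrinsics of the phone camera (assumed calibrated beforehand).
K = np.array([[1500, 0, 540], [0, 1500, 960], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
camera_position = (-R.T @ tvec).ravel()   # camera centre in the object's frame
print("camera position relative to the door (m):", camera_position)
```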
1973-08-27
S73-33164 (27 Aug. 1973) --- A close-up view of Anita, one of the two common cross spiders “Araneus diadematus” aboard Skylab, is seen in this photographic reproduction of a color television transmission made by a TV camera aboard the Skylab space station in Earth orbit. A finger of one of the Skylab 3 crewmen points to Anita. The two spiders are housed in an enclosure onto which a motion picture and still camera are attached to record the spiders’ attempts to build a web in the zero-gravity of space. The spider experiment (ED52) is one of 25 experiments selected by NASA for Skylab from more than 3,400 experiment proposals submitted by high school students throughout the nation. ED52 was submitted by 17-year-old Judith S. Miles of Lexington, Mass. Photo credit: NASA
Mapping experiment with space station
NASA Technical Reports Server (NTRS)
Wu, S. S. C.
1986-01-01
Mapping of the Earth from space stations can be approached in two areas. One is to collect gravity data for defining the topographic datum, using Earth's gravity field in terms of spherical harmonics. The other is to search for and explore techniques of mapping topography using either optical or radar images, with or without reference to ground control points. Without ground control points, an integrated camera system can be designed. With ground control points, the position of the space station (camera station) can be precisely determined at any instant. Therefore, terrestrial topography can be precisely mapped either by conventional photogrammetric methods or by current digital technology of image correlation. For the mapping experiment, it is proposed to establish four ground points either in North America or Africa (including the Sahara desert). If this experiment is successfully accomplished, it may also be applied to defense charting systems.
NASA Center for Climate Simulation (NCCS) Presentation
NASA Technical Reports Server (NTRS)
Webster, William P.
2012-01-01
The NASA Center for Climate Simulation (NCCS) offers integrated supercomputing, visualization, and data interaction technologies to enhance NASA's weather and climate prediction capabilities. It serves hundreds of users at NASA Goddard Space Flight Center, as well as other NASA centers, laboratories, and universities across the US. Over the past year, NCCS has continued expanding its data-centric computing environment to meet the increasingly data-intensive challenges of climate science. We doubled our Discover supercomputer's peak performance to more than 800 teraflops by adding 7,680 Intel Xeon Sandy Bridge processor-cores and most recently 240 Intel Xeon Phi Many Integrated Core (MIC) co-processors. A supercomputing-class analysis system named Dali gives users rapid access to their data on Discover and high-performance software including the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT), with interfaces from user desktops and a 17- by 6-foot visualization wall. NCCS also is exploring highly efficient climate data services and management with a new MapReduce/Hadoop cluster while augmenting its data distribution to the science community. Using NCCS resources, NASA completed its modeling contributions to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report this summer as part of the ongoing Coupled Model Intercomparison Project Phase 5 (CMIP5). Ensembles of simulations run on Discover reached back to the year 1000 to test model accuracy and projected climate change through the year 2300 based on four different scenarios of greenhouse gases, aerosols, and land use. The data resulting from several thousand IPCC/CMIP5 simulations, as well as a variety of other simulation, reanalysis, and observation datasets, are available to scientists and decision makers through an enhanced NCCS Earth System Grid Federation Gateway. Worldwide downloads have totaled over 110 terabytes of data.
Abla, Oussama; Weitzman, Sheila; Blay, Jean-Yves; O’Neill, Brian Patrick; Abrey, Lauren E.; Neuwelt, Edward; Doolittle, Nancy D.; Baehring, Joachim; Pradhan, Kamnesh; Martin, S. Eric; Guerrera, Michael; Shah, Shafqat; Ghesquieres, Hervé; Silver, Michael; Betensky, Rebecca A.; Batchelor, Tracy
2014-01-01
Purpose To describe the demographic and clinical features and outcomes for children and adolescents with primary CNS lymphoma (PCNSL). Experimental Design A retrospective series of children and adolescents with PCNSL was assembled from ten cancer centers in three countries. Results Twenty-nine patients with a median age of 14 years were identified. Sixteen (55%) had Eastern Cooperative Oncology Group (ECOG) performance status (PS) ≥ 1. Front-line therapy consisted of chemotherapy (CT) only in twenty patients (69%), while 9 (31%) had CT plus cranial radiotherapy. Most patients received methotrexate (MTX)-based regimens. Overall response rate was 86% (CR 69%, PR 17%). The 2-year PFS and OS rates were 61% and 86%, respectively; the 3-year OS was 82%. Univariate analyses were conducted for age (≤ 14 vs > 14 years), PS (0 or 1 vs > 1), deep brain lesions, MTX dose, primary treatment with CT alone, intrathecal chemotherapy, and high-dose therapy. Primary treatment with CT alone was associated with better overall response rates, with an OR of 0.125 (p = 0.02). There was a marginally significant relationship between higher doses of MTX and response (OR = 1.5, p = 0.06). ECOG-PS of 0–1 was the only factor associated with better outcome, with hazard ratios of 0.136 (p = 0.017) and 0.073 (p = 0.033) for PFS and OS, respectively. Conclusion This is the largest series collected of pediatric PCNSL. The outcome of children and adolescents appears to be better than in adults. PS of 0–1 is associated with better survival. PMID:21224370
Tabouret, Emeline; Houillier, Caroline; Martin-Duverneuil, Nadine; Blonski, Marie; Soussain, Carole; Ghesquières, Herve; Houot, Roch; Larrieu, Delphine; Soubeyran, Pierre; Gressin, Remy; Gyan, Emmanuel; Chinot, Olivier; Taillandier, Luc; Choquet, Sylvain; Alentorn, Agusti; Leclercq, Delphine; Omuro, Antonio; Tanguy, Marie-Laure
2017-01-01
Abstract Background. Our aim was to review MRI characteristics of patients with primary CNS lymphoma (PCNSL) enrolled in a randomized phase II trial and to evaluate their potential prognostic value and patterns of relapse, including T2 fluid attenuated inversion recovery (FLAIR) MRI abnormalities. Methods. Neuroimaging findings in 85 patients with PCNSL enrolled in a prospective trial were reviewed blinded to outcomes. MRI characteristics and responses according to International PCNSL Collaborative Group (IPCG) criteria were correlated with progression-free survival (PFS) and overall survival (OS). Results. Multivariate analysis showed that objective response at 2 months (P < .001) and at end of treatment (P = .015) were predictors of prolonged OS. Infratentorial location (P = .008) and large (>11.4 cm3) enhancing tumor volume (P = .006) were associated with poor OS and PFS, respectively. Ratio of change in product of largest diameters at early MRI evaluation but not timing of complete response achievement (early vs delayed) was prognostic for OS. Sixty-nine patients relapsed. Relapse in the brain (n = 52) involved an initial enhancing site, a different site, or both in 46%, 40%, and 14% of patients, respectively. At baseline, non-enhancing T2-FLAIR hypersignal lesions distant from the enhancing tumor site were detected in 18 patients. These lesions markedly decreased (>50%) in 16 patients after chemotherapy, supporting their neoplastic nature. Of these patients, 10/18 relapsed, half (n = 5) in the initially non-enhancing T2-FLAIR lesions. Conclusions. Baseline tumor size and infratentorial localization are of prognostic value in PCNSL. Our findings provide evidence that non-enhancing FLAIR abnormalities may add to overall tumor burden, suggesting that response criteria should be refined to incorporate evaluation of T2-weighted/FLAIR sequences. PMID:27994065
Full-Frame Reference for Test Photo of Moon
NASA Technical Reports Server (NTRS)
2005-01-01
This pair of views shows how little of the full image frame was taken up by the Moon in test images taken Sept. 8, 2005, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. The Mars-bound camera imaged Earth's Moon from a distance of about 10 million kilometers (6 million miles) away -- 26 times the distance between Earth and the Moon -- as part of an activity to test and calibrate the camera. The images are very significant because they show that the Mars Reconnaissance Orbiter spacecraft and this camera can properly operate together to collect very high-resolution images of Mars. The target must move through the camera's telescope view in just the right direction and speed to acquire a proper image. The day's test images also demonstrate that the focus mechanism works properly with the telescope to produce sharp images. Out of the 20,000-pixel-by-6,000-pixel full frame, the Moon's diameter is about 340 pixels, if the full Moon could be seen. The illuminated crescent is about 60 pixels wide, and the resolution is about 10 kilometers (6 miles) per pixel. At Mars, the entire image region will be filled with high-resolution information. The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across. The Mars Reconnaissance Orbiter mission is managed by NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, for the NASA Science Mission Directorate. Lockheed Martin Space Systems, Denver, prime contractor for the project, built the spacecraft. Ball Aerospace & Technologies Corp., Boulder, Colo., built the High Resolution Imaging Science Experiment instrument for the University of Arizona, Tucson, to provide to the mission. The HiRISE Operations Center at the University of Arizona processes images from the camera.
Hu, Wen; McCartt, Anne T
2016-09-01
In May 2007, Montgomery County, Maryland, implemented an automated speed enforcement program, with cameras allowed on residential streets with speed limits of 35 mph or lower and in school zones. In 2009, the state speed camera law increased the enforcement threshold from 11 to 12 mph over the speed limit and restricted school zone enforcement hours. In 2012, the county began using a corridor approach, in which cameras were periodically moved along the length of a roadway segment. The long-term effects of the speed camera program on travel speeds, public attitudes, and crashes were evaluated. Changes in travel speeds at camera sites from 6 months before the program began to 7½ years after were compared with changes in speeds at control sites in the nearby Virginia counties of Fairfax and Arlington. A telephone survey of Montgomery County drivers was conducted in Fall 2014 to examine attitudes and experiences related to automated speed enforcement. Using data on crashes during 2004-2013, logistic regression models examined the program's effects on the likelihood that a crash involved an incapacitating or fatal injury on camera-eligible roads and on potential spillover roads in Montgomery County, using crashes in Fairfax County on similar roads as controls. About 7½ years after the program began, speed cameras were associated with a 10% reduction in mean speeds and a 62% reduction in the likelihood that a vehicle was traveling more than 10 mph above the speed limit at camera sites. When interviewed in Fall 2014, 95% of drivers were aware of the camera program, 62% favored it, and most had received a camera ticket or knew someone else who had. The overall effect of the camera program in its modified form, including both the law change and the corridor approach, was a 39% reduction in the likelihood that a crash resulted in an incapacitating or fatal injury. Speed cameras alone were associated with a 19% reduction in the likelihood that a crash resulted in an incapacitating or fatal injury, the law change was associated with a nonsignificant 8% increase, and the corridor approach provided an additional 30% reduction over and above the cameras. This study adds to the evidence that speed cameras can reduce speeding, which can lead to reductions in speeding-related crashes and crashes involving serious injuries or fatalities.
Diffraction experiments with infrared remote controls
NASA Astrophysics Data System (ADS)
Kuhn, Jochen; Vogt, Patrik
2012-02-01
In this paper we describe an experiment in which radiation emitted by an infrared remote control is passed through a diffraction grating. An image of the diffraction pattern is captured using a cell phone camera and then used to determine the wavelength of the radiation.
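The wavelength follows from the grating equation d sin θ_m = mλ. A minimal calculation with made-up but representative numbers (grating ruling, spot offset, and screen distance are assumptions):

```python
import math

# Grating equation: d * sin(theta_m) = m * lambda.
lines_per_mm = 300                      # grating ruling (hypothetical)
d = 1e-3 / lines_per_mm                 # slit spacing in metres

# Geometry measured from the cell-phone photo (hypothetical numbers):
# a first-order spot offset x at screen distance L gives sin(theta).
x, L, m = 0.29, 1.00, 1
sin_theta = x / math.hypot(x, L)

wavelength = d * sin_theta / m
print(f"lambda ~ {wavelength * 1e9:.0f} nm")   # near-IR remotes emit around 940 nm
```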
Raduga experiment: Multizonal photographing the Earth from the Soyuz-22 spacecraft
NASA Technical Reports Server (NTRS)
Ziman, Y.; Chesnokov, Y.; Dunayev, B.; Aksenov, V.; Bykovskiy, V.; Ioaskhim, R.; Myuller, K.; Choppe, V.; Volter, V.
1980-01-01
The main results of the scientific research and 'Raduga' experiment are reported. Technical parameters are presented for the MKF-6 camera and the MSP-4 projector. Characteristics of the obtained materials and certain results of their processing are reported.
NASA Astrophysics Data System (ADS)
Vollmer, Michael; Möllmann, Klaus-Peter
2012-09-01
We present two simple demonstration experiments recorded with high-speed cameras in the fields of gas dynamics and thermal physics. The experiments feature vapour pressure effects as well as adiabatic cooling observed upon opening a bottle of champagne.
Car-Crash Experiment for the Undergraduate Laboratory
ERIC Educational Resources Information Center
Ball, Penny L.; And Others
1974-01-01
Describes an interesting, inexpensive, and highly motivating experiment to study uniform and accelerated motion by measuring the position of a car as it crashes into a rigid wall. Data are obtained from a sequence of pictures made by a high speed camera. (Author/SLH)
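The analysis in such an experiment reduces to numerically differentiating the position samples read off the film frames. A minimal sketch with synthetic constant-acceleration data (the frame rate and values are assumed, not from the article):

```python
import numpy as np

fps = 1000.0                             # high-speed camera frame rate (assumed)
t = np.arange(12) / fps                  # frame times
x = 0.5 * 9.0 * t**2                     # hypothetical positions (m), constant accel.

v = np.gradient(x, t)                    # central-difference velocity
a = np.gradient(v, t)                    # and acceleration
print("mean acceleration ~", a[1:-1].mean(), "m/s^2")
```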
Motion capture for human motion measuring by using single camera with triangle markers
NASA Astrophysics Data System (ADS)
Takahashi, Hidenori; Tanaka, Takayuki; Kaneko, Shun'ichi
2005-12-01
This study aims to realize motion capture for measuring 3D human motion with a single camera. Although motion capture using multiple cameras is widely used in the sports, medical, and engineering fields, an optical motion capture method with one camera has not been established. In this paper, the authors achieve 3D motion capture using one camera, named Mono-MoCap (MMC), on the basis of two calibration methods and triangle markers whose side lengths are known. The camera calibration yields 3D coordinate transformation parameters and a lens distortion parameter with the modified DLT method. The triangle markers make it possible to calculate the depth coordinate in the camera frame. Experiments measuring 3D position with the MMC in a cubic measurement space 2 m on each side showed that the average error in the measured center of gravity of a triangle marker was less than 2 mm. Compared with conventional motion capture using multiple cameras, the MMC has sufficient accuracy for 3D measurement. Also, by putting a triangle marker on each human joint, the MMC was able to capture a walking motion, a standing-up motion, and a bending and stretching motion. In addition, a method using a triangle marker together with conventional spherical markers was proposed. Finally, a method to estimate the position of a marker by measuring its velocity was proposed in order to improve the accuracy of the MMC.
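One way to make the depth recovery concrete: with known side lengths, the three vertex depths along the viewing rays satisfy ||d_i r_i - d_j r_j|| = L_ij, which can be solved numerically. The sketch below is an illustration under that formulation, not necessarily the authors' solution method; the simulated marker geometry is invented:

```python
import numpy as np
from scipy.optimize import least_squares

def marker_depths(rays, side_lengths, d0=1.0):
    """Solve ||d_i * r_i - d_j * r_j|| = L_ij for depths d along unit viewing rays."""
    rays = rays / np.linalg.norm(rays, axis=1, keepdims=True)
    pairs = [(0, 1), (1, 2), (2, 0)]

    def residual(d):
        return [np.linalg.norm(d[i] * rays[i] - d[j] * rays[j]) - L
                for (i, j), L in zip(pairs, side_lengths)]

    return least_squares(residual, x0=[d0] * 3).x

# Simulated marker: vertices of a triangle with 50 mm sides at about 1.2 m depth.
side = 0.05
verts = np.array([[0, 0, 1.2], [side, 0, 1.2],
                  [side / 2, side * np.sqrt(3) / 2, 1.2]])
rays = verts                                   # rays through a pinhole at the origin
L = [side, side, side]                         # known side lengths

print(marker_depths(rays, L))                  # recovered depths of the three vertices
```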
A novel dual-camera calibration method for 3D optical measurement
NASA Astrophysics Data System (ADS)
Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang
2018-05-01
A novel dual-camera calibration method is presented. In the classic methods, the camera parameters are usually calculated and optimized against the reprojection error. However, for a system designed for 3D optical measurement, this error does not reflect the quality of the 3D reconstruction. In the presented method, a planar calibration plate is used. First, images of the calibration plate are captured from several orientations in the measurement range, and the initial parameters of the two cameras are obtained from these images. Then, the rotation and translation matrices that link the frames of the two cameras are calculated using the Centroid Distance Increment Matrix method, which reduces the degree of coupling between the parameters. Next, the 3D coordinates of the calibration points are reconstructed by the space intersection method. Finally, the reconstruction error is calculated and minimized to optimize the calibration parameters. This error directly indicates the quality of the 3D reconstruction, and is thus better suited for assessing a dual-camera calibration. The experiments show that the proposed method is convenient and accurate: there is no strict requirement on the calibration plate position during calibration, and the accuracy is improved significantly.
NASA Astrophysics Data System (ADS)
Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.
2015-02-01
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high energy density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
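A thin-plate-spline warp of this kind can be expressed with SciPy's RBFInterpolator. The sketch below fits a TPS mapping from (synthetic) distorted comb fiducial positions to their ideal positions and applies it to measured points; it is a toy illustration of the technique, not the production NIF algorithm:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Fiducials: where the comb pulses appear on the streak record (distorted)
# and where they should be on an ideal, linear record. Values are synthetic.
ideal = np.array([[x, y] for x in np.linspace(0, 100, 6)
                         for y in np.linspace(0, 50, 5)], dtype=float)
distorted = ideal + 0.002 * ideal[:, :1] ** 2 + np.array([0.5, -0.3])  # toy warp

# Thin-plate-spline mapping: distorted coordinates -> corrected coordinates.
warp = RBFInterpolator(distorted, ideal, kernel="thin_plate_spline")

# Apply the correction to arbitrary measured points from a data image.
measured = np.array([[40.0, 20.0], [80.0, 35.0]])
print(warp(measured))
```

The appeal of TPS here is that it interpolates the fiducials exactly while minimizing a bending-energy measure, so the correction stays smooth between comb pulses.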
NASA Astrophysics Data System (ADS)
Haubeck, K.; Prinz, T.
2013-08-01
The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and -objects using UAV-attached digital small frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs but, when an accurate nadir-waypoint flight is not possible due to choppy or windy weather conditions, carry the problem that two single aerial images do not always have the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from a slightly different angle at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g. the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photobase of the applied stereo camera and the resulting base-height ratio, however, the accuracy of the DTM directly depends on the UAV flight altitude.
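The altitude dependence can be made concrete with the standard stereo error propagation dZ ~ Z^2 * dp / (B * f): for a fixed, short base B, the depth error grows quadratically with flying height. A quick calculation with assumed rig parameters (base, focal length, and matching precision are not from the paper):

```python
# Standard stereo depth-error propagation: dZ ~ Z^2 * dp / (B * f).
B = 0.12          # stereo base of the low-cost rig, metres (assumed)
f_px = 2400       # focal length in pixels (assumed)
dp = 0.5          # parallax measurement uncertainty, pixels (assumed)

for Z in (10, 20, 40):                       # UAV flight altitudes in metres
    dZ = Z**2 * dp / (B * f_px)
    print(f"altitude {Z:3d} m -> depth error ~ {dZ:.2f} m")
```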
ERIC Educational Resources Information Center
Physics Education, 1988
1988-01-01
Described are five physics activities including two superconductor projects, synchronizing video camera movements, electrical analysis of a bicycle, and apparatus for the measurement of thermal conductivity. (YP)
NASA Astrophysics Data System (ADS)
Ćwiok, M.; Dominik, W.; Małek, K.; Mankiewicz, L.; Mrowca-Ciułacz, J.; Nawrocki, K.; Piotrowski, L. W.; Sitek, P.; Sokołowski, M.; Wrochna, G.; Żarnecki, A. F.
2007-06-01
Experiment “Pi of the Sky” is designed to search for prompt optical emission from GRB sources. 32 CCD cameras covering 2 steradians will monitor the sky continuously. The data will be analysed on-line in search of optical flashes. The prototype with 2 cameras, operating at Las Campanas (Chile) since 2004, has recognised several outbursts of flaring stars and has given limits for a few GRBs.
STS-49 MS Hieb changes ESC batteries on the middeck of OV-105
1992-05-08
STS049-S-218 (8 May 1992) --- Astronaut Richard J. Hieb, on Endeavour's middeck, changes batteries on the electronic still camera to begin a series of snapshots with the experiment, a detailed test objective. DTO 648 is making its fourth flight into space. At various times during the week-long mission, crewmembers will downlink images from the camera. The scene was recorded at 16:51:15:05 GMT, May 8, 1992.
ETR COMPRESSOR BUILDING, TRA643. CAMERA FACES NORTH. AIR HEATERS LINE ...
ETR COMPRESSOR BUILDING, TRA-643. CAMERA FACES NORTH. AIR HEATERS LINE UP AGAINST WALL, TO BE USED IN CONNECTION WITH ETR EXPERIMENTS. EACH HAD A HEAT OUTPUT OF 8 MILLION BTU PER HOUR, OPERATED AT 1260 DEGREES F. AND A PRESSURE OF 320 PSI. NOTE METAL WALLS AND ROOF. INL NEGATIVE NO. 56-3709. R.G. Larsen, Photographer, 11/13/1956 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.
Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L
2015-06-01
Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro(®) 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.
NASA Astrophysics Data System (ADS)
Arakawa, M.; Wada, K.; Saiki, T.; Kadono, T.; Takagi, Y.; Shirai, K.; Okamoto, C.; Yano, H.; Hayakawa, M.; Nakazawa, S.; Hirata, N.; Kobayashi, M.; Michel, P.; Jutzi, M.; Imamura, H.; Ogawa, K.; Sakatani, N.; Iijima, Y.; Honda, R.; Ishibashi, K.; Hayakawa, H.; Sawada, H.
2017-07-01
The Small Carry-on Impactor (SCI) equipped on Hayabusa2 was developed to produce an artificial impact crater on the primitive Near-Earth Asteroid (NEA) 162173 Ryugu (Ryugu) in order to explore the asteroid subsurface material unaffected by space weathering and thermal alteration by solar radiation. A fresh surface exposed by the impactor and/or the ejecta deposit excavated from the crater will be observed by remote sensing instruments, and a subsurface fresh sample of the asteroid will be collected there. The SCI impact experiment will be observed by a Deployable CAMera 3-D (DCAM3-D) at a distance of ~1 km from the impact point, and the time evolution of the ejecta curtain will be observed by this camera to confirm the impact point on the asteroid surface. As a result of the observation of the ejecta curtain by DCAM3-D and of the crater morphology by onboard cameras, the subsurface structure and the physical properties of the constituting materials will be derived from crater scaling laws. Moreover, the SCI experiment on Ryugu gives us a precious opportunity to clarify the effects of microgravity on the cratering process and to validate numerical simulations and models of the cratering process.
Trained neurons-based motion detection in optical camera communications
NASA Astrophysics Data System (ADS)
Teli, Shivani; Cahyadi, Willy Anugrah; Chung, Yeon Ho
2018-04-01
A concept of trained neurons-based motion detection (TNMD) in optical camera communications (OCC) is proposed. The proposed TNMD is based on neurons present in a neural network that perform repetitive analysis in order to provide efficient and reliable motion detection in OCC. This efficient motion detection can be considered another functionality of OCC, in addition to the two traditional functionalities of illumination and communication. To verify the proposed TNMD, experiments were conducted in an indoor static downlink OCC, where a mobile phone front camera is employed as the receiver and an 8 × 8 red, green, and blue (RGB) light-emitting diode array as the transmitter. The motion is detected by observing the user's finger movement in the form of a centroid through the OCC link via the camera. Unlike conventional trained-neuron approaches, the proposed TNMD is trained not with motion itself but with centroid data samples, thus providing more accurate detection and a far less complex detection algorithm. The experimental results demonstrate that the TNMD can detect all considered motions accurately with acceptable bit error rate (BER) performance at transmission distances of up to 175 cm. In addition, while the TNMD is performed, a maximum data rate of 3.759 kbps over the OCC link is obtained. The OCC system with the proposed TNMD can be considered an efficient indoor OCC system that provides illumination, communication, and motion detection in a convenient smart home environment.
NASA Astrophysics Data System (ADS)
Nikolashkin, S. V.; Reshetnikov, A. A.
2017-11-01
The system of video surveillance during active rocket experiments at the Polar Geophysical Observatory "Tixie", and for studies of the effects of "Soyuz" vehicle launches from the "Vostochny" cosmodrome over the territory of the Republic of Sakha (Yakutia), is presented. The system consists of three AHD video cameras with different angles of view mounted on a common platform on a tripod, with the possibility of manual guiding. The main camera, with a high-sensitivity black-and-white CCD matrix (SONY EXview HADII), is equipped, depending on the task, with an "MTO-1000" (F = 1000 mm) or "Jupiter-21M" (F = 300 mm) lens and is designed for more detailed imaging of luminous formations. The second camera is of the same type but has a 30-degree angle of view; it is intended for wide-view imaging of large objects and for referencing object coordinates to the stars. The third, color wide-angle camera (120 degrees) is designed for referencing to landmarks in the daytime; the optical axis of this channel is directed 60 degrees downward. The data are recorded on the hard disk of a four-channel digital video recorder. Tests of the original two-channel version of the system were conducted during a geophysical rocket launch in Tixie in September 2015 and showed its effectiveness.
ETTF - Extreme Temperature Translation Furnace experiment
1996-09-23
STS79-E-5275 (16 - 26 September 1996) --- Aboard the Spacehab double module in the Space Shuttle Atlantis' cargo bay, astronaut Jerome (Jay) Apt, mission specialist, checks a sample from the Extreme Temperature Translation Furnace (ETTF) experiment. The photograph was taken with the Electronic Still Camera (ESC).
NASA Astrophysics Data System (ADS)
Kröhnert, M.; Anderson, R.; Bumberger, J.; Dietrich, P.; Harpole, W. S.; Maas, H.-G.
2018-05-01
Grassland ecology experiments in remote locations requiring quantitative analysis of the biomass in defined plots are becoming increasingly widespread, but are still limited by manual sampling methodologies. To provide a cost-effective automated solution for biomass determination, several photogrammetric techniques are examined for generating 3D point cloud representations of plots as a basis for estimating aboveground biomass, a key ecosystem variable used in many experiments. Methods investigated include Structure from Motion (SfM) techniques for camera pose estimation with posterior dense matching, as well as the use of a Time of Flight (TOF) 3D camera, a laser light sheet triangulation system, and a coded light projection system. In this context, plants at small scales (herbage) and medium scales are observed. In the first pilot study presented here, the best results were obtained by applying dense matching after SfM, which is ideal for integration into distributed experiment networks.
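As a rough illustration of turning such a point cloud into a biomass proxy, the sketch below rasterizes a plot into ground cells and integrates canopy height. The cell size, the flat-ground assumption, and the volume-as-proxy step (which would still need a regression against harvested biomass) are all assumptions, not the paper's processing chain:

```python
import numpy as np

def canopy_volume(points, cell=0.05):
    """Crude biomass proxy: rasterize a plot's point cloud into ground cells and
    integrate the per-cell maximum height above the ground plane (z = 0)."""
    x, y, z = points.T
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    heights = np.zeros((ix.max() + 1, iy.max() + 1))
    np.maximum.at(heights, (ix, iy), np.clip(z, 0, None))
    return heights.sum() * cell**2          # m^3 of canopy "volume"

# Synthetic 1 m x 1 m plot with grass up to about 0.3 m tall.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(0, 1, 20000),
                       rng.uniform(0, 1, 20000),
                       rng.uniform(0, 0.3, 20000)])
print(f"canopy volume proxy: {canopy_volume(pts):.3f} m^3")
```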
View of the Columbia's remote manipulator system
1982-03-30
STS003-09-444 (22-30 March 1982) --- The darkness of space provides the backdrop for this scene of the plasma diagnostics package (PDP) experiment in the grasp of the end effector or “hand” of the remote manipulator system (RMS) arm, and other components of the Office of Space Sciences (OSS-1) package in the aft section of the Columbia’s cargo hold. The PDP is a compact, comprehensive assembly of electromagnetic and particle sensors that will be used to study the interaction of the orbiter with its surrounding environment; to test the capabilities of the shuttle’s remote manipulator system; and to carry out experiments in conjunction with the fast pulse electron generator of the vehicle charging and potential experiment, another experiment on the OSS-1 payload pallet. This photograph was exposed by the astronaut crew of STS-3 with a 70mm handheld camera aimed through the flight deck’s aft window. Photo credit: NASA
NASA Astrophysics Data System (ADS)
Donovan, D. C.; Buchenauer, D. A.; Watkins, J. G.; Leonard, A. W.; Lasnier, C. J.; Stangeby, P. C.
2011-10-01
The sheath power transmission factor (SPTF) is examined in DIII-D with a new IR camera, a more thermally robust Langmuir probe array, fast thermocouples, and a unique probe configuration on the Divertor Materials Evaluation System (DiMES). Past data collected from the fixed Langmuir Probes and Infrared Camera on DIII-D have indicated a SPTF near 1 at the strike point. Theory indicates that the SPTF should be approximately 7 and cannot be less than 5. SPTF values are calculated using independent measurements from the IR camera and fast thermocouples. Experiments have been performed with varying levels of electron cyclotron heating and neutral beam power. The ECH power does not involve fast ions, so the SPTF can be calculated and compared to previous experiments to determine the extent to which fast ions may be influencing the SPTF measurements, and potentially offer insight into the disagreement with the theory. Work supported in part by US DOE under DE-AC04-94AL85000, DE-FC02-04ER54698, and DE-AC52-07NA27344.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, D.E.; Roeske, F.
We have successfully fielded a Fiber Optics Radiation Experiment system (FOREX) designed for measuring material properties at high temperatures and pressures in an underground nuclear test. The system collects light from radiating materials and transmits it through several hundred meters of optical fibers to a recording station consisting of a streak camera with film readout. The use of fiber optics provides a faster time response than can presently be obtained with equalized coaxial cables over comparable distances. Fibers also have significant cost and physical size advantages over coax cables. The streak camera achieves a much higher information density than an equivalent oscilloscope system, and it also serves as the light detector. The result is a wide bandwidth high capacity system that can be fielded at a relatively low cost in manpower, space, and materials. For this experiment, the streak camera had a 120 ns time window with a 1.2 ns time resolution. Dynamic range for the system was about 1000. Beam current statistical limitations were approximately 8% for a 0.3 ns wide data point at one decade above the threshold recording intensity.
FOREX-A Fiber Optics Diagnostic System For Study Of Materials At High Temperatures And Pressures
NASA Astrophysics Data System (ADS)
Smith, D. E.; Roeske, F.
1983-03-01
We have successfully fielded a Fiber Optics Radiation EXperiment system (FOREX) designed for measuring material properties at high temperatures and pressures on an underground nuclear test. The system collects light from radiating materials and transmits it through several hundred meters of optical fibers to a recording station consisting of a streak camera with film readout. The use of fiber optics provides a faster time response than can presently be obtained with equalized coaxial cables over comparable distances. Fibers also have significant cost and physical size advantages over coax cables. The streak camera achieves a much higher information density than an equivalent oscilloscope system, and it also serves as the light detector. The result is a wide bandwidth high capacity system that can be fielded at a relatively low cost in manpower, space, and materials. For this experiment, the streak camera had a 120 ns time window with a 1.2 ns time resolution. Dynamic range for the system was about 1000. Beam current statistical limitations were approximately 8% for a 0.3 ns wide data point at one decade above the threshold recording intensity.
Yi, Shengzhen; Zhang, Zhe; Huang, Qiushi; Zhang, Zhong; Mu, Baozhong; Wang, Zhanshan; Fang, Zhiheng; Wang, Wei; Fu, Sizu
2016-10-01
Because grazing-incidence Kirkpatrick-Baez (KB) microscopes have better resolution and collection efficiency than pinhole cameras, they have been widely used for x-ray imaging diagnostics of laser inertial confinement fusion. The assembly and adjustment of a multichannel KB microscope must meet stringent requirements for image resolution and reproducible alignment. In the present study, an eight-channel KB microscope was developed for diagnostics by imaging self-emission x-rays with a framing camera at the Shenguang-II Update (SGII-Update) laser facility. A consistent object field of view is ensured in the eight channels using an assembly method based on conical reference cones, which also allow the intervals between the eight images to be tuned to couple with the microstrips of the x-ray framing camera. The eight-channel KB microscope was adjusted via real-time x-ray imaging experiments in the laboratory. This paper describes the details of the eight-channel KB microscope, its optical and multilayer design, the assembly and alignment methods, and results of imaging in the laboratory and at the SGII-Update.
Optimization of Close Range Photogrammetry Network Design Applying Fuzzy Computation
NASA Astrophysics Data System (ADS)
Aminia, A. S.
2017-09-01
Measuring object 3D coordinates with optimum accuracy is one of the most important issues in close range photogrammetry. In this context, network design plays an important role in determining the optimum positions of imaging stations. This is, however, not a trivial task, due to various geometric and radiometric constraints affecting the quality of the measurement network. As a result, most camera stations in the network are defined on a trial-and-error basis, drawing on the user's experience and a generic network concept. In this paper, we propose a post-processing step that investigates the quality of camera positions right after image capture, to achieve the best result. To do this, a new fuzzy reasoning approach is adopted, in which the constraints affecting the network design are all modeled. The quality of every camera location is then assessed with fuzzy rules and inappropriate stations are identified. The experiments carried out show that after determining and eliminating the inappropriate images using the proposed fuzzy reasoning system, the accuracy of the measurements improves by about 17% for the resulting network.
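A minimal sketch of the flavor of fuzzy scoring involved, with triangular memberships and a Mamdani-style min-AND over three constraints; the membership shapes, thresholds, and rules are invented for illustration and are not the paper's rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def station_quality(incidence_deg, distance_m, coverage):
    """Fuzzy score for one camera station from three design constraints.
    Assumed rules: good stations have a moderate incidence angle, short-to-medium
    distance, and high coverage of the object."""
    angle_ok = tri(incidence_deg, 10, 45, 80)     # too shallow or too steep is bad
    dist_ok = tri(distance_m, 0.5, 2.0, 6.0)      # close enough for resolution
    cover_ok = min(coverage / 0.8, 1.0)           # saturating coverage term
    # AND of the rule antecedents via min (Mamdani-style), giving a crisp score.
    return min(angle_ok, dist_ok, cover_ok)

for station in [(40, 2.5, 0.9), (85, 1.0, 0.9), (45, 8.0, 0.6)]:
    print(station, "->", round(station_quality(*station), 2))
```

Stations whose score falls to zero (the second and third above) would be the "inappropriate" ones eliminated before re-running the bundle adjustment.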
View of the Russian BIO-5 Rasteniya-2/Lada-2 (Plants-2) plant growth experiment in the SM
2003-03-12
ISS006-E-44999 (12 March 2003) --- A view of the Russian BIO-5 Rasteniya-2/Lada-2 (Plants-2) plant growth experiment located in the Zvezda Service Module on the International Space Station (ISS). A camera used for recording progress of the experiment is visible on the right.
Miniaturized Autonomous Extravehicular Robotic Camera (Mini AERCam)
NASA Technical Reports Server (NTRS)
Fredrickson, Steven E.
2001-01-01
The NASA Johnson Space Center (JSC) Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a low-volume, low-mass free-flying camera system. AERCam project team personnel recently initiated development of a miniaturized version of AERCam known as Mini AERCam. The Mini AERCam target design is a spherical "nanosatellite" free-flyer 7.5 inches in diameter and weighing 10 pounds. Mini AERCam is building on the success of the AERCam Sprint STS-87 flight experiment by adding new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving enhanced capability in a smaller package depends on applying miniaturization technology across virtually all subsystems. Technology innovations being incorporated include micro electromechanical system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, a rechargeable xenon gas propulsion system, a rechargeable lithium ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximate flight-like configuration for demonstration on an airbearing table. A pilot-in-the-loop and hardware-in-the-loop simulation to simulate on-orbit navigation and dynamics will complement the airbearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides beneficial on-orbit views unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by EVA crewmembers.
Optimum color filters for CCD digital cameras
NASA Astrophysics Data System (ADS)
Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl
1993-12-01
As part of the ESPRIT II project No. 2103 (MASCOT) a high performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k X 3k full color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization which minimized the perceivable color errors as measured in the 1976 CIELUV uniform color space for a set of about 200 carefully selected test colors. The color filters were designed to allow perfect colorimetric reproduction in principle and at the same time with imperceptible color noise and with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data, the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in the redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems to be feasible, implying that it is possible with such an optimized color camera to achieve such a high colorimetric performance that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.
CCD imaging system for the EUV solar telescope
NASA Astrophysics Data System (ADS)
Gong, Yan; Song, Qian; Ye, Bing-Xun
2006-01-01
In order to develop a detector suited to the space solar telescope, we have built a CCD camera system capable of working in the extreme ultraviolet (EUV) band. It is composed of a phosphor screen, an intensifier using a photocathode/micro-channel plate (MCP)/phosphor stack, an optical taper, and a front-illuminated (FI) CCD chip without a screen window, all bonded together with optical glue. The working principle of the camera system is presented; moreover, we employed a mesh experiment to calibrate and test the CCD camera system in the 15-24 nm band, and a position resolution of about 19 μm was obtained at the wavelengths of 17.1 nm and 19.5 nm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacPhee, A. G., E-mail: macphee2@llnl.gov; Hatch, B. W.; Bell, P. M.
2016-11-15
We report simulations and experiments that demonstrate an increase in spatial resolution of the NIF core diagnostic x-ray streak cameras by at least a factor of two, especially off axis. A design was achieved by using a corrector electron optic to flatten the field curvature at the detector plane and corroborated by measurement. In addition, particle in cell simulations were performed to identify the regions in the streak camera that contribute the most to space charge blurring. These simulations provide a tool for convolving synthetic pre-shot spectra with the instrument function so signal levels can be set to maximize dynamic range for the relevant part of the streak record.
Semantic Information Extraction of Lanes Based on Onboard Camera Videos
NASA Astrophysics Data System (ADS)
Tang, L.; Deng, T.; Ren, C.
2018-04-01
In the field of autonomous driving, semantic information of lanes is very important. This paper proposes a method of automatic detection of lanes and extraction of semantic information from onboard camera videos. The proposed method firstly detects the edges of lanes by the grayscale gradient direction, and improves the Probabilistic Hough transform to fit them; then, it uses the vanishing point principle to calculate the lane geometrical position, and uses lane characteristics to extract lane semantic information by the classification of decision trees. In the experiment, 216 road video images captured by a camera mounted onboard a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
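For orientation, a baseline version of the edge-plus-probabilistic-Hough stage (without the paper's improvements, vanishing-point step, or decision-tree classification) might look like the sketch below; the thresholds, region of interest, and file name are assumptions:

```python
import cv2
import numpy as np

def detect_lane_segments(frame_bgr):
    """Baseline lane-line detection: edge map + probabilistic Hough transform."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 60, 180)

    # Keep only a trapezoidal region of interest ahead of the vehicle.
    h, w = edges.shape
    roi = np.zeros_like(edges)
    poly = np.array([[(0, h), (w, h), (int(0.6 * w), int(0.6 * h)),
                      (int(0.4 * w), int(0.6 * h))]], dtype=np.int32)
    cv2.fillPoly(roi, poly, 255)
    edges &= roi

    # The probabilistic Hough transform fits line segments to the edge pixels.
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                               minLineLength=40, maxLineGap=20)
    return [] if segments is None else segments.reshape(-1, 4)

frame = cv2.imread("onboard_frame.jpg")        # hypothetical video frame
if frame is not None:
    for x1, y1, x2, y2 in detect_lane_segments(frame):
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
```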
Jin, Xin; Liu, Li; Chen, Yanqin; Dai, Qionghai
2017-05-01
This paper derives a mathematical point spread function (PSF) and a depth-invariant focal sweep point spread function (FSPSF) for plenoptic camera 2.0. The derivation of the PSF is based on the Fresnel diffraction equation and an image formation analysis of a self-built imaging system, which is divided into two sub-systems to reflect the relay imaging properties of plenoptic camera 2.0. The variations in the PSF caused by changes in object depth and sensor position are analyzed. A mathematical model of the FSPSF is further derived and verified to be depth-invariant. Experiments on real imaging systems demonstrate the consistency between the proposed PSF and the actual imaging results.
MacPhee, A G; Dymoke-Bradshaw, A K L; Hares, J D; Hassett, J; Hatch, B W; Meadowcroft, A L; Bell, P M; Bradley, D K; Datte, P S; Landen, O L; Palmer, N E; Piston, K W; Rekow, V V; Hilsabeck, T J; Kilkenny, J D
2016-11-01
We report simulations and experiments that demonstrate an increase in spatial resolution of the NIF core diagnostic x-ray streak cameras by at least a factor of two, especially off axis. A design was achieved by using a corrector electron optic to flatten the field curvature at the detector plane and corroborated by measurement. In addition, particle in cell simulations were performed to identify the regions in the streak camera that contribute the most to space charge blurring. These simulations provide a tool for convolving synthetic pre-shot spectra with the instrument function so signal levels can be set to maximize dynamic range for the relevant part of the streak record.
Measuring the Temperature of the Ithaca College MOT Cloud using a CMOS Camera
NASA Astrophysics Data System (ADS)
Smucker, Jonathan; Thompson, Bruce
2015-03-01
We present our work on measuring the temperature of Rubidium atoms cooled using a magneto-optical trap (MOT). The MOT uses laser trapping methods and Doppler cooling to trap and cool Rubidium atoms to form a cloud that is visible to a CMOS Camera. The Rubidium atoms are cooled further using optical molasses cooling after they are released from the trap (by removing the magnetic field). In order to measure the temperature of the MOT we take pictures of the cloud using a CMOS camera as it expands and calculate the temperature based on the free expansion of the cloud. Results from the experiment will be presented along with a summary of the method used.
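The underlying time-of-flight analysis is standard: the cloud width follows sigma^2(t) = sigma_0^2 + (k_B T / m) t^2, so the temperature comes from a linear fit of sigma^2 against t^2. A sketch with synthetic widths (the numbers below are made up for illustration, not Ithaca College data):

```python
import numpy as np

kB = 1.380649e-23           # Boltzmann constant, J/K
m_Rb87 = 1.443e-25          # mass of 87Rb, kg

# Gaussian cloud widths (m) measured from camera frames at expansion times t (s).
t = np.array([2, 4, 6, 8, 10, 12]) * 1e-3
sigma = np.array([0.62, 0.78, 0.98, 1.22, 1.47, 1.73]) * 1e-3   # synthetic data

# Ballistic expansion: sigma^2(t) = sigma0^2 + (kB*T/m) * t^2,
# so the slope of sigma^2 vs t^2 gives the temperature.
slope, intercept = np.polyfit(t**2, sigma**2, 1)
T = slope * m_Rb87 / kB
print(f"T ~ {T * 1e6:.0f} microkelvin, initial width {np.sqrt(intercept) * 1e3:.2f} mm")
```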
Pham, Quang Duc; Hayasaki, Yoshio
2015-01-01
We demonstrate an optical frequency comb profilometer with a single-pixel camera that measures the position and profile of an object's surface over a range far exceeding the light wavelength, without 2π phase ambiguity. The present configuration of the single-pixel camera can perform the profilometry with an axial resolution of 3.4 μm at 1 GHz operation, corresponding to a wavelength of 30 cm. The axial dynamic range was therefore increased to 0.87×10^5. It was found from the experiments and computer simulations that the improvement derives from the higher modulation contrast of the digital micromirror device. The frame rate was also increased to 20 Hz.
Three-dimensional and multienergy gamma-ray simultaneous imaging by using a Si/CdTe Compton camera.
Suzuki, Yoshiyuki; Yamaguchi, Mitsutaka; Odaka, Hirokazu; Shimada, Hirofumi; Yoshida, Yukari; Torikai, Kota; Satoh, Takahiro; Arakawa, Kazuo; Kawachi, Naoki; Watanabe, Shigeki; Takeda, Shin'ichiro; Ishikawa, Shin-nosuke; Aono, Hiroyuki; Watanabe, Shin; Takahashi, Tadayuki; Nakano, Takashi
2013-06-01
To develop a silicon (Si) and cadmium telluride (CdTe) imaging Compton camera for biomedical application on the basis of technologies used for astrophysical observation, and to test its capacity to perform three-dimensional (3D) imaging. All animal experiments were performed according to the guidelines of the Animal Care and Experimentation Committee (Gunma University, Maebashi, Japan). Fluorine 18 fluorodeoxyglucose (FDG), iodine 131 ((131)I) methylnorcholestenol, and gallium 67 ((67)Ga) citrate, separately compacted into micro tubes, were inserted subcutaneously into a Wistar rat, and the distribution of the radioisotope compounds was determined with 3D imaging by using the Compton camera after the rat was sacrificed (ex vivo model). In a separate experiment, indium 111 ((111)In) chloride and (131)I-methylnorcholestenol were injected into a rat intravenously, and copper 64 ((64)Cu) chloride was administered into the stomach orally just before imaging. The isotope distributions were determined with 3D imaging after sacrifice by means of the list-mode expectation-maximization maximum-likelihood method. The Si/CdTe Compton camera demonstrated its 3D multinuclear imaging capability by clearly separating the distributions of FDG, (131)I-methylnorcholestenol, and (67)Ga-citrate in the test-tube-implanted ex vivo model. In the more physiologic model with tail vein injection prior to sacrifice, the distributions of (131)I-methylnorcholestenol and (64)Cu-chloride were demonstrated with 3D imaging, and the difference in distribution of the two isotopes was successfully imaged, although the accumulation of (111)In-chloride was difficult to visualize because of blurring in the low-energy region. The Si/CdTe Compton camera clearly and simultaneously resolved the distributions of multiple isotopes in 3D imaging in the ex vivo model.
Estimating the gaze of a virtuality human.
Roberts, David J; Rae, John; Duckworth, Tobias W; Moore, Carl M; Aspin, Rob
2013-04-01
The aim of our experiment is to determine if eye-gaze can be estimated from a virtuality human: to within the accuracies that underpin social interaction; and reliably across gaze poses and camera arrangements likely in everyday settings. The scene is set by explaining why Immersive Virtuality Telepresence has the potential to meet the grand challenge of faithfully communicating both the appearance and the focus of attention of a remote human participant within a shared 3D computer-supported context. Within the experiment, n=22 participants rotated static 3D virtuality humans, reconstructed from surround images, until they felt most looked at. The dependent variable was absolute angular error, which was compared to that underpinning social gaze behaviour in the natural world. Independent variables were 1) relative orientations of eye, head and body of the captured subject; and 2) the subset of cameras used to texture the form. Analysis looked for statistical and practical significance and qualitative corroborating evidence. The analysed results tell us much about the importance and detail of the relationship between gaze pose, method of video based reconstruction, and camera arrangement. They tell us that virtuality can reproduce gaze to an accuracy useful in social interaction, but with the adopted method of Video Based Reconstruction (VBR) this is highly dependent on the combination of gaze pose and camera arrangement. This suggests changes in the VBR approach in order to allow more flexible camera arrangements. The work is of interest to those wanting to support expressive meetings that are both socially and spatially situated, and particularly those using or building Immersive Virtuality Telepresence to accomplish this. It is also of relevance to the use of virtuality humans in applications ranging from the study of human interactions to gaming and the crossing of the stage line in films and TV.
3D Surface Reconstruction and Volume Calculation of Rills
NASA Astrophysics Data System (ADS)
Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.
2015-04-01
We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18-meter-long rill in Luxembourg. The images were taken with a Canon HD video camera 1) before a natural rainfall event, 2) after a natural rainfall event and before a rill experiment, and 3) after a rill experiment. Compared to a photo camera, recording with a video camera not only brings a huge time advantage; the method also guarantees more than adequately overlapping sharp images. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from each 15-frame interval, with sharpness estimated using a derivative-based metric. VisualSfM then detects feature points in each image, searches for matching feature points in all image pairs, recovers the camera positions, and finally, by triangulation of camera positions and feature points, reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and difference calculations between the pre- and post-event models allow visualization of the changes (erosion and accumulation areas) and quantification of erosion volumes. The calculated volumes are expressed in the spatial units of the models, so real-world values must be obtained via reference measurements. The outputs are three models at three different points in time. The results show that, especially for images taken from suboptimal videos (bad lighting conditions, low surface contrast, too much motion blur), the sharpness-based selection yields many more matching features. Hence the point densities of the 3D models are increased, which improves the reliability of the calculations.
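The frame-selection step lends itself to a short sketch. The paper only states that a derivative-based sharpness metric was used; the variance-of-Laplacian proxy below is an assumed stand-in, keeping the sharpest frame from each 15-frame interval.

```python
import cv2

def sharpness(gray):
    # Derivative-based sharpness proxy: variance of the Laplacian response.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def sharpest_frames(video_path, interval=15):
    """Keep the sharpest frame out of every `interval` consecutive frames."""
    cap = cv2.VideoCapture(video_path)
    selected, window = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        window.append((sharpness(gray), frame))
        if len(window) == interval:
            selected.append(max(window, key=lambda fs: fs[0])[1])
            window = []
    cap.release()
    return selected
```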
A LiDAR data-based camera self-calibration method
NASA Astrophysics Data System (ADS)
Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun
2018-07-01
To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented here. Parameters are estimated using particle swarm optimization (PSO), which searches for the optimal solution of a multivariate cost function. The main procedure of camera intrinsic parameter estimation has three parts: extraction and fine matching of interest points in the images; establishment of a cost function based on the Kruppa equations; and PSO optimization using LiDAR data as the initialization input. To improve the precision of the matching pairs, a new method combining the maximal information coefficient (MIC) and maximum asymmetry score (MAS) was used to remove false matching pairs on top of the RANSAC algorithm. Highly precise matching pairs were used to calculate the fundamental matrix, so that the new cost function (deduced from the Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function involving four intrinsic parameters was minimized by PSO for the optimal solution. To prevent the optimization from being trapped in a local optimum, LiDAR data were used to determine the scope of initialization, based on the solution to the P4P problem for the camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were implemented and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was smaller than 1.0 cm. Experimental and simulated results demonstrated that the proposed method is highly accurate and robust.
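A generic particle swarm minimizer of the kind described might look as follows. The Kruppa-equation cost function and the LiDAR-derived initialization bounds are stand-ins here (a toy quadratic is used in the usage line), not the authors' implementation; all hyperparameters are conventional defaults.

```python
import numpy as np

def pso_minimize(cost, lower, upper, n_particles=30, n_iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm minimizer. `lower`/`upper` bound the search
    space; in the paper's setting they would come from the LiDAR-based
    solution of the P4P problem for the focal length."""
    rng = np.random.default_rng(seed)
    dim = len(lower)
    x = rng.uniform(lower, upper, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)]
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + pull toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lower, upper)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# Usage with a toy quadratic in place of the Kruppa-equation cost:
sol, val = pso_minimize(lambda p: ((p - 3.0) ** 2).sum(),
                        lower=np.zeros(4), upper=np.full(4, 10.0))
```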
View of Arabella, one of two Skylab spiders and her web
1973-08-16
SL3-108-1307 (July-September 1973) --- A close-up view of Arabella, one of the two Skylab 3 common cross spiders "Araneus diadematus," and the web it had spun in the zero-gravity of space aboard the Skylab space station cluster in Earth orbit. This picture was taken with a hand-held 35mm Nikon camera. During the 59-day Skylab 3 mission the two spiders, Arabella and Anita, were housed in an enclosure onto which a motion picture camera and a still camera were mounted to record their attempts to build a web in the weightless environment. The spider experiment (ED52) was one of 25 experiments selected for Skylab by NASA from more than 3,400 experiment proposals submitted by high school students throughout the nation. ED52 was submitted by 17-year-old Judith S. Miles of Lexington, Massachusetts. Anita died during the last week of the mission. Photo credit: NASA
Tron, Talia; Peled, Abraham; Grinsphoon, Alexander; Weinshall, Daphna
2016-08-01
Incongruity between emotional experience and its outward expression is one of the prominent symptoms in schizophrenia. Though widely reported and used in clinical evaluation, this symptom is inadequately defined in the literature and may be confused with mere affect flattening. In this study we used a structured-light depth camera and dedicated software to automatically measure the facial activity of schizophrenia patients and healthy individuals during an emotionally evocative task. We defined novel measures for the congruence of emotional experience and emotional expression and for Flat Affect, compared them between patients and controls, and examined their consistency with clinical evaluation. We found incongruity in schizophrenia to be manifested in a less specific range of facial expressions in response to similar emotional stimuli, while the emotional experience remains intact. Our study also suggests that when affect flatness is taken into consideration, no contextually inappropriate facial expressions are evident.
NASA Astrophysics Data System (ADS)
Gamadia, Mark Noel
In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add and improve features in their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events as undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state of the art in the digital band-pass filter based passive AF method. This method is widely used to realize AF in the camera industry: a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time-consuming experimentation and tuning in order to arrive at a set of parameters which balance AF performance in terms of speed and accuracy, ultimately causing a delay in product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieve superior AF performance in both good and low lighting conditions based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun). Performance results using three different prototype cameras are presented to further illustrate the real-world AF performance gains achieved by the developed approach. The major contribution of this dissertation is that the developed auto-focusing approach can be used by camera manufacturers in the development of the AF feature in future generations of digital still cameras and camera phones.
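As a rough sketch of the passive band-pass AF idea, the code below scores frames by spectral energy in a mid-frequency band and hill-climbs the focus motor with a shrinking step. The frequency band, the step sizes, and the `capture`/`move_lens` hardware hooks are all hypothetical; the dissertation's Filter-Switching parameter derivation is not reproduced.

```python
import numpy as np

def focus_measure(gray, lo=0.1, hi=0.3):
    """Band-pass sharpness measure: energy of DFT coefficients whose
    normalized radial frequency lies in [lo, hi] (illustrative band)."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    fy = np.fft.fftshift(np.fft.fftfreq(gray.shape[0])) * 2.0
    fx = np.fft.fftshift(np.fft.fftfreq(gray.shape[1])) * 2.0
    r = np.hypot(fy[:, None], fx[None, :])   # normalized radius in [0, 1]
    return float(np.sum(np.abs(f[(r >= lo) & (r <= hi)]) ** 2))

def hill_climb_af(capture, move_lens, step=8, min_step=1):
    """`capture()` returns a grayscale frame; `move_lens(d)` steps the focus
    motor by d (hypothetical hooks). Probe each direction; when neither
    improves the sharpness, halve the step and refine near the peak."""
    best = focus_measure(capture())
    while step >= min_step:
        improved = False
        for direction in (+1, -1):
            move_lens(direction * step)
            m = focus_measure(capture())
            if m > best:
                best, improved = m, True
                break                         # keep the move, same step size
            move_lens(-direction * step)      # undo the failed probe
        if not improved:
            step //= 2                        # near the peak: smaller steps
    return best
```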
NASA Astrophysics Data System (ADS)
Matsumura, T.; Kamiji, I.; Nakagiri, K.; Nanjo, H.; Nomura, T.; Sasao, N.; Shinkawa, T.; Shiomi, K.
2018-03-01
We have developed a beam-profile monitor (BPM) system to align the collimators of the neutral beam-line at the Hadron Experimental Facility of J-PARC. The system is composed of a phosphor screen and a CCD camera coupled to an image intensifier, mounted on a remote-controlled X-Y stage. The design and detailed performance studies of the BPM are presented. The monitor has a spatial resolution of better than 0.6 mm and a deviation from linearity of less than 1%. These results indicate that the BPM system meets the requirements for defining collimator-edge positions for beam-line tuning. Confirmation using the neutral beam for the KOTO experiment is also presented.
NASA Astrophysics Data System (ADS)
Barbosa, F.; Bessuille, J.; Chudakov, E.; Dzhygadlo, R.; Fanelli, C.; Frye, J.; Hardin, J.; Kelsey, J.; Patsyuk, M.; Schwarz, C.; Schwiening, J.; Stevens, J.; Shepherd, M.; Whitlatch, T.; Williams, M.
2017-12-01
The GlueX DIRC (Detection of Internally Reflected Cherenkov light) detector is being developed to upgrade the particle identification capabilities in the forward region of the GlueX experiment at Jefferson Lab. The GlueX DIRC will utilize four existing decommissioned BaBar DIRC bar boxes, which will be oriented to form a plane roughly 4 m away from the fixed target of the experiment. A new photon camera has been designed that is based on the SuperB FDIRC prototype. The full GlueX DIRC system will consist of two such cameras, with the first planned to be built and installed in 2017. We present the current status of the design and R&D, along with the future plans of the GlueX DIRC detector.
BDPU, Favier places new test chamber into experiment module in LMS-1 Spacelab
1996-07-09
STS078-301-021 (20 June - 7 July 1996) --- Payload specialist Jean-Jacques Favier, representing the French Space Agency (CNES), holds up a test container to a Spacelab camera. The test involves the Bubble Drop Particle Unit (BDPU), which Favier is showing to ground controllers at the Marshall Space Flight Center (MSFC) in order to check the condition of the unit prior to heating in the BDPU facility. The test container holds experimental fluid and allows experiment observation through optical windows. BDPU contains three internal cameras that are used to continuously downlink BDPU activity so that behavior of the bubbles can be monitored. Astronaut Richard M. Linnehan, mission specialist, conducts biomedical testing in the background.
Search Strategies of Visually Impaired Persons using a Camera Phone Wayfinding System
Manduchi, R.; Coughlan, J.; Ivanchenko, V.
2016-01-01
We report new experiments conducted using a camera phone wayfinding system, which is designed to guide a visually impaired user to machine-readable signs (such as barcodes) labeled with special color markers. These experiments specifically investigate search strategies of such users detecting, localizing and touching color markers that have been mounted in various ways in different environments: in a corridor (either flush with the wall or mounted perpendicular to it) or in a large room with obstacles between the user and the markers. The results show that visually impaired users are able to reliably find color markers in all the conditions that we tested, using search strategies that vary depending on the environment in which they are placed. PMID:26949755
Search Strategies of Visually Impaired Persons using a Camera Phone Wayfinding System.
Manduchi, R; Coughlan, J; Ivanchenko, V
2008-07-01
We report new experiments conducted using a camera phone wayfinding system, which is designed to guide a visually impaired user to machine-readable signs (such as barcodes) labeled with special color markers. These experiments specifically investigate search strategies of such users detecting, localizing and touching color markers that have been mounted in various ways in different environments: in a corridor (either flush with the wall or mounted perpendicular to it) or in a large room with obstacles between the user and the markers. The results show that visually impaired users are able to reliably find color markers in all the conditions that we tested, using search strategies that vary depending on the environment in which they are placed.
BRIC - Brown works with middeck experiment
1997-08-12
S85-E-5058 (12 August 1997) --- Astronaut Curtis L. Brown, Jr., commander, performs operations with an experiment called Biological Research in Canisters (BRIC) on the mid-deck of the Space Shuttle Discovery during flight day six. The photograph was taken with the Electronic Still Camera (ESC).
An Exercise in X-Ray Diffraction Using the Polymorphic Transition of Nickel Chromite.
ERIC Educational Resources Information Center
Chipman, David W.
1980-01-01
Describes a laboratory experiment appropriate for a course in either x-ray crystallography or mineralogy. The experiment permits the direct observation of a polymorphic transition in nickel chromite without the use of a special heating stage or heating camera. (Author/GS)
2011-01-11
and its variance $\sigma^2_{\hat{U}_i}$ are determined:

$$\hat{U}_i = \hat{u}_i + P^{u,EN}\left(P^{EN}\right)^{-1}\left[\begin{pmatrix} E_{jc} \\ N_{jc} \end{pmatrix} - \begin{pmatrix} \hat{e}_i \\ \hat{n}_i \end{pmatrix}\right] \tag{15}$$

$$\sigma^2_{\hat{U}_i} = P^{u}_{i} - P^{u,EN}_{i}\left(P^{EN}_{i}\right)^{-1}P^{EN,u}_{i} \tag{16}$$

where ... screen; the operator can click a robot's camera view to select it as the Focus Robot. The Focus Robot's camera stream is enlarged and displayed in the
Shuttle sortie electro-optical instruments study
NASA Technical Reports Server (NTRS)
1974-01-01
A study to determine the feasibility of adapting existing electro-optical instruments (designed and successfully used for ground operations) for use on a shuttle sortie flight, and their ability to perform satisfactorily in the space environment, is considered. The suitability of these two instruments (a custom-made image intensifier camera system and an off-the-shelf secondary electron conduction television camera) to support a barium ion cloud experiment was studied for two different modes of Spacelab operation - within the pressurized module and on the pallet.
Picosecond x-ray streak cameras
NASA Astrophysics Data System (ADS)
Averin, V. I.; Bryukhnevich, Gennadii I.; Kolesov, G. V.; Lebedev, Vitaly B.; Miller, V. A.; Saulevich, S. V.; Shulika, A. N.
1991-04-01
The first multistage image converter with an X-ray photocathode (UMI-93SR) was designed at VNIIOFI in 1974 [1]. Experiments carried out at IOFAN showed that X-ray electron-optical cameras using this tube provided temporal resolution as fine as 12 picoseconds [2]. Later work led to the creation of separate streak and intensifying tubes. Thus, the PV-003R tube was built on the basis of the UMI-93SR design, fibre-optically connected to a PMU-2V image intensifier carrying a microchannel plate.
Karimov, Jamshid H; Horvath, David; Sunagawa, Gengo; Byram, Nicole; Moazami, Nader; Golding, Leonard A R; Fukamachi, Kiyotaka
2015-12-01
Post-explant evaluation of the continuous-flow total artificial heart in preclinical studies can be extremely challenging because of the device's unique architecture. Determining the exact location of tissue regeneration, neointima formation, and thrombus is particularly important. In this report, we describe our first successful experience with visualizing the Cleveland Clinic continuous-flow total artificial heart using a custom-made high-definition miniature camera.
Performance of the Tachyon Time-of-Flight PET Camera
NASA Astrophysics Data System (ADS)
Peng, Q.; Choong, W.-S.; Vu, C.; Huber, J. S.; Janecek, M.; Wilson, D.; Huesman, R. H.; Qi, Jinyi; Zhou, Jian; Moses, W. W.
2015-02-01
We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the 550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm² side of 6.15 × 6.15 × 25 mm³ LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps +/- 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.
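The reported factor of about 2.3 is consistent with the usual back-of-envelope TOF gain estimate, sketched below under the assumption of a roughly 30 cm object (about the size of the NEMA body phantom). This is a heuristic check, not the paper's analysis.

```python
# Rough time-of-flight gain estimate (a sketch, not the paper's analysis):
# localizing each event to Delta_x = c * Delta_t / 2 along the line of
# response reduces noise variance by roughly D / Delta_x for diameter D.
c = 2.998e8            # m/s
dt = 314e-12           # s, coincidence timing resolution (FWHM)
D = 0.30               # m, assumed phantom diameter
dx = c * dt / 2        # ~4.7 cm localization along the line of response
print((D / dx) ** 0.5) # ~2.5, consistent with the reported factor ~2.3
```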
Stability analysis for a multi-camera photogrammetric system.
Habib, Ayman; Detchev, Ivan; Kwak, Eunju
2014-08-18
Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, explains the common ways of coping with it, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in the interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown in which a multi-camera photogrammetric system was calibrated three times and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.
Performance of the Tachyon Time-of-Flight PET Camera.
Peng, Q; Choong, W-S; Vu, C; Huber, J S; Janecek, M; Wilson, D; Huesman, R H; Qi, Jinyi; Zhou, Jian; Moses, W W
2015-02-01
We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm² side of 6.15 × 6.15 × 25 mm³ LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps +/- 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.
Feasibility study of a gamma camera for monitoring nuclear materials in the PRIDE facility
NASA Astrophysics Data System (ADS)
Jo, Woo Jin; Kim, Hyun-Il; An, Su Jung; Lee, Chae Young; Song, Han-Kyeol; Chung, Yong Hyun; Shin, Hee-Sung; Ahn, Seong-Kyu; Park, Se-Hwan
2014-05-01
The Korea Atomic Energy Research Institute (KAERI) has been developing pyroprocessing technology, in which actinides are recovered together with plutonium. There is no pure plutonium stream in the process, so it has the advantage of proliferation resistance. Tracking and monitoring of nuclear materials through the pyroprocess can significantly improve the transparency of the operation and safeguards. An inactive engineering-scale integrated pyroprocess facility, the PyRoprocess Integrated inactive DEmonstration (PRIDE) facility, was constructed to demonstrate engineering-scale processes and the integration of each unit process. The PRIDE facility may be a good test bed to investigate the feasibility of a nuclear material monitoring system. In this study, we designed a gamma camera system for nuclear material monitoring in the PRIDE facility by using a Monte Carlo simulation, and we validated the feasibility of this system. Two scenarios, corresponding to different locations of the gamma camera, were simulated using GATE (GEANT4 Application for Tomographic Emission) version 6. A prototype gamma camera with a diverging-slat collimator was developed, and the simulated and experimental results agreed well with each other. These results indicate that a gamma camera to monitor nuclear material in the PRIDE facility can be developed.
Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory
Vega, Julio; Perdices, Eduardo; Cañas, José M.
2013-01-01
Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. PMID:23337333
NASA Astrophysics Data System (ADS)
Zhao, Guihua; Chen, Hong; Li, Xingquan; Zou, Xiaoliang
The paper presents the concepts of lever arm and boresight angle, the design requirements of calibration sites, and the integrated calibration method for the boresight angles of a digital camera or laser scanner. Taking test data collected by Applanix's LandMark system as an example, the camera calibration method is introduced, based on piling up three consecutive stereo images together with an OTF-calibration method using ground control points. Calibration of the laser scanner boresight angles uses manual and automatic methods with ground control points. Integrated calibration between the digital camera and the laser scanner is introduced to improve the systemic precision of the two sensors. By analyzing the measured differences between ground control points and their corresponding image points in sequence images, the relative errors of object positions between camera and images are within about 15 cm and the absolute errors within about 20 cm. By comparing the differences between ground control points and their corresponding laser point clouds, the error is less than 20 cm. From the results of these experiments, the mobile mapping system is an efficient and reliable system for rapidly generating high-accuracy and high-density road spatial data.
Performance of the Tachyon Time-of-Flight PET Camera
Peng, Q.; Choong, W.-S.; Vu, C.; Huber, J. S.; Janecek, M.; Wilson, D.; Huesman, R. H.; Qi, Jinyi; Zhou, Jian; Moses, W. W.
2015-01-01
We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm² side of 6.15 × 6.15 × 25 mm³ LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps +/- 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3. PMID:26594057
Volunteers Help Decide Where to Point Mars Camera
2015-07-22
This series of images from NASA's Mars Reconnaissance Orbiter successively zooms into "spider" features -- or channels carved in the surface in radial patterns -- in the south polar region of Mars. In a new citizen-science project, volunteers will identify features like these using wide-scale images from the orbiter. Their input will then help mission planners decide where to point the orbiter's high-resolution camera for more detailed views of interesting terrain. Volunteers will start with images from the orbiter's Context Camera (CTX), which provides wide views of the Red Planet. The first two images in this series are from CTX; the top right image zooms into a portion of the image at left. The top right image highlights the geological spider features, which are carved into the terrain in the Martian spring when dry ice turns to gas. By identifying unusual features like these, volunteers will help the mission team choose targets for the orbiter's High Resolution Imaging Science Experiment (HiRISE) camera, which can reveal more detail than any other camera ever put into orbit around Mars. The final image in this series (bottom right) shows a HiRISE close-up of one of the spider features. http://photojournal.jpl.nasa.gov/catalog/PIA19823
NASA Astrophysics Data System (ADS)
Carbajal Gomez, Leopoldo; Del-Castillo-Negrete, Diego
2017-10-01
Developing avoidance or mitigation strategies for runaway electrons (RE) for the safe operation of ITER is imperative. Synchrotron radiation (SR) of RE is routinely used in current tokamak experiments to diagnose RE. We present the results of a newly developed camera diagnostic of SR for full-orbit kinetic simulations of RE in DIII-D-like plasmas that simultaneously includes: full-orbit effects, information on the spectral and angular distribution of SR of each electron, and basic geometric optics of a camera. We observe a strong dependence of the SR measured by the camera on the pitch angle distribution of RE: crescent shapes of the SR in the camera pictures relate to RE distributions with small pitch angles, while ellipse shapes relate to distributions of RE with larger pitch angles. A weak dependence of the SR measured by the camera on the RE energy, the value of the q-profile at the edge, and the chosen range of wavelengths is found. Furthermore, we observe that oversimplifying the angular distribution of the SR changes the synchrotron spectra and overestimates their amplitude. Research sponsored by the LDRD Program of ORNL, managed by UT-Battelle, LLC, for the U. S. DoE.
Multiple-camera/motion stereoscopy for range estimation in helicopter flight
NASA Technical Reports Server (NTRS)
Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.
1993-01-01
Aiding the pilot to improve safety and reduce pilot workload by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success using an image sequence from a single moving camera to solve this problem. The major limitations of single camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm will be illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
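A heavily simplified, one-dimensional version of the recursive range idea is sketched below: a feature at unknown depth is tracked while the camera translates sideways by known amounts, and a scalar extended Kalman filter refines the range from the image measurements. All geometry and noise values are invented for illustration; the paper's filter operates on real image features and full 3D camera motion.

```python
import numpy as np

# Minimal 1D sketch of recursive range estimation (not the paper's full
# filter): a feature at lateral offset X0 and unknown depth Z is observed
# as u = f * (X0 - cam_x) / Z while the camera moves by known cam_x.
f_px = 800.0                  # assumed focal length in pixels
X0 = 2.0                      # assumed true lateral offset (m)
Z_true = 30.0                 # true range (m), to be recovered

Z, P = 15.0, 100.0            # initial range guess and its variance
R = 1.0                       # image-noise variance (px^2)
rng = np.random.default_rng(1)
for k in range(1, 60):
    cam_x = 0.05 * k                         # known camera motion (m)
    u = f_px * (X0 - cam_x) / Z_true + rng.normal(0, R ** 0.5)
    h = f_px * (X0 - cam_x) / Z              # predicted measurement
    H = -f_px * (X0 - cam_x) / Z ** 2        # dh/dZ, the Jacobian
    S = H * P * H + R                        # innovation variance
    K = P * H / S                            # Kalman gain
    Z, P = Z + K * (u - h), (1 - K * H) * P  # measurement update
print(Z)  # converges toward 30 m as the baseline grows
```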
Thermal feature extraction of servers in a datacenter using thermal image registration
NASA Astrophysics Data System (ADS)
Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan
2017-09-01
Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.
Performance of the Tachyon Time-of-Flight PET Camera
Peng, Q.; Choong, W. -S.; Vu, C.; ...
2015-01-23
We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm² side of 6.15 × 6.15 × 25 mm³ LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps +/- 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.
An optimal algorithm for reconstructing images from binary measurements
NASA Astrophysics Data System (ADS)
Yang, Feng; Lu, Yue M.; Sbaiz, Luciano; Vetterli, Martin
2010-01-01
We have studied a camera with a very large number of binary pixels, referred to as the gigavision camera [1] or the gigapixel digital film camera [2, 3]. Potential advantages of this new camera design include improved dynamic range, thanks to its logarithmic sensor response curve, and reduced exposure time in low light conditions, due to its highly sensitive photon detection mechanism. We use a maximum likelihood estimator (MLE) to reconstruct a high quality conventional image from the binary sensor measurements of the gigavision camera. We prove that when the threshold T is "1", the negative log-likelihood function is convex. Therefore, the optimal solution can be found using convex optimization. Based on filter-bank techniques, fast algorithms are given for computing the gradient and the multiplication of a vector by the Hessian matrix of the negative log-likelihood function. We show that with a minor change, our algorithm also works for estimating conventional images from multiple binary images. Numerical experiments with synthetic 1-D signals and images verify the effectiveness and quality of the proposed algorithm. Experimental results also show that estimation performance can be improved by increasing the oversampling factor or the number of binary images.
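For threshold T = 1, a binary pixel fires with probability 1 - exp(-lambda) under Poisson photon statistics, which is what makes the negative log-likelihood convex in the intensity. The sketch below shows that likelihood and the closed-form MLE for a block of binary pixels assumed to share one intensity; the paper's filter-bank gradient and Hessian machinery is not reproduced.

```python
import numpy as np

# Sketch for threshold T = 1: a binary pixel fires with probability
# p = 1 - exp(-lam) when the photon count is Poisson with mean lam.
def neg_log_likelihood(lam, bits):
    p = 1.0 - np.exp(-lam)
    # bits == 1 contributes log(p); bits == 0 contributes log(exp(-lam)).
    return -(bits * np.log(p) + (1 - bits) * (-lam)).sum()

def mle_block_intensity(bits):
    """Closed-form MLE when a block of binary pixels shares one intensity:
    if a fraction q of them fired, lam_hat = -log(1 - q)."""
    q = bits.mean()
    return -np.log1p(-q) if q < 1 else np.inf

rng = np.random.default_rng(0)
lam_true = 1.3
bits = (rng.poisson(lam_true, size=4096) >= 1).astype(float)
lam_hat = mle_block_intensity(bits)
print(lam_hat)                                            # close to 1.3
print(neg_log_likelihood(lam_hat, bits)
      <= neg_log_likelihood(1.0, bits))                   # True: MLE wins
```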
Stability Analysis for a Multi-Camera Photogrammetric System
Habib, Ayman; Detchev, Ivan; Kwak, Eunju
2014-01-01
Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, explains the common ways of coping with it, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in the interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown in which a multi-camera photogrammetric system was calibrated three times and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction. PMID:25196012
Digital Camera Control for Faster Inspection
NASA Technical Reports Server (NTRS)
Brown, Katharine; Siekierski, James D.; Mangieri, Mark L.; Dekome, Kent; Cobarruvias, John; Piplani, Perry J.; Busa, Joel
2009-01-01
Digital Camera Control Software (DCCS) is a computer program for controlling a boom and a boom-mounted camera used to inspect the external surface of a space shuttle in orbit around the Earth. Running in a laptop computer in the space-shuttle crew cabin, DCCS commands integrated displays and controls. By means of a simple one-button command, a crewmember can view low-resolution images to quickly spot problem areas and can then cause a rapid transition to high-resolution images. The crewmember can command that camera settings apply to a specific small area of interest within the field of view of the camera so as to maximize image quality within that area. DCCS also provides critical high-resolution images to a ground screening team, which analyzes the images to assess damage (if any); in so doing, DCCS enables the team to clear initially suspect areas more quickly than would otherwise be possible and further saves time by minimizing the probability of re-imaging of areas already inspected. On the basis of experience with a previous version (2.0) of the software, the present version (3.0) incorporates a number of advanced imaging features that optimize crewmember capability and efficiency.
A near-Infrared SETI Experiment: Alignment and Astrometric precision
NASA Astrophysics Data System (ADS)
Duenas, Andres; Maire, Jerome; Wright, Shelley; Drake, Frank D.; Marcy, Geoffrey W.; Siemion, Andrew; Stone, Remington P. S.; Tallis, Melisa; Treffers, Richard R.; Werthimer, Dan
2016-06-01
Beginning in March 2015, a Near-InfraRed Optical SETI (NIROSETI) instrument aiming to search for fast nanosecond laser pulses has been commissioned on the Nickel 1m-telescope at Lick Observatory. The NIROSETI instrument makes use of an optical guide camera, a SONY ICX694 CCD from PointGrey, to align our selected sources onto two 200 µm near-infrared Avalanche Photo Diodes (APD) with a field-of-view of 2.5"x2.5" each. These APD detectors operate at very fast bandwidths and are able to detect pulse widths extending down into the nanosecond range. Aligning sources onto these relatively small detectors requires characterizing the guide camera plate scale, static optical distortion solution, and relative orientation with respect to the APD detectors. We determined the guide camera plate scale as 55.9 ± 2.7 milliarcseconds/pixel and the magnitude limit as 18.15 mag (+1.07/-0.58) in V-band. We will present the full distortion solution of the guide camera, the orientation, and our alignment method between the camera and the two APDs, and will discuss target selection within the NIROSETI observational campaign, including coordination with Breakthrough Listen.
Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.
Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai
2017-06-24
Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at joint calibration of multi-sensors (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights for the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experiment results show that the proposed joint calibration method can achieve satisfactory performance in a project real-time system and that its accuracy is higher than the manufacturer's calibration.
Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras
Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai
2017-01-01
Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at joint calibration of multi-sensors (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights for the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experiment results show that the proposed joint calibration method can achieve satisfactory performance in a project real-time system and that its accuracy is higher than the manufacturer's calibration. PMID:28672823
NASA Technical Reports Server (NTRS)
Bothwell, Mary
2004-01-01
My division was charged with building a suite of cameras for the Mars Exploration Rover (MER) project. We were building the science cameras on the mast assembly, the microscope camera, and the hazard and navigation cameras for the rovers. Not surprisingly, a lot of folks were paying attention to our work - because there's really no point in landing on Mars if you can't take pictures. In Spring 2002 things were not looking good. The electronics weren't coming in, and we had to go back to the vendors. The vendors would change the design, send the boards back, and they wouldn't work. On our side, we had an instrument manager in charge who I believe has the potential to become a great manager, but when things got behind schedule he didn't have the experience to know what was needed to catch up. As division manager, I was ultimately responsible for seeing that all my project and instrument managers delivered their work. I had to make the decision whether or not to replace him.
Camera perspective bias in videotaped confessions: experimental evidence of its perceptual basis.
Ratcliff, Jennifer J; Lassiter, G Daniel; Schmidt, Heather C; Snyder, Celeste J
2006-12-01
The camera perspective from which a criminal confession is videotaped influences later assessments of its voluntariness and the suspect's guilt. Previous research has suggested that this camera perspective bias is rooted in perceptual rather than conceptual processes, but these data are strictly correlational. In 3 experiments, the authors directly manipulated perceptual processing to provide stronger evidence of its mediational role. Prior to viewing a videotape of a simulated confession, participants were shown a photograph of the confessor's apparent victim. Participants in a perceptual interference condition were instructed to visualize the image of the victim in their minds while viewing the videotape; participants in a conceptual interference condition were instructed instead to rehearse an 8-digit number. Because mental imagery and actual perception draw on the same available resources, the authors anticipated that the former, but not the latter, interference task would disrupt the camera perspective bias, if indeed it were perceptually mediated. Results supported this conclusion.
NASA Astrophysics Data System (ADS)
Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.
2017-09-01
Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated at lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.
Single-Camera Stereoscopy Setup to Visualize 3D Dusty Plasma Flows
NASA Astrophysics Data System (ADS)
Romero-Talamas, C. A.; Lemma, T.; Bates, E. M.; Birmingham, W. J.; Rivera, W. F.
2016-10-01
A setup to visualize and track individual particles in multi-layered dusty plasma flows is presented. The setup consists of a single camera with variable frame rate, and a pair of adjustable mirrors that project the same field of view from two different angles to the camera, allowing for three-dimensional tracking of particles. Flows are generated by inclining the plane in which the dust is levitated using a specially designed setup that allows for external motion control without compromising vacuum. Dust illumination is achieved with an optics arrangement that includes a Powell lens that creates a laser fan with adjustable thickness and with approximately constant intensity everywhere. Both the illumination and the stereoscopy setup allow for the camera to be placed at right angles with respect to the levitation plane, in preparation for magnetized dusty plasma experiments in which there will be no direct optical access to the levitation plane. Image data and analysis of unmagnetized dusty plasma flows acquired with this setup are presented.
Light field geometry of a Standard Plenoptic Camera.
Hahne, Christopher; Aggoun, Amar; Haxha, Shyqyri; Velisavljevic, Vladan; Fernández, Juan Carlos Jácome
2014-11-03
The Standard Plenoptic Camera (SPC) is an innovation in photography, allowing two-dimensional images focused at different depths to be acquired from a single exposure. In contrast to conventional cameras, the SPC consists of a micro lens array and a main lens projecting virtual lenses into object space. For the first time, the present research provides an approach to estimate the distance and depth of refocused images extracted from captures obtained by an SPC. Furthermore, estimates for the position and baseline of the virtual lenses, which correspond to an equivalent camera array, are derived. On the basis of paraxial approximation, a ray tracing model employing linear equations has been developed and implemented using Matlab. The optics simulation tool Zemax is utilized for validation purposes. By designing a realistic SPC, experiments demonstrate that a predicted image refocusing distance at 3.5 m deviates by less than 11% from the simulation in Zemax, whereas baseline estimations indicate no significant difference. Applying the proposed methodology will enable an alternative to traditional depth map acquisition by disparity analysis.
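The paraxial, linear-equation ray model can be illustrated with standard 2 × 2 ray-transfer (ABCD) matrices, as below. The focal lengths and spacings are invented placeholders, not the paper's SPC design values, and the sketch is in Python rather than the Matlab implementation the paper describes.

```python
import numpy as np

# Paraxial ray-transfer (ABCD) sketch of a plenoptic light path:
# object -> main lens -> gap -> micro lens -> sensor. A ray is the
# column vector [height (m), angle (rad)].
def thin_lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def propagate(d):
    return np.array([[1.0, d], [0.0, 1.0]])

f_main, f_micro = 0.050, 0.0005      # illustrative focal lengths (m)
d_gap, d_sensor = 0.0525, 0.0005     # main-lens-to-MLA, MLA-to-sensor (m)

# Matrices compose right-to-left along the direction of travel.
system = (propagate(d_sensor) @ thin_lens(f_micro)
          @ propagate(d_gap) @ thin_lens(f_main))

# A ray leaving an object point 1.0 m away, 10 mm off-axis:
ray_at_main_lens = propagate(1.0) @ np.array([0.010, -0.010])
print(system @ ray_at_main_lens)     # height/angle at the sensor plane
```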
A poloidal section neutron camera for MAST upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sangaroon, S.; Weiszflog, M.; Cecconello, M.
2014-08-21
The Mega Ampere Spherical Tokamak Upgrade (MAST Upgrade) is intended as a demonstration of the physics viability of the Spherical Tokamak (ST) concept and as a platform for contributing to ITER/DEMO physics. Concerning physics exploitation, MAST Upgrade plasma scenarios can contribute to the ITER Tokamak physics particularly in the field of fast particle behavior and current drive studies. At present, MAST is equipped with a prototype neutron camera (NC). On the basis of the experience and results from previous experimental campaigns using the NC, the conceptual design of a neutron camera upgrade (NC Upgrade) is being developed. As part of the MAST Upgrade, the NC Upgrade is considered a high priority diagnostic since it would allow studies in the field of fast ions and current drive with good temporal and spatial resolution. In this paper, we explore an optional design with the camera array viewing the poloidal section of the plasma from different directions.
High dynamic range image acquisition based on multiplex cameras
NASA Astrophysics Data System (ADS)
Zeng, Hairui; Sun, Huayan; Zhang, Tinghua
2018-03-01
High-dynamic-range imaging is an important technology for photoelectric information acquisition, providing higher dynamic range and more image detail, and better reflecting the real environment and its light and color information. Currently, methods of high-dynamic-range image synthesis based on differently exposed image sequences cannot adapt to dynamic scenes: they fail to overcome the effects of moving targets, resulting in ghosting artifacts. Therefore, a new high-dynamic-range image acquisition method based on a multiplex camera system was proposed. Firstly, image sequences with different exposures are captured with the camera array, and a derivative optical-flow method based on color gradients is used to obtain the deviation between images and align them. Then, a high-dynamic-range image fusion weighting function is established by combining the inverse camera response function with the deviation between images, and is applied to generate a high-dynamic-range image. The experiments show that the proposed method can effectively obtain high-dynamic-range images in dynamic scenes and achieves good results.
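A minimal sketch of the fusion stage follows, assuming already-aligned exposures, an identity inverse camera response, and a simple hat weighting that trusts mid-tones. The paper's weighting additionally folds in the inter-image deviation from the optical-flow step, which is not reproduced here.

```python
import numpy as np

def fuse_hdr(images, exposures, inv_crf=lambda z: z):
    """Fuse aligned exposures into a radiance map.
    images: list of float arrays in [0, 1]; exposures: times in seconds.
    inv_crf: inverse camera response (identity assumed here; the paper
    combines it with the inter-image deviation to build the weights)."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: trust mid-tones
        num += w * inv_crf(img) / t         # per-image radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)      # weighted radiance average
```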
Automatic forest-fire measuring using ground stations and Unmanned Aerial Systems.
Martínez-de Dios, José Ramiro; Merino, Luis; Caballero, Fernando; Ollero, Anibal
2011-01-01
This paper presents a novel system for automatic forest-fire measurement using cameras distributed at ground stations and mounted on Unmanned Aerial Systems (UAS). It can obtain geometrical measurements of forest fires in real-time such as the location and shape of the fire front, flame height and rate of spread, among others. Measurement of forest fires is a challenging problem that is affected by numerous potential sources of error. The proposed system addresses them by exploiting the complementarities between infrared and visual cameras located at different ground locations together with others onboard Unmanned Aerial Systems (UAS). The system applies image processing and geo-location techniques to obtain forest-fire measurements individually from each camera and then integrates the results from all the cameras using statistical data fusion techniques. The proposed system has been extensively tested and validated in close-to-operational conditions in field fire experiments with controlled safety conditions carried out in Portugal and Spain from 2001 to 2006.
Machine vision based teleoperation aid
NASA Technical Reports Server (NTRS)
Hoff, William A.; Gatrell, Lance B.; Spofford, John R.
1991-01-01
When teleoperating a robot using video from a remote camera, it is difficult for the operator to gauge depth and orientation from a single view. In addition, there are situations where a camera mounted for viewing by the teleoperator during a teleoperation task may not be able to see the tool tip, or the viewing angle may not be intuitive (requiring extensive training to reduce the risk of incorrect or dangerous moves by the teleoperator). A machine vision based teleoperator aid is presented which uses the operator's camera view to compute an object's pose (position and orientation), and then overlays onto the operator's screen information on the object's current and desired positions. The operator can choose to display orientation and translation information as graphics and/or text. This aid provides easily assimilated depth and relative orientation information to the teleoperator. The camera may be mounted at any known orientation relative to the tool tip. A preliminary experiment with human operators was conducted and showed that task accuracies were significantly greater with than without this aid.
Vibration extraction based on fast NCC algorithm and high-speed camera.
Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an
2015-09-20
In this study, a high-speed camera system is developed to perform vibration measurement in real time and to avoid the added mass introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera which can capture images at up to 1000 frames per second. In order to process the captured images in the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching, without installing any target panel onto the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system performance in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
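The local-search speed-up and sub-pixel refinement can be sketched with OpenCV's normalized cross-correlation, as below. The search radius and the parabolic peak interpolation are illustrative choices, not necessarily the authors'; the sketch assumes the search window stays inside the frame and is larger than the template.

```python
import cv2

def parabolic_peak(y_m, y_0, y_p):
    """Sub-pixel offset of a peak from three samples around it."""
    denom = y_m - 2.0 * y_0 + y_p
    return 0.0 if denom == 0 else 0.5 * (y_m - y_p) / denom

def track_ncc(frame_gray, template, last_xy, search=40):
    """NCC template tracking restricted to a window around the previous
    position (the local-search speed-up), with parabolic refinement."""
    th, tw = template.shape
    x0, y0 = max(0, last_xy[0] - search), max(0, last_xy[1] - search)
    roi = frame_gray[y0:y0 + th + 2 * search, x0:x0 + tw + 2 * search]
    score = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (px, py) = cv2.minMaxLoc(score)   # integer-pixel NCC peak
    dx = dy = 0.0
    if 0 < px < score.shape[1] - 1:
        dx = parabolic_peak(score[py, px - 1], score[py, px], score[py, px + 1])
    if 0 < py < score.shape[0] - 1:
        dy = parabolic_peak(score[py - 1, px], score[py, px], score[py + 1, px])
    return (x0 + px + dx, y0 + py + dy)        # sub-pixel position in frame
```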
Development of a 3-D visible limiter imaging system for the HSX stellarator
NASA Astrophysics Data System (ADS)
Buelo, C.; Stephey, L.; Anderson, F. S. B.; Eisert, D.; Anderson, D. T.
2017-12-01
A visible camera diagnostic has been developed to study the limiter-plasma interaction in the Helically Symmetric eXperiment (HSX). A straight-line view from the camera location to the limiter was not possible due to the complex 3D stellarator geometry of HSX, so it was necessary to insert a mirror/lens system into the plasma edge. A custom support structure for this optical system, tailored to the HSX geometry, was designed and installed; it holds the optics tube assembly at the required angle for the desired view, both to minimize system stress and to facilitate robust and repeatable camera positioning. The camera system has been absolutely calibrated and, using Hα and C-III filters, can provide hydrogen and carbon photon fluxes, which can be converted into particle fluxes through an S/XB coefficient. The resulting measurements have been used to obtain the characteristic penetration lengths of hydrogen and C-III species. The hydrogen λiz value shows reasonable agreement with the value predicted by a 1D penetration-length calculation.
Automatic Forest-Fire Measuring Using Ground Stations and Unmanned Aerial Systems
Martínez-de Dios, José Ramiro; Merino, Luis; Caballero, Fernando; Ollero, Anibal
2011-01-01
This paper presents a novel system for automatic forest-fire measurement using cameras distributed at ground stations and mounted on Unmanned Aerial Systems (UAS). It can obtain geometrical measurements of forest fires in real-time such as the location and shape of the fire front, flame height and rate of spread, among others. Measurement of forest fires is a challenging problem that is affected by numerous potential sources of error. The proposed system addresses them by exploiting the complementarities between infrared and visual cameras located at different ground locations together with others onboard Unmanned Aerial Systems (UAS). The system applies image processing and geo-location techniques to obtain forest-fire measurements individually from each camera and then integrates the results from all the cameras using statistical data fusion techniques. The proposed system has been extensively tested and validated in close-to-operational conditions in field fire experiments with controlled safety conditions carried out in Portugal and Spain from 2001 to 2006. PMID:22163958
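The abstract does not specify the fusion algorithm; as a generic stand-in, the sketch below fuses independent per-camera estimates by inverse-variance weighting (the maximum-likelihood combination under Gaussian noise). Names and numbers are illustrative only:

```python
import numpy as np

def fuse_estimates(means, variances):
    """Inverse-variance weighted fusion of independent scalar estimates.

    Each camera contributes an estimate (e.g., fire-front advance along a
    transect, in meters) plus an uncertainty; the fused value weights each
    estimate by 1/variance, so better geolocated cameras count for more.
    """
    means = np.asarray(means, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / w.sum()
    fused_mean = fused_var * (w * means).sum()
    return fused_mean, fused_var

# Illustrative only: three cameras with different uncertainties.
print(fuse_estimates([12.1, 11.6, 12.9], [0.25, 0.64, 1.00]))
```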
Design of a high-numerical-aperture digital micromirror device camera with high dynamic range.
Qiao, Yang; Xu, Xiping; Liu, Tao; Pan, Yue
2015-01-01
A high-NA imaging system with high dynamic range is presented based on a digital micromirror device (DMD). The DMD camera consists of an objective imaging system and a relay imaging system, connected by a DMD chip. With the introduction of a total internal reflection prism system, the objective imaging system is designed with a working F/# of 1.97, breaking through the F/2.45 limitation of conventional DMD projection lenses. For the relay imaging system, an off-axis design that corrects the off-axis aberrations of the tilted relay imaging system is developed. This structure has the advantage of increasing the NA of the imaging system while maintaining a compact size. Investigation revealed that the dynamic range of a DMD camera could be greatly increased, by a factor of 2.41. We built a prototype DMD camera with a working F/# of 1.23, and field experiments proved the validity and reliability of our work.
Fisheye image rectification using spherical and digital distortion models
NASA Astrophysics Data System (ADS)
Li, Xin; Pi, Yingdong; Jia, Yanling; Yang, Yuhui; Chen, Zhiyong; Hou, Wenguang
2018-02-01
Fisheye cameras have been widely used in many applications, including close-range visual navigation and observation and cyber city reconstruction, because their field of view is much larger than that of a common pinhole camera. This means that a fisheye camera can capture more information than a pinhole camera in the same scenario. However, fisheye images contain serious distortion, which may hinder human observers in recognizing the objects within them. Therefore, in most practical applications, the fisheye image should be rectified to a pinhole perspective projection image to conform to human cognitive habits. Traditional mathematical-model-based methods cannot effectively remove the distortion, while the digital distortion model reduces the image resolution to some extent. Considering these defects, this paper proposes a new method that combines a physical spherical model with a digital distortion model. The distortion of fisheye images can be effectively removed by the proposed approach, and many experiments validate its feasibility and effectiveness.
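For readers who want a concrete starting point, OpenCV's equidistant fisheye model gives a workable baseline rectification (note this is the standard library approach, not the combined spherical-plus-digital model the paper proposes; K, D, and the file names are hypothetical calibration outputs):

```python
import cv2
import numpy as np

# Hypothetical intrinsics K and fisheye distortion coefficients D,
# as produced by cv2.fisheye.calibrate() for the camera in question.
K = np.array([[420.0, 0.0, 640.0],
              [0.0, 420.0, 480.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.05, 0.01, -0.002, 0.0005]).reshape(4, 1)

img = cv2.imread("fisheye.jpg")  # placeholder file name
h, w = img.shape[:2]

# Remap the fisheye (equidistant) projection to a pinhole perspective view.
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
    K, D, (w, h), np.eye(3), balance=0.0)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
rectified = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("rectified.jpg", rectified)
```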
NASA Astrophysics Data System (ADS)
Reginald, Nelson Leslie; Gopalswamy, Natchimuthuk; Guhathakurta, Madhulika; Yashiro, Seiji
2016-05-01
Experiments that require polarized-brightness measurements have traditionally obtained them by taking three successive images through a polarizer rotated through three well-defined angles. With the advent of the polarization camera, the polarized brightness can be measured from a single image. This eliminates the polarizer and its rotator mechanism, which contributes to lower weight, smaller size, lower power requirements and, importantly, higher temporal resolution. We intend to demonstrate the capabilities of the polarization camera by conducting a field experiment in conjunction with the total solar eclipse of 21 August 2017 using the Imaging Spectrograph of Coronal Electrons (ISCORE) instrument (Reginald et al., Solar Physics, 2009, 260, 347-361). In this instrumental concept, four K-coronal images taken through four filters centered at 385.0, 398.7, 410.0 and 423.3 nm, each with a bandpass of 4 nm, are expected to allow us to determine the coronal electron temperature and electron speed all around the corona. To determine the K-coronal brightness through each filter, we would have to take three images by rotating a polarizer through three angles for each filter, which is not feasible given the short duration of total solar eclipses. Therefore, in the past we have assumed the total brightness (F + K) measured through each of the four filters to represent the K-coronal brightness, which is a good approximation in the low solar corona. With the polarization camera we can now measure the Stokes polarization parameters on a pixel-by-pixel basis for every image, which allows us to independently quantify the total brightness (K + F) and the polarized brightness (K). In addition to the four filter images that yield the electron temperature and electron speed, taking one more image without a filter provides enough information to determine the electron density. This instrumental concept was first tried in conjunction with the total solar eclipse of 9 March 2016 in Maba, Indonesia, but was unfortunately clouded out.
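On a division-of-focal-plane polarization camera, each 2x2 superpixel samples four polarizer orientations, so the linear Stokes parameters follow from simple pixel arithmetic. A minimal sketch, assuming the common (90, 45 / 135, 0) degree mosaic layout (the actual layout depends on the sensor):

```python
import numpy as np

def stokes_from_polarization_image(raw):
    """Compute linear Stokes parameters from a division-of-focal-plane image.

    Assumes a 2x2 superpixel layout of (90, 45 / 135, 0) degrees; check the
    sensor documentation, as layouts differ between devices.
    """
    i90, i45 = raw[0::2, 0::2].astype(float), raw[0::2, 1::2].astype(float)
    i135, i0 = raw[1::2, 0::2].astype(float), raw[1::2, 1::2].astype(float)

    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total brightness (K + F)
    s1 = i0 - i90                        # linear polarization, 0/90 axis
    s2 = i45 - i135                      # linear polarization, 45/135 axis
    pb = np.hypot(s1, s2)                # polarized brightness (K)
    return s0, s1, s2, pb
```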
NASA Astrophysics Data System (ADS)
Awang Soh, Ahmad Afiq Sabqi; Mat Jafri, Mohd Zubir; Azraai, Nur Zaidi
2017-07-01
In the Malay world, traditional spirit rituals are used in healing practices and in everyday life. Some branches of the Malay martial art silat also include spirit rituals that practitioners say help them in combat. In this paper, no ritual is used; instead, a topical medicine is applied and the environment is changed while the subjects perform. Two performers (fighters) were selected, one with experience in martial arts training and one without. A motion capture (MOCAP) camera system was used to observe and analyze the movements: eight cameras were placed in the MOCAP room, two on each wall, facing the center of the room so as to cover every angle and prevent loss of detection of the markers stamped on the performers' limbs. Passive markers were used, reflecting infrared light back to the camera sensors; the infrared is generated by sources around each camera lens. A 60 kg punching bag hung from an iron bar served as the target for the performers' punches, and markers were also stamped on the bag so its swing when hit could be measured. Each performer executed two moves per condition with the same position and posture, and for each pair of moves the environment was changed without the performer's knowledge: the first two punches were thrown in a normal environment; in the second condition, positive music was played to change the performer's mood; and in the third, a medicinal cream/oil that mildly heats the skin was applied. The process was repeated with the inexperienced performer. The marker positions were analyzed with the Cortex Motion Analysis software, from which the kinetics and kinematics of the performers were estimated. The results show an increase in kinetic measures for every body part as a result of the environmental changes, with different results for the two performers.
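The kinematics extraction mentioned above reduces to numerical differentiation of the marker trajectories; a minimal sketch (the frame rate is an assumed parameter that must match the capture settings):

```python
import numpy as np

def marker_kinematics(positions, fps=200.0):
    """Derive velocity and acceleration from a MOCAP marker trajectory.

    positions: (T, 3) array of a marker's x, y, z coordinates per frame.
    Central differences give velocity (m/s) and acceleration (m/s^2).
    """
    dt = 1.0 / fps
    velocity = np.gradient(positions, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    return velocity, acceleration
```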
1970-01-01
This photograph shows a telescopic camera for ultraviolet star photography for Skylab's Ultraviolet Panorama experiment (S183) placed in the Skylab airlock. The S183 experiment was designed to obtain ultraviolet photographs, at three wavelengths, of hot stars, clusters of stars, large stellar clouds in the Milky Way, and nuclei of other galaxies. The Marshall Space Flight Center had program responsibility for the development of Skylab hardware and experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.
2015-01-12
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
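A minimal sketch of such a TPS warp correction using SciPy's radial basis interpolator (an illustration of the technique, not the NIF production code; locating the comb fiducials is assumed to be done elsewhere):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def tps_warp_correct(image, measured_pts, ideal_pts):
    """Correct streak-camera distortion with a thin-plate-spline warp.

    measured_pts: (N, 2) fiducial positions found in the comb image (row, col).
    ideal_pts:    (N, 2) positions where those fiducials should fall.
    Fits the inverse map (ideal -> measured) so the corrected image can be
    built by sampling the distorted image at the mapped coordinates.
    """
    inverse_map = RBFInterpolator(ideal_pts, measured_pts,
                                  kernel="thin_plate_spline")
    rows, cols = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    grid = np.column_stack([rows.ravel(), cols.ravel()])
    src = inverse_map(grid)  # where each output pixel comes from
    corrected = map_coordinates(image.astype(float),
                                [src[:, 0], src[:, 1]], order=1)
    return corrected.reshape(image.shape)
```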
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Esch, Patrick; Crisanti, Marta; Mutti, Paolo
2015-07-01
A research project is presented in which we aim at counting individual neutrons with CCD-like cameras. We explore theoretically a technique that allows us to use imaging detectors as counting detectors at lower counting rates, and that transitions smoothly to continuous imaging at higher counting rates. As such, the hope is to combine the good background rejection properties of standard neutron counting detectors with the absence of dead time of integrating neutron imaging cameras as well as their very good spatial resolution. Compared to X-ray detection, the essence of thermal neutron detection is the nuclear conversion reaction. The released energies involved are of the order of a few MeV, while X-ray detection releases energies of the order of the photon energy, which is in the 10 keV range. Thanks to advances in camera technology which have resulted in increased quantum efficiency, lower noise, as well as increased frame rate up to 100 fps for CMOS-type cameras, this more than 100-fold higher available detection energy implies that the individual neutron detection light signal can be significantly above the noise level, as such allowing for discrimination and individual counting, which is hard to achieve with X-rays. The time scale of CMOS-type cameras doesn't allow one to consider time-of-flight measurements, but kinetic experiments in the 10 ms range are possible. The theory is next confronted with the first experimental results. (authors)
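The counting mode described above can be illustrated with a simple threshold-and-label pass over each frame; a sketch under the assumption that each neutron event produces an isolated bright blob well above the noise floor:

```python
import numpy as np
from scipy import ndimage

def count_neutron_events(frame, dark_level, threshold_sigma=5.0):
    """Count individual neutron scintillation events in one camera frame.

    Subtract the dark level, threshold well above the read-noise floor
    (possible here because each neutron deposits MeV-scale energy), and
    count connected bright blobs as individual neutron hits.
    """
    signal = frame.astype(float) - dark_level
    # Estimate the noise from the dimmer 90% of pixels.
    noise = np.std(signal[signal < np.percentile(signal, 90)])
    mask = signal > threshold_sigma * noise
    labels, n_events = ndimage.label(mask)
    centroids = ndimage.center_of_mass(signal, labels, range(1, n_events + 1))
    return n_events, centroids
```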
Thermodynamics of Gases: Combustion Processes, Analysed in Slow Motion
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2013-01-01
We present a number of simple demonstration experiments recorded with high-speed cameras in the fields of gas dynamics and thermal physics. The experiments feature relatively slow combustion processes of pure hydrogen as well as fast reactions involving oxy-hydrogen in a stoichiometric mixture. (Contains 4 figures.)
Skylab-3 Mission Onboard Photograph - Astronaut Bean working on Experiment S019
NASA Technical Reports Server (NTRS)
1973-01-01
This Skylab-3 mission onboard photograph shows Astronaut Alan Bean operating the Ultraviolet (UV) Stellar Astronomy experiment (S019) in the Skylab Airlock Module. The S019, a camera with a prism for UV star photography, studied the UV spectra of early-type stars and galaxies.
Fast Fiber-Coupled Imaging Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brockington, Samuel; Case, Andrew; Witherspoon, Franklin Douglas
HyperV Technologies Corp. has successfully designed, built and experimentally demonstrated a full scale 1024 pixel 100 MegaFrames/s fiber coupled camera with 12 or 14 bits, and record lengths of 32K frames, exceeding our original performance objectives. This high-pixel-count, fiber optically-coupled, imaging diagnostic can be used for investigating fast, bright plasma events. In Phase 1 of this effort, a 100 pixel fiber-coupled fast streak camera for imaging plasma jet profiles was constructed and successfully demonstrated. The resulting response from outside plasma physics researchers emphasized development of increased pixel performance as a higher priority over increasing pixel count. In this Phase 2 effort, HyperV therefore focused on increasing the sample rate and bit-depth of the photodiode pixel designed in Phase 1, while still maintaining a long record length and holding the cost per channel to levels which allowed up to 1024 pixels to be constructed. Cost per channel was $53.31, very close to our original target of $50 per channel. The system consists of an imaging "camera head" coupled to a photodiode bank with an array of optical fibers. The output of these fast photodiodes is then digitized at 100 Megaframes per second and stored in record lengths of 32,768 samples with bit depths of 12 to 14 bits per pixel. Longer record lengths are possible with additional memory. A prototype imaging system with up to 1024 pixels was designed and constructed and used to successfully take movies of very fast moving plasma jets as a demonstration of the camera performance capabilities. Some faulty electrical components on the 64 circuit boards resulted in only 1008 functional channels out of 1024 on this first generation prototype system. We experimentally observed backlit high speed fan blades in initial camera testing and then followed that with full movies and streak images of free flowing high speed plasma jets (at 30-50 km/s). Jet structure and jet collisions onto metal pillars in the path of the plasma jets were recorded in a single shot. This new fast imaging system is an attractive alternative to conventional fast framing cameras for applications and experiments where imaging events using existing techniques are inefficient or impossible. The development of HyperV's new diagnostic was split into two tracks: a next generation camera track, in which HyperV built, tested, and demonstrated a prototype 1024 channel camera at its own facility, and a second plasma community beta test track, where selected plasma physics programs received small systems of a few test pixels to evaluate the expected performance of a full scale camera on their experiments. These evaluations were performed as part of an unfunded collaboration with researchers at Los Alamos National Laboratory and the University of California at Davis. Results from the prototype 1024-pixel camera are discussed, as well as results from the collaborations with test pixel system deployment sites.
Compact Video Microscope Imaging System Implemented in Colloid Studies
NASA Technical Reports Server (NTRS)
McDowell, Mark
2002-01-01
Photographs show the fiber-optic light source, the microscope and charge-coupled device (CCD) camera head connected to the camera body, the CCD camera body feeding data to an image acquisition board in a PC, and a Cartesian robot controlled via a PC board. The Compact Microscope Imaging System (CMIS) is a diagnostic tool with intelligent controls for use in space, industrial, medical, and security applications. CMIS can be used in situ with a minimum amount of user intervention. This system can scan, find areas of interest in, focus on, and acquire images automatically. Many multiple-cell experiments require microscopy for in situ observations; this is feasible only with compact microscope systems. CMIS is a miniature machine vision system that combines intelligent image processing with remote control. The software also has a user-friendly interface, which can be used independently of the hardware for further post-experiment analysis. CMIS has been successfully developed in the SML Laboratory at the NASA Glenn Research Center, adapted for colloid studies, and is available for telescience experiments. The main innovations this year are an improved interface, optimized algorithms, and the ability to control conventional full-sized microscopes in addition to compact microscopes. The CMIS software-hardware interface is being integrated into our SML Analysis package, which will be a robust general-purpose image-processing package that can handle over 100 space and industrial applications.
Preliminary design of two Space Shuttle fluid physics experiments
NASA Technical Reports Server (NTRS)
Gat, N.; Kropp, J. L.
1984-01-01
The mid-deck lockers of the STS and the requirements for operating an experiment in this region are described. The designs of the surface-tension-induced convection and free-surface phenomena experiments use a two-locker volume with an experiment-unique structure as a housing. A manual mode is developed for the surface-tension-induced convection experiment. The fluid is maintained in an accumulator pre-flight; to begin the experiment, pressurized gas drives the fluid into the experiment container. The fluid is an inert silicone oil, and the container material is selected to be compatible with it. A wound-wire heater, located axisymmetrically above the fluid, can deliver three power levels, varying from 1 to 15 W, to a spot on the fluid surface. Fluid flow is observed through the motion of particles in the fluid. A 5 mW He-Ne laser illuminates the container, and the scattered light is recorded by a 35 mm camera. The free-surface phenomena experiment consists of a trapezoidal cell which is filled from the bottom. The fluid is photographed at high speed using a 35 mm camera that keeps the entire cell length in the field of view; the assembly can incorporate four cells in one flight. For each experiment, an electronics block diagram is provided. A control panel concept is given for the surface-tension-induced convection experiment. Both experiments are within the mid-deck locker weight and center-of-gravity limits.
Method used to test the imaging consistency of binocular camera's left-right optical system
NASA Astrophysics Data System (ADS)
Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui
2016-09-01
For a binocular camera, the consistency of the optical parameters of the left and right optical systems is an important factor influencing overall imaging consistency. Conventional optical-system testing procedures lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method for measuring the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table and a CMOS camera has been established. First, the left and right optical systems capture images at normal exposure under the same conditions. Second, a contour image is obtained from a multiple-threshold segmentation, and the boundary is determined using the slope of the contour lines near the pseudo-contour line. Third, a gray-level constraint based on corresponding coordinates in the left and right images is established, and imaging consistency is evaluated through the standard deviation σ of the gray-level difference D(x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for imaging-consistency testing of binocular cameras. When the 3σ spread of the gray-level difference D(x, y) between the left and right optical systems does not exceed 5%, the design requirements are considered to have been achieved. This method is effective and paves the way for imaging-consistency testing of binocular cameras.
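The evaluation step reduces to simple statistics once the two images are registered; a minimal sketch (the full-scale value and the reading of the 5% criterion as a fraction of full scale are assumptions based on the abstract):

```python
import numpy as np

def imaging_consistency(left, right, full_scale=255.0):
    """Evaluate left/right imaging consistency of a binocular camera.

    Assumes the two images are already registered pixel-to-pixel; returns
    the gray-level difference map D(x, y), its standard deviation sigma,
    and a pass/fail flag for the 3*sigma <= 5% of full scale criterion.
    """
    d = left.astype(float) - right.astype(float)
    sigma = d.std()
    passed = 3.0 * sigma <= 0.05 * full_scale
    return d, sigma, passed
```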
First experiences with ARNICA, the ARCETRI observatory imaging camera
NASA Astrophysics Data System (ADS)
Lisi, F.; Baffa, C.; Hunt, L.; Maiolino, R.; Moriondo, G.; Stanga, R.
1994-03-01
ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near-infrared bands between 1.0 and 2.5 micrometers that Arcetri Observatory has designed and built as a common-use instrument for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1 arcsec per pixel, with sky coverage of more than 4 arcmin x 4 arcmin on the NICMOS 3 detector array (256 x 256 pixels, 40 micrometer side). The optical path is compact enough to be enclosed in a 25.4 cm diameter dewar; the working temperature of the detector and optics is 76 K. We give an estimate of performance, in terms of sensitivity for an assigned observing time, along with some preliminary considerations on photometric accuracy.
Li, Hanlun; Zhang, Aiwu; Hu, Shaoxing
2015-01-01
This paper describes an airborne high-resolution four-camera multispectral system which mainly consists of four identical monochrome cameras equipped with four interchangeable bandpass filters. For this multispectral system, an automatic method for composing the multispectral data was proposed. The homography registration model was chosen, and the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) were used to generate matching points. For the difficult registration problem between visible-band and near-infrared-band images in scenes lacking manmade objects, we present an effective method based on the structural characteristics of the system. Experiments show that our method can acquire high-quality multispectral images, with a band-to-band alignment error of the composed multispectral images of less than 2.5 pixels. PMID:26205264
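The SIFT-plus-RANSAC homography registration the paper describes maps directly onto standard OpenCV calls; a minimal sketch for aligning one band to a reference band (parameters such as the ratio-test threshold are illustrative, and at least four good matches are required):

```python
import cv2
import numpy as np

def register_band(reference, moving):
    """Align one spectral band to a reference band with SIFT + RANSAC."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(reference, None)
    kp2, des2 = sift.detectAndCompute(moving, None)

    # Lowe's ratio test to keep distinctive matches only.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 2.5)

    h, w = reference.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))
```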
Apollo 9 Mission image - S0-65 Multispectral Photography - Georgia
2009-02-19
AS09-26A-3792A (11 March 1969) --- Color infrared photograph of the Atlanta, Georgia area taken on March 11, 1969, by one of the four synchronized cameras of the Apollo 9 Earth Resources Survey (SO-65) experiment. At 11:21 a.m. (EST) when this picture was taken, the Apollo 9 spacecraft was at an altitude of 106 nautical miles, and the sun elevation was 47 degrees above the horizon. The location of the point on Earth's surface at which the four-camera combination was aimed was 33 degrees 10 minutes north latitude, and 84 degrees and 40 minutes west longitude. The other three cameras used: (B) black and white film with a red filter; (C) black and white infrared film; and (D) black and white film with a green filter.
Cassini Camera Contamination Anomaly: Experiences and Lessons Learned
NASA Technical Reports Server (NTRS)
Haemmerle, Vance R.; Gerhard, James H.
2006-01-01
We discuss the contamination 'Haze' anomaly for the Cassini Narrow Angle Camera (NAC), one of two optical telescopes that comprise the Imaging Science Subsystem (ISS). Cassini is a Saturn orbiter with a 4-year nominal mission. The incident occurred in 2001, five months after the Jupiter encounter during the cruise phase and, ironically, at the resumption of planned maintenance decontamination cycles. The degraded optical performance was first identified by the Instrument Operations Team with the first ISS Saturn imaging six weeks later: a distinct haze of varying size from image to image marred the images of Saturn. A photometric star calibration of the Pleiades, 4 days after the incident, showed stars with halos. Analysis showed that while a halo's intensity was only 1 - 2% of the intensity of the central peak of a star, the halo contained 30 - 70% of its integrated flux. This condition would impact science return. In a review of our experiences, we examine the contamination control plan, discuss the analysis of the limited data available and describe the one-year campaign to remove the haze from the camera. After several long, conservative heating activities and interim analysis of their results, the contamination problem as measured by the camera's point spread function was essentially back to pre-anomaly size and at a point where further heating would carry more risk. We stress the importance of flexibility in operations and instrument design, the need for early in-flight instrument calibration, and continual monitoring of instrument performance.
Chavez-Burbano, Patricia; Guerra, Victor; Rabadan, Jose; Rodríguez-Esparragón, Dionisio; Perez-Jimenez, Rafael
2017-07-04
Due to the massive insertion of embedded cameras in a wide variety of devices and the generalized use of LED lamps, Optical Camera Communication (OCC) has been proposed as a practical solution for future Internet of Things (IoT) and smart cities applications. The influence of mobility, weather conditions, solar radiation interference, and external light sources over Visible Light Communication (VLC) schemes has been addressed in previous works. Some authors have studied the spatial intersymbol interference from close emitters within an OCC system; however, it has not been characterized or measured as a function of the transmitted wavelength. In this work, this interference has been experimentally characterized, and the Normalized Power Signal to Interference Ratio (NPSIR) for easily determining the interference in other implementations, independently of the selected system devices, has also been proposed. A set of experiments in a darkroom, working with RGB multi-LED transmitters and a general purpose camera, was performed in order to obtain the NPSIR values and to validate the deduced equations for the 2D pixel representation of real distances. These parameters were used in the simulation of a wireless sensor network scenario in a small office, where the Bit Error Rate (BER) of the communication link was calculated. The experiments show that the interference of other close emitters in terms of distance and wavelength can be easily determined with the NPSIR. Finally, the simulation validates the applicability of the deduced equations for scaling the initial results into real scenarios.
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1991-01-01
A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.
Barbosa, F.; Bessuille, J.; Chudakov, E.; ...
2017-02-03
We present the GlueX DIRC (Detection of Internally Reflected Cherenkov light) detector that is being developed to upgrade the particle identification capabilities in the forward region of the GlueX experiment at Jefferson Lab. The GlueX DIRC will utilize four existing decommissioned BaBar DIRC bar boxes, which will be oriented to form a plane roughly 4 m away from the fixed target of the experiment. A new photon camera has been designed that is based on the SuperB FDIRC prototype. The full GlueX DIRC system will consist of two such cameras, with the first planned to be built and installed in 2017. In addition, we present the current status of the design and R&D, along with the future plans of the GlueX DIRC detector.
NASA Technical Reports Server (NTRS)
Sutro, L. L.; Lerman, J. B.
1973-01-01
The operation of a system is described that was built both to model the vision of primates, including man, and to serve as a pre-prototype of a possible object recognition system. It was employed in a series of experiments to determine the practicability of matching left and right images of a scene to determine the range and form of objects. The experiments started with computer-generated random-dot stereograms as inputs and progressed through random-square stereograms to a real scene. The major problems were the elimination of spurious matches between the left and right views, and the interpretation of ambiguous regions: those on the left side of an object that can be viewed only by the left camera, and those on the right side of an object that can be viewed only by the right camera.
Murphy, P J; Morgan, P B; Patel, S; Marshall, J
1999-05-01
The non-contact corneal aesthesiometer (NCCA) assesses corneal sensitivity by using a controlled pulse of air, directed at the corneal surface. The purpose of this paper was to investigate whether corneal surface temperature change was a component in the mode of stimulation. Thermocouple experiment: A simple model corneal surface was developed that was composed of a moistened circle of filter paper placed on a thermocouple and mounted on a glass slide. The temperature change produced by different stimulus pressures was measured for five different ambient temperatures. Thermal camera experiment: Using a thermal camera, the corneal surface temperature change was measured in nine young, healthy subjects after exposure to different stimulus air pulses. Pulse duration was set at 0.9 s but was varied in pressure from 0.5 to 3.5 millibars. Thermocouple experiment: An immediate drop in temperature was detected by the thermocouple as soon as the air flow was incident on the filter paper. A greater temperature change was produced by increasing the pressure of the incident air flow. A relationship was found and a calibration curve plotted. Thermal camera experiment: For each subject, a drop in surface temperature was detected at each stimulus pressure. Furthermore, as the stimulus pressure increased, the induced reduction in temperature also increased. A relationship was found and a calibration curve plotted. The NCCA air-pulse stimulus was capable of producing a localized temperature change on the corneal surface. The principal mode of corneal nerve stimulation, by the NCCA air pulse, was the rate of temperature change of the corneal surface.
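Fitting the calibration curve the authors mention is a small least-squares problem; a minimal sketch, assuming paired pressure/temperature-drop measurements are already available (the polynomial degree is an illustrative choice):

```python
import numpy as np

def calibration_curve(pressures_mbar, delta_t_celsius, degree=1):
    """Fit a calibration curve of induced temperature change vs air-pulse pressure.

    pressures_mbar / delta_t_celsius: paired measurements from the thermal
    camera (or thermocouple) experiment. Returns a polynomial that predicts
    the surface temperature drop for a given stimulus pressure.
    """
    coeffs = np.polyfit(pressures_mbar, delta_t_celsius, degree)
    return np.poly1d(coeffs)
```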
1972-01-01
This photograph describes details of the telescopic camera for ultraviolet star photography for Skylab's Ultraviolet Panorama experiment (S183) placed in the Skylab airlock. The S183 experiment was designed to obtain ultraviolet photographs at three wavelengths of hot stars, clusters of stars, large stellar clouds in the Milky Way, and nuclei of other galaxies. The Marshall Space Flight Center had program responsibility for the development of Skylab hardware and experiments.
Snaptran2 experiment mounted on dolly being hauled by shielded locomotive ...
Snaptran-2 experiment mounted on dolly being hauled by shielded locomotive from IET towards A&M turntable. Note leads from experiment gathered at coupling bar in lower right of view. Another dolly in view at left. Camera facing southeast. Photographer: Page Comiskey. Date: August 25, 1965. INEEL negative no. 65-4503 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Sensor 17 Thermal Isolation Mounting Structure (TIMS) Design Improvements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enstrom, K.
2015-09-04
The SENSOR 17 thermographic camera weighs approximately 0.5 lb, has a fundamental mode of 167 Hz, and experiences 0.75 W of heat leakage through the TIMS. The configuration, shown in Figure 1, comprises four 300-series SST washers paired in tandem with P.E.I. (Ultem 100) washers. The SENSOR 17 sensor is mounted to a 300-series stainless plate with A-shaped arms. The plate can be assumed to be at ambient temperature (≈293 K), and the I.R. mount needs to be cooled to 45 K. It is attached to the tip of a cryocooler by a 'cold strap' and is assumed to be at the temperature of the cold strap (≈45 K). During flights, SENSOR 17 experiences excitations at frequencies centered around 10-30 Hz, 60 Hz, and 120 Hz from the aircraft flight environment. The temporal progression described below depicts the first modal shape at the system's resonant frequency. The simulation indicates that modal articulation will cause a pitch rate of the camera with respect to the body axis of the airplane; this articulation shows up as flutter in the camera.
A measurement system applicable for landslide experiments in the field.
Guo, Wen-Zhao; Xu, Xiang-Zhou; Wang, Wen-Long; Yang, Ji-Shan; Liu, Ya-Kun; Xu, Fei-Long
2016-04-01
Observation of gravity erosion in the field under strong sunshine and wind poses a challenge. Here, a novel topography meter together with a movable tent addresses that challenge. With the topography meter, the 3D geometric shape of the target surface can be digitally reconstructed. Before a test begins, the laser generator position and the camera sightline should be adjusted with a sight calibrator. Typically, the topography meter can measure gravity erosion on slopes with gradients of 30°-70°. Two methods can be used to obtain a relatively clear video despite the extreme steepness of the slopes. One is to rotate the laser source away from the slope so that the camera sightline remains perpendicular to the laser plane. The other is to move the camera farther away from the slope, in which case the measured volume of the slope needs to be corrected; this method reduces distortion of the image. In addition, installing the tent poles on concrete columns helps to surmount the altitude difference on steep slopes. Results obtained by the topography meter in real landslide experiments are rational and reliable.
Compact 3D Camera for Shake-the-Box Particle Tracking
NASA Astrophysics Data System (ADS)
Hesseling, Christina; Michaelis, Dirk; Schneiders, Jan
2017-11-01
Time-resolved 3D-particle tracking usually requires the time-consuming optical setup and calibration of 3 to 4 cameras. Here, a compact four-camera housing has been developed. The performance of the system using Shake-the-Box processing (Schanz et al. 2016) is characterized. It is shown that the stereo-base is large enough for sensible 3D velocity measurements. Results from successful experiments in water flows using LED illumination are presented. For large-scale wind tunnel measurements, an even more compact version of the system is mounted on a robotic arm. Once calibrated for a specific measurement volume, the necessity for recalibration is eliminated even when the system moves around. Co-axial illumination is provided through an optical fiber in the middle of the housing, illuminating the full measurement volume from one viewing direction. Helium-filled soap bubbles are used to ensure sufficient particle image intensity. This way, the measurement probe can be moved around complex 3D-objects. By automatic scanning and stitching of recorded particle tracks, the detailed time-averaged flow field of a full volume of cubic meters in size is recorded and processed. Results from an experiment at TU-Delft of the flow field around a cyclist are shown.
Measurement of vibration using phase only correlation technique
NASA Astrophysics Data System (ADS)
Balachandar, S.; Vipin, K.
2017-08-01
A novel method for the measurement of vibration is proposed and demonstrated. The proposed experiment is based on laser triangulation and consists of a line laser, the object under test, and a high-speed camera remotely controlled by software. The experiment involves launching a line-laser probe beam perpendicular to the axis of the vibrating object. The reflected probe beam is recorded by the high-speed camera, and the dynamic position of the laser line in the camera plane is governed by the magnitude and frequency of the vibrating test object. Using the phase correlation technique, the maximum distance traveled by the probe beam in the CCD plane is measured in pixels using MATLAB, and the actual displacement of the object in mm is obtained by calibration. From the displacement-time data, other vibration quantities such as acceleration, velocity and frequency are evaluated. Preliminary results of the proposed method are reported for accelerations from 1 g to 3 g and frequencies from 6 Hz to 26 Hz, and they closely match theoretical values. The advantage of the proposed method is that it is non-destructive, and with the phase correlation algorithm, subpixel displacements in the CCD plane can be measured with high accuracy.
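Phase-only correlation itself is a few lines of FFT arithmetic; a minimal sketch returning the integer-pixel shift (the subpixel refinement the paper relies on, e.g. a parabolic fit around the peak, is omitted for brevity):

```python
import numpy as np

def poc_shift(img_a, img_b):
    """Estimate the displacement between two frames by phase-only correlation.

    The cross-power spectrum normalized to unit magnitude keeps only phase
    information; its inverse FFT peaks at the relative shift of the frames.
    """
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12      # keep phase only
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates to signed shifts.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```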
Writing for the Big Screen: Literacy Experiences in a Moviemaking Project
ERIC Educational Resources Information Center
Bedard, Carol; Fuhrken, Charles
2011-01-01
An integrated language arts and technology program engaged students in reading and writing activities that funded an experience in moviemaking. With video cameras in hand, students, often working collaboratively, developed expanded views of the writing and revision processes as they created movies that mattered to them and found an audience beyond…
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2012-01-01
We present fascinating simple demonstration experiments recorded with high-speed cameras in the field of fluid dynamics. Examples include oscillations of falling droplets, effects happening upon impact of a liquid droplet into a liquid, the disintegration of extremely large droplets in free fall and the consequences of incompressibility. (Contains…
View of MISSE taken during Expedition Six
2003-01-01
ISS006-348-019 (January 2003) ---- Materials International Space Station Experiment (MISSE), a suitcase-sized experiment attached to the outside of the space station to expose hundreds of potential space construction materials to the environment, leading to stronger, more durable spacecraft construction. Photographed by one of the Expedition 6 crew members with a 35mm camera.
Measuring Stellar Temperatures: An Astrophysical Laboratory for Undergraduate Students
ERIC Educational Resources Information Center
Cenadelli, D.; Zeni, M.
2008-01-01
While astrophysics is a fascinating subject, it hardly lends itself to laboratory experiences accessible to undergraduate students. In this paper, we describe a feasible astrophysical laboratory experience in which the students are guided to take several stellar spectra, using a telescope, a spectrograph and a CCD camera, and perform a full data…
Picturing Leisure: Using Photovoice to Understand the Experience of Leisure and Dementia
ERIC Educational Resources Information Center
Genoe, M. Rebecca; Dupuis, Sherry L.
2013-01-01
Interviews and participant observation are commonly used to explore the experience of dementia, yet may not adequately capture perspectives of persons with dementia as communication changes. We used photovoice (i.e., using cameras in qualitative research) along with interviews and participant observation to explore meanings of leisure for persons…
A Simple Educational Method for the Measurement of Liquid Binary Diffusivities
ERIC Educational Resources Information Center
Rice, Nicholas P.; de Beer, Martin P.; Williamson, Mark E.
2014-01-01
A simple low-cost experiment has been developed for the measurement of the binary diffusion coefficients of liquid substances. The experiment is suitable for demonstrating molecular diffusion to small or large undergraduate classes in chemistry or chemical engineering. Students use a cell phone camera in conjunction with open-source image…
Looking at Distance Learning through Both Ends of the Camera.
ERIC Educational Resources Information Center
Whitworth, Joan M.
This investigation chronicled the experiences of an instructor and her students as they first experienced a distance course that utilized various technologies. Both the instructor and the students had limited or no experience with e-mail, use of the Internet, or the supporting software. The students were 33 elementary school teachers taking a…
Composite x-ray pinholes for time-resolved microphotography of laser compressed targets.
Attwood, D T; Weinstein, B W; Wuerker, R F
1977-05-01
Composite x-ray pinholes having dichroic properties are presented. These pinholes permit both x-ray imaging and visible alignment with micron accuracy by presenting different apparent apertures in these widely disparate regions of the spectrum. Their use is mandatory in certain applications in which the x-ray detection consists of a limited number of resolvable elements whose use one wishes to maximize. Mating the pinhole camera with an x-ray streaking camera is described, along with experiments which spatially and temporally resolve the implosion of laser irradiated targets.
A design of driving circuit for star sensor imaging camera
NASA Astrophysics Data System (ADS)
Li, Da-wei; Yang, Xiao-xu; Han, Jun-feng; Liu, Zhao-hui
2016-01-01
The star sensor is a high-precision attitude measurement instrument that determines spacecraft attitude by detecting star positions on the celestial sphere. The imaging camera is an important part of a star sensor. The purpose of this study is to design a driving circuit for a Kodak CCD sensor. The design of the driving circuit based on the Kodak KAI-04022 is discussed, and the timing of this CCD sensor is analyzed. Laboratory tests of the driving circuit and imaging experiments show that the driving circuit meets the requirements of the Kodak CCD sensor.
View of Arabella, one of the two Skylab 3 spiders used in experiment
NASA Technical Reports Server (NTRS)
1973-01-01
A close-up view of Arabella, one of the two Skylab 3 common cross spiders 'Araneus diadematus,' and the web it had spun in the zero gravity of space aboard the Skylab space station cluster in Earth orbit. This is a photographic reproduction made from a color television transmission aboard Skylab. Arabella and Anita, were housed in an enclosure onto which a motion picture camera and a still camera were attached to record the spiders' attempts to build a web in the weightless environment.
DSLR Double Star Astrometry Using an Alt-Az Telescope
NASA Astrophysics Data System (ADS)
Frey, Thomas; Haworth, David
2014-07-01
The goal of this project was to determine whether a double star's angular separation and position angle could be successfully measured with a motor-driven, alt-azimuth Dobsonian-mounted Newtonian telescope (without a field rotator) and a digital single-lens reflex (DSLR) camera. Additionally, the project was constrained to use as much existing equipment as possible, including an Apple MacBook Pro laptop and a Canon T2i camera. The project was additionally challenging because the first author had no experience with astrophotography.
Removal of instrument signature from Mariner 9 television images of Mars
NASA Technical Reports Server (NTRS)
Green, W. B.; Jepsen, P. L.; Kreznar, J. E.; Ruiz, R. M.; Schwartz, A. A.; Seidman, J. B.
1975-01-01
The Mariner 9 spacecraft was inserted into orbit around Mars in November 1971. The two vidicon camera systems returned over 7300 digital images during orbital operations. The high volume of returned data and the scientific objectives of the Television Experiment made it necessary to develop automated digital techniques for removing camera-system-induced distortions from each returned image. This paper describes the algorithms used to remove geometric and photometric distortions from the returned imagery; enhancement processing of the final photographic products is also described.
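The photometric part of such decalibration is conventionally a dark-frame and flat-field correction; the generic sketch below illustrates the idea only and is not the mission's actual algorithm:

```python
import numpy as np

def photometric_decalibration(raw, dark, flat):
    """Generic photometric correction of a camera frame.

    Subtract the dark frame and divide by the normalized flat field --
    a standard scheme for removing sensor-induced shading, offered here
    only as an illustration of the kind of correction described.
    """
    flat_norm = (flat - dark) / np.mean(flat - dark)
    return (raw - dark) / np.maximum(flat_norm, 1e-6)
```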
Staking out Curiosity Landing Site
2012-08-09
The geological context for the landing site of NASA's Curiosity rover is visible in this image mosaic obtained by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter.
Vasu, Subith S.; Pryor, Owen; Barak, Samuel; ...
2017-03-12
Common definitions of ignition delay time are often hard to apply due to bifurcation and other non-idealities that result from high levels of CO2 addition. Using high-speed camera imagery in comparison with more standard methods (e.g., pressure, emission, and laser absorption spectroscopy) to measure the ignition delay time, the effect of bifurcation has been examined in this study. Experiments were performed at pressures between 0.6 and 1.2 atm and temperatures between 1650 and 2040 K. The equivalence ratio for all experiments was kept at a constant value of 1 with methane as the fuel. The CO2 mole fraction was varied from XCO2 = 0.00 to 0.895. The ignition delay time was determined from three different measurements at the sidewall: broadband chemiluminescent emission captured via a photodetector, CH4 concentrations determined using a distributed feedback interband cascade laser centered at 3403.4 nm, and pressure recorded via a dynamic Kistler-type transducer. All methods for the ignition delay time were compared to high-speed camera images taken of the axial cross-section during combustion. Methane time-histories and methane decay times were also measured using the laser. It was determined that the flame could be correlated to the ignition delay time measured at the sidewall, but that the flame as captured by the camera was not homogeneous as assumed in typical shock tube experiments. The bifurcation of the shock wave resulted in smaller flames with large boundary layers; the flame could be as small as 30% of the cross-sectional area of the shock tube at the highest levels of CO2 dilution. Comparisons between the camera images and the different ignition delay time methods show that care must be taken in interpreting traditional ignition delay data for experiments with large bifurcation effects, as different methods of measuring the ignition delay time could result in different interpretations of kinetic mechanisms and impede the development of future mechanisms.
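For readers unfamiliar with how delay times are extracted from sidewall traces, one common convention (among several; the study documents its own definitions) is sketched below:

```python
import numpy as np

def ignition_delay(t, pressure, emission):
    """Estimate ignition delay time from sidewall pressure and emission traces.

    Time zero: shock arrival at the sidewall, taken as the steepest rise of
    the pressure signal. Ignition: intercept of the tangent at the steepest
    emission rise with the pre-ignition baseline. One common convention;
    individual studies document their own choice.
    """
    t0 = t[np.argmax(np.gradient(pressure, t))]

    demission = np.gradient(emission, t)
    i = np.argmax(demission)
    baseline = emission[:max(i // 4, 2)].mean()
    t_ign = t[i] - (emission[i] - baseline) / demission[i]
    return t_ign - t0
```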
NASA Astrophysics Data System (ADS)
Mitishita, E.; Debiasi, P.; Hainosz, F.; Centeno, J.
2012-07-01
Digital photogrammetric products from the integration of imagery and lidar datasets are a reality nowadays. When the imagery and lidar surveys are performed together and the camera is connected to the lidar system, direct georeferencing can be applied to compute the exterior orientation parameters of the images. Direct georeferencing of the images requires accurate interior orientation parameters (IOPs) for photogrammetric applications, and camera calibration is the procedure used to compute the IOPs. Calibration research has established that, to obtain accurate IOPs, the calibration must be performed under the same conditions as the photogrammetric survey. This paper presents the methodology and experimental results of in-situ self-calibration using a simultaneous image block and lidar dataset; the calibration results are analyzed and discussed. For this research a test field was established in an urban area, and a set of signalized points was installed on it for use as check or control points. Photogrammetric images and the lidar dataset of the test field were acquired simultaneously. Four flight strips were used to obtain a cross layout, flown in opposite directions (W-E, E-W, N-S and S-N). A Kodak DSC Pro SLR/c digital camera was connected to the lidar system, and the coordinates of the exposure stations were computed from the lidar trajectory. Different layouts of vertical control points were used in the calibration experiments, with vertical coordinates taken either from a precise differential GPS survey or interpolated from the lidar dataset. The positions of the exposure stations were used as control points in the calibration procedure to eliminate the linear dependency among the interior and exterior orientation parameters; this dependency arises in the calibration procedure when vertical images and a flat test field are used. The mathematical correlation of the interior and exterior orientation parameters, as well as the accuracies of the calibration experiments, are analyzed and discussed.
NASA Technical Reports Server (NTRS)
Kenney, G. P.
1974-01-01
The S190B Earth Terrain Camera (ETC) operated acceptably for all of its scheduled EREP passes throughout the SL2 mission. The crew reported no problems in unstowing the camera, changing filters, installing the ETC window in the SAL, or installing the camera onto the window. The ETC was operated a total of seven times with no failures. The clock on the ETC was checked on DOY 170 (June 19, 1973) and was found to be 30 min. and 58 sec. slower than GMT. The change in time was expected since a similar circumstance was experienced during ETC qualification testing for launch vibration. A leak existed in the seal of the spare magazine to the camera vacuum interface. For EREP passes 08 and 10, black-and-white film EK 3414 (roll no. 82) was installed in this spare magazine. Since there was an audible hiss, the vacuum hose was not connected to the camera. This caused the vacuum platen to be inoperable, resulting in some degradation in resolution for this roll of film. The vegetation of the South American jungle areas proved to be much darker than vegetation found in the United States, and was consequently about 1/2 stop underexposed in all cases.
Line following using a two camera guidance system for a mobile robot
NASA Astrophysics Data System (ADS)
Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.
1996-10-01
Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space and defense. A mobile robot was designed for the 1996 Automated Unmanned Vehicle Society competition, held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for line following. The algorithm images two windows and locates the line centroid in each; with the knowledge that these points lie on the ground plane, a mathematical and geometrical relationship between their image coordinates and the corresponding ground coordinates is established. The angle of the line and its minimum distance from the robot centroid are then calculated and used in the steering control, as sketched below. Two cameras are mounted on the robot, one on each side. One camera guides the robot, and when it loses track of the line on its side, the control system automatically switches to the other camera. The test bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.
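A minimal sketch of that geometry, assuming a precalibrated image-to-ground homography (the helper name and conventions are illustrative, not the authors' code):

```python
import numpy as np

def line_pose(centroid_near, centroid_far, H_image_to_ground):
    """Recover line heading and lateral offset from two window centroids.

    centroid_*: (u, v) image coordinates of the line centroid in each window.
    H_image_to_ground: 3x3 homography mapping image points to ground-plane
    coordinates (from an offline calibration). Returns the line angle in the
    vehicle frame and the perpendicular distance from the vehicle origin.
    """
    def to_ground(uv):
        p = H_image_to_ground @ np.array([uv[0], uv[1], 1.0])
        return p[:2] / p[2]

    p1, p2 = to_ground(centroid_near), to_ground(centroid_far)
    direction = p2 - p1
    angle = np.arctan2(direction[1], direction[0])
    # Perpendicular distance from the origin to the infinite line p1-p2.
    cross = direction[0] * (0.0 - p1[1]) - direction[1] * (0.0 - p1[0])
    offset = abs(cross) / np.linalg.norm(direction)
    return angle, offset
```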
The Effects of Radiation on Imagery Sensors in Space
NASA Technical Reports Server (NTRS)
Mathis, Dylan
2007-01-01
Recent experience using high definition video on the International Space Station reveals camera pixel degradation due to particle radiation to be a much more significant problem with high definition cameras than with standard definition video. Although it may at first appear that increased pixel density on the imager is the logical explanation for this, the ISS implementations of high definition suggest a more complex mix of causal and mediating factors. The degree of damage seems to vary from one type of camera to another, and this variation prompts a reconsideration of the possible factors in pixel loss, such as imager size, number of pixels, pixel aperture ratio, imager type (CCD or CMOS), method of error correction/concealment, and the method of compression used for recording or transmission. The problem of imager pixel loss due to particle radiation is not limited to out-of-atmosphere applications. Since particle radiation increases with altitude, it is not surprising to find anecdotal evidence that video cameras subject to many hours of airline travel show an increased incidence of pixel loss. This is even evident in some standard definition video applications, and pixel loss due to particle radiation only stands to become a more salient issue given the continued diffusion of high definition video cameras in the marketplace.
Neuromorphic Event-Based 3D Pose Estimation
Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.
2016-01-01
Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30–60 Hz can rarely be processed in real-time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547
A position and attitude vision measurement system for wind tunnel slender model
NASA Astrophysics Data System (ADS)
Cheng, Lei; Yang, Yinong; Xue, Bindang; Zhou, Fugen; Bai, Xiangzhi
2014-11-01
A position and attitude vision measurement system for a drop-test slender model in a wind tunnel is designed and developed. The system uses two high-speed cameras: one is placed to the side of the model and the other where it can look up at the model. Simple symbols are set on the model. The main idea of the system is image matching between projection images of the 3D digital model and the images captured by the cameras. First, we evaluate the pitch angle, roll angle and centroid position of the model by recognizing the symbols in the images captured by the side camera. Then, based on the evaluated attitude information and a series of candidate yaw angles, a series of projection images of the 3D digital model is generated. Finally, these projection images are matched against the image captured by the looking-up camera, and the yaw angle corresponding to the best-matching projection image is taken as the yaw angle of the model. Simulation experiments are conducted and the results show that the maximal error of attitude measurement is less than 0.05°, which meets the demands of wind tunnel testing.
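A compact sketch of the yaw-search step described above, assuming a hypothetical render_projection(yaw) helper that generates the model's projection image for a candidate yaw; normalized cross-correlation stands in for whatever matching score the authors actually used:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation score between two equal-size images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def estimate_yaw(lookup_image, render_projection, yaw_candidates):
    """Return the candidate yaw whose model projection image best matches
    the image from the looking-up camera."""
    scores = [ncc(lookup_image, render_projection(yaw)) for yaw in yaw_candidates]
    return yaw_candidates[int(np.argmax(scores))]
```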
NASA Astrophysics Data System (ADS)
Li, J.; Wu, Z.; Wei, X.; Zhang, Y.; Feng, F.; Guo, F.
2018-04-01
Cross-calibration has the advantages of high precision, low resource requirements and simple implementation, and it has been widely used in recent years. The four wide-field-of-view (WFV) cameras on board the Gaofen-1 satellite provide high spatial resolution and wide combined coverage (4 × 200 km) without onboard calibration. In this paper, the four-band radiometric cross-calibration coefficients of the WFV1 camera were obtained based on radiation and geometry matching, taking the Landsat 8 OLI (Operational Land Imager) sensor as the reference. The Scale Invariant Feature Transform (SIFT) feature detection method and a distance and included-angle weighting method were introduced to correct misregistration of the WFV-OLI image pairs. A radiative transfer model was used to eliminate differences between the OLI sensor and the WFV1 camera through a spectral match factor (SMF). The near-infrared band of the WFV1 camera encompasses water vapor absorption bands, so a look-up table (LUT) of SMF as a function of water vapor amount was established to estimate water vapor effects. A surface synchronization experiment was designed to verify the reliability of the cross-calibration coefficients, which appear to perform better than the official coefficients published by the China Centre for Resources Satellite Data and Application (CCRSDA).
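Per band, this kind of coefficient estimation reduces to a least-squares fit between matched WFV1 digital numbers and SMF-adjusted OLI radiance. A sketch under those assumptions; the array names and the linear gain/offset form are illustrative, not the paper's exact formulation:

```python
import numpy as np

def cross_calibration_fit(wfv_dn, oli_radiance, smf):
    """Least-squares gain/offset for one band: WFV1 digital numbers against
    OLI-derived radiance adjusted by the spectral match factor (SMF).
    Inputs are 1-D arrays over radiometrically/geometrically matched pixels."""
    target = oli_radiance * smf            # radiance the WFV1 band should record
    gain, offset = np.polyfit(wfv_dn, target, 1)
    return gain, offset
```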
An efficient multiple exposure image fusion in JPEG domain
NASA Astrophysics Data System (ADS)
Hebbalaguppe, Ramya; Kakarala, Ramakrishna
2012-01-01
In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras; altering camera settings like ISO sensitivity, exposure time and aperture for low-light capture results in noise amplification, motion blur and reduction of depth-of-field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of shorter-exposed images, image fusion, artifact removal and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience that is available for JPEG.
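A pixel-domain sketch of the sigmoidal boosting and saturation-driven fusion steps named above (the actual algorithm operates on JPEG macroblocks in the DCT domain; the function names and constants here are illustrative):

```python
import numpy as np

def sigmoid_boost(img, gain=8.0, midpoint=0.45):
    """Sigmoidal boosting of a short-exposure image (values scaled to [0, 1]),
    lifting midtones before fusion; the constants are illustrative."""
    return 1.0 / (1.0 + np.exp(-gain * (img - midpoint)))

def fuse_exposures(short_img, long_img, sat_thresh=0.95):
    """Keep long-exposure pixels (better SNR) except where they saturate;
    there, substitute boosted short-exposure detail."""
    mask = long_img >= sat_thresh          # saturation detection
    return np.where(mask, sigmoid_boost(short_img), long_img)
```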
3D digital image correlation using a single 3CCD colour camera and dichroic filter
NASA Astrophysics Data System (ADS)
Zhong, F. Q.; Shao, X. X.; Quan, C.
2018-04-01
In recent years, three-dimensional digital image correlation methods using a single colour camera have been reported. In this study, we propose a simplified system by employing a dichroic filter (DF) to replace the beam splitter and colour filters. The DF can be used to combine two views from different perspectives reflected by two planar mirrors and eliminate their interference. A 3CCD colour camera is then used to capture two different views simultaneously via its blue and red channels. Moreover, the measurement accuracy of the proposed method is higher since the effect of refraction is reduced. Experiments are carried out to verify the effectiveness of the proposed method. It is shown that the interference between the blue and red views is insignificant. In addition, the measurement accuracy of the proposed method is validated on the rigid body displacement. The experimental results demonstrate that the measurement accuracy of the proposed method is higher compared with the reported methods using a single colour camera. Finally, the proposed method is employed to measure the in- and out-of-plane displacements of a loaded plastic board. The re-projection errors of the proposed method are smaller than those of the reported methods using a single colour camera.
NASA Technical Reports Server (NTRS)
Humphreys, William M., Jr.; Bartram, Scott M.
2001-01-01
A novel multiple-camera system for the recording of digital particle image velocimetry (DPIV) images acquired in a two-dimensional separating/reattaching flow is described. The measurements were performed in the NASA Langley Subsonic Basic Research Tunnel as part of an overall series of experiments involving the simultaneous acquisition of dynamic surface pressures and off-body velocities. The DPIV system utilized two frequency-doubled Nd:YAG lasers to generate two coplanar, orthogonally polarized light sheets directed upstream along the horizontal centerline of the test model. A recording system containing two pairs of matched high resolution, 8-bit cameras was used to separate and capture images of illuminated tracer particles embedded in the flow field. Background image subtraction was used to reduce undesirable flare light emanating from the surface of the model, and custom pixel alignment algorithms were employed to provide accurate registration among the various cameras. Spatial cross correlation analysis with median filter validation was used to determine the instantaneous velocity structure in the separating/reattaching flow region illuminated by the laser light sheets. In operation the DPIV system exhibited a good ability to resolve large-scale separated flow structures with acceptable accuracy over the extended field of view of the cameras. The recording system design provided enhanced performance versus traditional DPIV systems by allowing a variety of standard and non-standard cameras to be easily incorporated into the system.
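For reference, a minimal version of the two core analysis steps named above: FFT-based spatial cross-correlation of one interrogation window pair, and median-filter validation of the resulting vector field. Window handling and the tolerance value are illustrative, not the authors' custom implementation:

```python
import numpy as np
from scipy.ndimage import median_filter

def window_displacement(win_a, win_b):
    """Particle-pattern displacement between two interrogation windows via
    FFT-based cross-correlation (peak location relative to window centre)."""
    fa = np.fft.rfft2(win_a - win_a.mean())
    fb = np.fft.rfft2(win_b - win_b.mean())
    corr = np.fft.fftshift(np.fft.irfft2(fa.conj() * fb, s=win_a.shape))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dx - win_a.shape[1] // 2, dy - win_a.shape[0] // 2

def median_validate(u, tol=2.0):
    """Flag vectors deviating from their 3x3 neighbourhood median."""
    return np.abs(u - median_filter(u, size=3)) > tol
```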
Quantifying Plant Colour and Colour Difference as Perceived by Humans Using Digital Images
Kendal, Dave; Hauser, Cindy E.; Garrard, Georgia E.; Jellinek, Sacha; Giljohann, Katherine M.; Moore, Joslin L.
2013-01-01
Human perception of plant leaf and flower colour can influence species management. Colour and colour contrast may influence the detectability of invasive or rare species during surveys. Quantitative, repeatable measures of plant colour are required for comparison across studies and generalisation across species. We present a standard method for measuring plant leaf and flower colour traits using images taken with digital cameras. We demonstrate the method by quantifying the colour of and colour difference between the flowers of eleven grassland species near Falls Creek, Australia, as part of an invasive species detection experiment. The reliability of the method was tested by measuring the leaf colour of five residential garden shrub species in Ballarat, Australia using five different types of digital camera. Flowers and leaves had overlapping but distinct colour distributions. Calculated colour differences corresponded well with qualitative comparisons. Estimates of proportional cover of yellow flowers identified using colour measurements correlated well with estimates obtained by measuring and counting individual flowers. Digital SLR and mirrorless cameras were superior to phone cameras and point-and-shoot cameras for producing reliable measurements, particularly under variable lighting conditions. The analysis of digital images taken with digital cameras is a practicable method for quantifying plant flower and leaf colour in the field or lab. Quantitative, repeatable measurements allow for comparisons between species and generalisations across species and studies. This allows plant colour to be related to human perception and preferences and, ultimately, species management. PMID:23977275
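A small illustration of a quantitative colour-difference measure of the kind this study relies on; the paper's exact formula is not reproduced here, so the sketch uses the simple CIE76 distance in CIELAB space with made-up flower/leaf values:

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """Colour difference as Euclidean distance in CIELAB (the CIE76 formula)."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

# e.g. a yellow flower against a green leaf (illustrative Lab values)
print(delta_e_cie76([80, -5, 75], [45, -40, 35]))  # ~ 63.6
```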
Studies on the formation, temporal evolution and forensic applications of camera "fingerprints".
Kuppuswamy, R
2006-06-02
A series of experiments was conducted by exposing negative film in brand-new cameras of different makes and models. The exposures were repeated at regular time intervals spread over a period of 2 years. The processed film negatives were studied under a stereomicroscope (10-40x) in transmitted illumination for the presence of characterizing features on their four frame edges. These features were then related to those present on the masking frame of the cameras by examining the latter under reflected-light stereomicroscopy (10-40x). The purpose of the study was to determine the origin and permanence of the frame-edge marks, and also the processes by which the marks may alter with time. The investigations arrived at the following conclusions: (i) the edge marks originate principally from imperfections imparted to the film mask during manufacturing, and occasionally from dirt, dust and fiber accumulated on the film mask over an extended time period. (ii) The edge profiles of the cameras remained fixed over a considerable period of time, so as to serve as a valuable identification medium. (iii) The marks were found to vary in nature even among cameras manufactured at a similar time. (iv) The f/number and object distance have a great effect on the recording of the frame-edge marks during exposure of the film. The above findings serve as a useful addition to the technique of camera edge-mark comparisons.
Camera Installation on a Beech AT-11
1950-02-21
Researchers at the National Advisory Committee for Aeronautics (NACA) Lewis Flight Propulsion Laboratory conducted an extensive investigation into the composition of clouds and their effect on aircraft icing. The researcher in this photograph is installing cameras on a Beech AT-11 Kansan in order to photograph water droplets during flights through clouds. The twin-engine AT-11 was the primary training aircraft for World War II bomber crews. The NACA acquired this aircraft in January 1946, shortly after the end of the war. NACA Lewis' icing research during the war focused on the resolution of icing problems for specific military aircraft. In 1947 the laboratory broadened its program and began systematically measuring and categorizing clouds and water droplets. The three main thrusts of the Lewis icing flight research were the development of better instrumentation, the accumulation of data on ice buildup during flight, and the measurement of droplet sizes in clouds. The NACA researchers developed several types of measurement devices for the icing flights, including modified cameras. The National Research Council of Canada experimented with high-speed cameras with a large magnification lens to photograph the droplets suspended in the air. In 1951 NACA Lewis developed and flight tested its own camera with a magnification of 32. The camera, mounted to an external strut, could be used every five seconds as the aircraft reached speeds up to 150 miles per hour. The initial flight tests through cumulus clouds demonstrated that droplet size distribution could be studied.
Mini AERCam: A Free-Flying Robot for Space Inspection
NASA Technical Reports Server (NTRS)
Fredrickson, Steven
2001-01-01
The NASA Johnson Space Center Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a free-flying camera system for remote viewing and inspection of human spacecraft. The AERCam project team is currently developing a miniaturized version of AERCam known as Mini AERCam, a spherical nanosatellite 7.5 inches in diameter. Mini AERCam development builds on the success of AERCam Sprint, a 1997 Space Shuttle flight experiment, by integrating new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving these productivity-enhancing capabilities in a smaller package depends on aggressive component miniaturization. Technology innovations being incorporated include micro electromechanical system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, rechargeable xenon gas propulsion, a rechargeable lithium ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximate flight-like configuration for laboratory demonstration on an air-bearing table. A pilot-in-the-loop, hardware-in-the-loop simulation of on-orbit navigation and dynamics will complement the air-bearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides on-orbit views of the Space Shuttle and International Space Station unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by space-walking crewmembers.
NASA Astrophysics Data System (ADS)
Michaelis, Dirk; Schroeder, Andreas
2012-11-01
Tomographic PIV has triggered vivid activity, reflected in a large number of publications covering both development of the technique and a wide range of fluid dynamics experiments. The maturing of tomographic PIV allows its application in medium- to large-scale wind tunnels. The limiting factor for wind tunnel application is the small size of the measurement volume, typically about 50 × 50 × 15 mm3. The aim of this study is optimization towards large measurement volumes and high spatial resolution, performing cylinder wake measurements in a 1 meter wind tunnel. The main limiting factors for the volume size are the laser power and the camera sensitivity. Therefore, a high-power laser with 800 mJ per pulse is used together with low-noise sCMOS cameras, mounted in the forward scattering direction to gain intensity from the Mie scattering characteristics. A mirror is used to bounce the light back, so that all cameras are in forward scattering. The achievable particle density grows with the number of cameras, so eight cameras are used for high spatial resolution. These optimizations lead to a volume size of 230 × 200 × 52 mm3 = 2392 cm3, more than 60 times larger than previously. 281 × 323 × 68 vectors are calculated with a spacing of 0.76 mm. The achieved measurement volume size and spatial resolution are regarded as a major step forward in the application of tomographic PIV in wind tunnels. Supported by EU project no. 265695.
Keyboard before Head Tracking Depresses User Success in Remote Camera Control
NASA Astrophysics Data System (ADS)
Zhu, Dingyun; Gedeon, Tom; Taylor, Ken
In remote mining, operators of complex machinery have more tasks or devices to control than they have hands. For example, operating a rock breaker requires two-handed joystick control to position and fire the jackhammer, leaving the camera control either to automatic control or requiring the operator to switch between controls. We modelled such a teleoperated setting by performing experiments using a simple physical game analogue: a half-size table soccer game with two handles. The complex camera angles of the mining application were modelled by obscuring the direct view of the play area and using a Pan-Tilt-Zoom (PTZ) camera. The camera was controlled either via a keyboard or via head tracking, using two different sets of head gestures called “head motion” and “head flicking” for turning camera motion on/off. Our results show that the head motion control provided performance comparable to using a keyboard, while head flicking was significantly worse. In addition, the sequence of use of the three control methods is highly significant. It appears that use of the keyboard first depresses successful use of the head tracking methods, with significantly better results when one of the head tracking methods was used first. Analysis of the qualitative survey data collected supports that the worst-performing method was disliked by participants. Surprisingly, use of that worst method as the first control method significantly enhanced performance using the other two control methods.
A Fast Visible Camera Divertor-Imaging Diagnostic on DIII-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roquemore, A; Maingi, R; Lasnier, C
2007-06-19
In recent campaigns, the Photron Ultima SE fast framing camera has proven to be a powerful diagnostic when applied to imaging divertor phenomena on the National Spherical Torus Experiment (NSTX). Active areas of NSTX divertor research addressed with the fast camera include identification of types of Edge Localized Modes (ELMs) [1], dust migration, impurity behavior and a number of phenomena related to turbulence. To compare such edge and divertor phenomena in low and high aspect ratio plasmas, a multi-institutional collaboration was developed for fast visible imaging on NSTX and DIII-D. More specifically, the collaboration was proposed to compare the NSTX small Type V ELM regime [2] and the residual ELMs observed during Type I ELM suppression with external magnetic perturbations on DIII-D [3]. As part of the collaboration effort, the Photron camera was recently installed on DIII-D with a tangential view similar to the view implemented on NSTX, enabling a direct comparison between the two machines. The rapid implementation was facilitated by utilization of the existing optics that coupled the visible spectral output from the divertor vacuum ultraviolet UVTV system, which has a view similar to the view developed for the divertor tangential TV camera [4]. A remote-controlled filter wheel was implemented, as was the radiation shield required for the DIII-D installation. The installation and initial operation of the camera are described in this paper, and the first images from the DIII-D divertor are presented.
NASA Astrophysics Data System (ADS)
Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.
2018-05-01
A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging-system's calibration parameters. This is essential to validate the repeatability of the parameter estimation, to detect any behavioural changes in the camera/imaging system and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each has different methodological bases, advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE) for single-camera analysis, and 0.07 to 0.19 mm for dual-camera analysis. To the authors' best knowledge, this work is the first to address the topic of DF stability analysis.
Agostini, Denis; Marie, Pierre-Yves; Ben-Haim, Simona; Rouzet, François; Songy, Bernard; Giordano, Alessandro; Gimelli, Alessia; Hyafil, Fabien; Sciagrà, Roberto; Bucerius, Jan; Verberne, Hein J; Slart, Riemer H J A; Lindner, Oliver; Übleis, Christopher; Hacker, Marcus
2016-12-01
The trade-off between resolution and count sensitivity dominates the performance of standard gamma cameras and dictates the need for relatively high doses of radioactivity of the radiopharmaceuticals used in order to limit image acquisition duration. The introduction of cadmium-zinc-telluride (CZT)-based cameras may overcome some of the limitations of conventional gamma cameras. CZT cameras used for the evaluation of myocardial perfusion have been shown to have a higher count sensitivity than conventional single photon emission computed tomography (SPECT) techniques. CZT image quality is further improved by the development of a dedicated three-dimensional iterative reconstruction algorithm, based on maximum likelihood expectation maximization (MLEM), which corrects for the loss in spatial resolution due to the line response function of the collimator. All these innovations significantly reduce imaging time and result in lower radiation exposure for the patient compared with standard SPECT. To guide current and possible future users of the CZT technique for myocardial perfusion imaging, the Cardiovascular Committee of the European Association of Nuclear Medicine, starting from the experience of its members, has decided to examine the current literature regarding procedures and clinical data on CZT cameras. The committee hereby aims 1) to identify the main acquisition protocols; 2) to evaluate the diagnostic and prognostic value of CZT-derived myocardial perfusion; and finally 3) to determine the impact of CZT on radiation exposure.
The Effect of Transition Type in Multi-View 360° Media.
MacQuarrie, Andrew; Steed, Anthony
2018-04-01
360° images and video have become extremely popular formats for immersive displays, due in large part to the technical ease of content production. While many experiences use a single camera viewpoint, an increasing number of experiences use multiple camera locations. In such multi-view 360° media (MV360M) systems, a visual effect is required when the user transitions from one camera location to another. This effect can take several forms, such as a cut or an image-based warp, and the choice of effect may impact many aspects of the experience, including issues related to enjoyment and scene understanding. To investigate the effect of transition types on immersive MV360M experiences, a repeated-measures experiment was conducted with 31 participants. Wearing a head-mounted display, participants explored four static scenes, for which multiple 360° images and a reconstructed 3D model were available. Three transition types were examined: teleport, a linear move through a 3D model of the scene, and an image-based transition using a Möbius transformation. The metrics investigated included spatial awareness, users' movement profiles, transition preference and the subjective feeling of moving through the space. Results indicate that there was no significant difference between transition types in terms of spatial awareness, while significant differences were found for users' movement profiles, with participants taking 1.6 seconds longer to select their next location following a teleport transition. The model and Möbius transitions were significantly better in terms of creating the feeling of moving through the space. Preference was also significantly different, with model and teleport transitions being preferred over Möbius transitions. Our results indicate that trade-offs between transitions will require content creators to think carefully about what aspects they consider to be most important when producing MV360M experiences.
A Bionic Camera-Based Polarization Navigation Sensor
Wang, Daobin; Liang, Huawei; Zhu, Hui; Zhang, Shuai
2014-01-01
Navigation and positioning technology is closely related to our routine life activities, from travel to aerospace. Recently it has been found that Cataglyphis (a kind of desert ant) is able to detect the polarization direction of skylight and navigate according to this information. This paper presents a real-time bionic camera-based polarization navigation sensor. This sensor has two work modes: one is a single-point measurement mode and the other is a multi-point measurement mode. An indoor calibration experiment of the sensor has been done under a beam of standard polarized light. The experiment results show that after noise reduction the accuracy of the sensor can reach up to 0.3256°. It is also compared with GPS and INS (Inertial Navigation System) in the single-point measurement mode through an outdoor experiment. Through time compensation and location compensation, the sensor can be a useful alternative to GPS and INS. In addition, the sensor also can measure the polarization distribution pattern when it works in multi-point measurement mode. PMID:25051029
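The core measurement in a camera-based polarization compass can be phrased as a Stokes-parameter estimate. The sketch below uses the standard three-polarizer (0°, 45°, 90°) formulation; the sensor's actual pixel-level algorithm may differ from this:

```python
import numpy as np

def polarization_angle(i0, i45, i90):
    """Angle of polarization from intensities measured behind linear
    polarizers at 0, 45 and 90 degrees (Stokes parameters Q and U)."""
    q = i0 - i90
    u = 2.0 * i45 - i0 - i90
    return 0.5 * np.degrees(np.arctan2(u, q))
```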
Time-resolved X-ray excited optical luminescence using an optical streak camera
NASA Astrophysics Data System (ADS)
Ward, M. J.; Regier, T. Z.; Vogt, J. M.; Gordon, R. A.; Han, W.-Q.; Sham, T. K.
2013-03-01
We report the development of a time-resolved XEOL (TR-XEOL) system that employs an optical streak camera. We have conducted TR-XEOL experiments at the Canadian Light Source (CLS) operating in single bunch mode with a 570 ns dark gap and 35 ps electron bunch pulse, and at the Advanced Photon Source (APS) operating in top-up mode with a 153 ns dark gap and 33.5 ps electron bunch pulse. To illustrate the power of this technique we measured the TR-XEOL of solid-solution nanopowders of gallium nitride - zinc oxide, and for the first time have been able to resolve near-band-gap (NBG) optical luminescence emission from these materials. Herein we will discuss the development of the streak camera TR-XEOL technique and its application to the study of these novel materials.
Scale Space for Camera Invariant Features.
Puig, Luis; Guerrero, José J; Daniilidis, Kostas
2014-09-01
In this paper we propose a new approach to compute the scale space of any central projection system, such as catadioptric, fisheye or conventional cameras. Since these systems can be explained using a unified model, the single parameter that defines each type of system is used to automatically compute the corresponding Riemannian metric. This metric, combined with the partial differential equations framework on manifolds, allows us to compute the Laplace-Beltrami (LB) operator, enabling the computation of the scale space of any central projection system. Scale space is essential for intrinsic scale selection and neighborhood description in features like SIFT. We perform experiments with synthetic and real images to validate the generalization of our approach to any central projection system. We compare our approach with the best existing methods, showing competitive results in all types of cameras: catadioptric, fisheye and perspective.
Apollo 9 Mission image - S0-65 Multispectral Photography - Georgia
2009-02-19
AS09-26A-3816A (12 March 1969) --- Color infrared photograph of the Atlantic coast of Georgia, Brunswick area, taken on March 12, 1969, by one of the four synchronized cameras of the Apollo 9 Earth Resources Survey SO65 Experiment. At 11:35 a.m. (EST), when this picture was made, the Apollo 9 spacecraft was at an altitude of 102 nautical miles and the sun elevation was 51 degrees above the horizon. The point on Earth's surface at which the four-camera combination was aimed was 31 degrees 16 minutes north latitude and 81 degrees 17 minutes west longitude. The other three cameras used: (B) black and white film with a red filter; (C) black and white infrared film; and (D) black and white film with a green filter.
Design of a CAN bus interface for photoelectric encoder in the spaceflight camera
NASA Astrophysics Data System (ADS)
Sun, Ying; Wan, Qiu-hua; She, Rong-hong; Zhao, Chang-hai; Jiang, Yong
2009-05-01
In order to make a photoelectric encoder usable in a spaceflight camera that adopts CAN bus as its communication method, a CAN bus interface for the photoelectric encoder is designed in this paper. The CAN bus interface hardware circuit of the photoelectric encoder consists of the CAN bus controller SJA1000, the CAN bus transceiver TJA1050 and a single-chip microcontroller. The CAN bus interface control software is written in C. A ten-meter shielded twisted-pair line is used as the transmission medium in the spaceflight camera, and the bit rate is 600 kbps. The experiments show that the photoelectric encoder with a CAN bus interface offers better reliability, real-time performance, transfer rate and transfer distance, overcoming the communication-line shortcomings of classical photoelectric encoder systems. The system works well in an automatic measuring and controlling system.
Kohoutek, Tobias K.; Mautz, Rainer; Wegner, Jan D.
2013-01-01
We present a novel approach for autonomous location estimation and navigation in indoor environments using range images and prior scene knowledge from a GIS database (CityGML). What makes this task challenging is the arbitrary relative spatial relation between the GIS and the Time-of-Flight (ToF) range camera, further complicated by a markerless configuration. We propose to estimate the camera's pose solely based on matching of GIS objects and their detected locations in image sequences. We develop a coarse-to-fine matching strategy that is able to match point clouds without any initial parameters. Experiments with a state-of-the-art ToF point cloud show that our proposed method delivers an absolute camera position with decimeter accuracy, which is sufficient for many real-world applications (e.g., collision avoidance). PMID:23435055
SU-E-T-161: SOBP Beam Analysis Using Light Output of Scintillation Plate Acquired by CCD Camera.
Cho, S; Lee, S; Shin, J; Min, B; Chung, K; Shin, D; Lim, Y; Park, S
2012-06-01
To analyze Bragg-peak beams in an SOBP (spread-out Bragg-peak) beam using a CCD (charge-coupled device) camera - scintillation screen system. We separated each Bragg-peak beam using the light output of a high-sensitivity scintillation material acquired by the CCD camera and compared the results with Bragg-peak beams calculated by Monte Carlo simulation. In this study, the CCD camera - scintillation screen system was constructed with a high-sensitivity scintillation plate (Gd2O2S:Tb), a right-angled prismatic PMMA phantom, and a Marlin F-201B IEEE-1394 CCD camera. The SOBP beam irradiated by the double scattering mode of a PROTEUS 235 proton therapy machine at NCC has an 8 cm width and a 13 g/cm2 range. The gain, dose rate and current of this beam are 50, 2 Gy/min and 70 nA, respectively. We also simulated the light output of the scintillation plate for the SOBP beam using the Geant4 toolkit. We evaluated the light output of the high-sensitivity scintillation plate according to integration time (0.1 - 1.0 sec). The images from the CCD camera during the shortest integration time (0.1 sec) were acquired automatically and randomly. Bragg-peak beams in the SOBP beam were analyzed from the acquired images. The SOBP beam used in this study was then calculated with the Geant4 toolkit, and Bragg-peak beams in the SOBP beam were obtained with the ROOT program. The SOBP beam consists of 13 Bragg-peak beams. The results of the experiment were compared with those of the simulation. We analyzed Bragg-peak beams in the SOBP beam using the light output of the scintillation plate acquired by the CCD camera and compared them with the Geant4 simulation. We plan to study SOBP beam analysis using a more effective image acquisition technique. © 2012 American Association of Physicists in Medicine.
A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration
Rau, Jiann-Yeou; Yeh, Po-Chia
2012-01-01
The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Objects imaged include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656
A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.
Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C
2017-02-07
The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.
Development of an Ultra-Violet Digital Camera for Volcanic Sulfur Dioxide Imaging
NASA Astrophysics Data System (ADS)
Bluth, G. J.; Shannon, J. M.; Watson, I. M.; Prata, F. J.; Realmuto, V. J.
2006-12-01
In an effort to improve monitoring of passive volcano degassing, we have constructed and tested a digital camera for quantifying the sulfur dioxide (SO2) content of volcanic plumes. The camera utilizes a bandpass filter to collect photons in the ultra-violet (UV) region where SO2 selectively absorbs UV light. SO2 is quantified by imaging calibration cells of known SO2 concentrations. Images of volcanic SO2 plumes were collected at four active volcanoes with persistent passive degassing: Villarrica, located in Chile, and Santiaguito, Fuego, and Pacaya, located in Guatemala. Images were collected from distances ranging between 4 and 28 km away, with crisp detection up to approximately 16 km. Camera set-up time in the field ranges from 5 to 10 minutes, and images can be recorded at intervals as short as 10 seconds. Variable in-plume concentrations can be observed, and accurate plume speeds (or rise rates) can readily be determined by tracing individual portions of the plume within sequential images. Initial fluxes computed from camera images require a correction for the effects of environmental light scattered into the field of view. At Fuego volcano, simultaneous measurements of corrected SO2 fluxes with the camera and a Correlation Spectrometer (COSPEC) agreed within 25 percent. Experiments at the other sites were equally encouraging, and demonstrated the camera's ability to detect SO2 under demanding meteorological conditions. This early work has shown great success in imaging SO2 plumes and offers promise for volcano monitoring due to its rapid deployment and data processing capabilities, relatively low cost, and improved interpretation afforded by synoptic plume coverage from a range of distances.
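Flux computations of this kind typically follow the standard transect method: integrate the retrieved SO2 column densities along a line perpendicular to plume travel and multiply by the plume speed derived from sequential images. A minimal sketch; units and names are illustrative, not from the paper:

```python
import numpy as np

def so2_flux(column_densities, pixel_width_m, plume_speed_ms):
    """SO2 flux from one image transect across the plume: integrate the
    column density (molecules/m^2 per pixel) along the transect, then
    multiply by the plume speed to get molecules/s."""
    cross_section = np.sum(column_densities) * pixel_width_m  # molecules/m
    return cross_section * plume_speed_ms                     # molecules/s
```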
NASA Astrophysics Data System (ADS)
Ahn, Y.; Box, J. E.; Balog, J.; Lewinter, A.
2008-12-01
Monitoring Greenland outlet glaciers using remotely sensed data has drawn great attention in the earth science community for decades, and time series analysis of sensor data has provided important information on the variability of glacier flow by detecting speed and thickness changes, tracking features and acquiring model input. Thanks to advancements in commercial digital camera technology and increased solid-state storage, we activated automatic ground-based time-lapse camera stations with high spatial/temporal resolution at west Greenland outlet glaciers and collected one-hour-interval data continuously for more than one year at some, but not all, sites. We believe that important information on ice dynamics is contained in these data and that terrestrial mono-/stereo-photogrammetry can provide the theoretical and practical fundamentals for data processing, along with digital image processing techniques. Time-lapse images over these periods in west Greenland reveal various phenomena. Problematic are rain, snow, fog, shadows, freezing of water on the camera enclosure window, image over-exposure, camera motion, sensor platform drift, fox chewing of instrument cables, and pecking of the plastic window by ravens. Other problems include feature identification, camera orientation, image registration, feature matching in image pairs, and feature tracking. Another obstacle is that non-metric digital cameras contain large distortions that must be compensated for precise photogrammetric use. Further, a massive number of images needs to be processed in a way that is sufficiently computationally efficient. We meet these challenges by 1) identifying problems in possible photogrammetric processes, 2) categorizing them based on feasibility, and 3) clarifying limitations and alternatives, while emphasizing displacement computation and analyzing regional/temporal variability. We experiment with mono- and stereo-photogrammetric techniques with the aid of automatic correlation matching to efficiently handle the enormous data volumes.
Automated tracking of a figure skater by using PTZ cameras
NASA Astrophysics Data System (ADS)
Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi
2009-08-01
In this paper, a system for automated real-time tracking of a figure skater moving on an ice rink by using PTZ cameras is presented. This system is intended to support skating training, for example, as a tool for recording and evaluating motion performances. In the processing procedure of the system, an ice rink region is first extracted from a video image by a region growing method; then one of the hole components in the obtained rink region is extracted as the skater region. If no hole component exists, the skater region is estimated from horizontal and vertical intensity projections of the rink region. Each camera is automatically panned and/or tilted so as to keep the skater region at almost the center of the image, and also zoomed so as to keep the height of the skater region within an appropriate range. In experiments using 5 practical skating videos, the extraction rate of the skater region was almost 90%, and tracking with camera control succeeded in almost all of the cases used here.
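A rough sketch of the pan/tilt/zoom control step described above: the offset of the skater region from the image centre is converted into angular corrections, and the region height drives the zoom. The field-of-view values and the height band are illustrative assumptions, not the paper's parameters:

```python
def ptz_correction(cx, cy, region_h, img_w, img_h,
                   fov_x_deg=60.0, fov_y_deg=40.0, h_band=(0.3, 0.5)):
    """Pan/tilt angles (degrees) that re-centre the skater region, plus a
    zoom command keeping the region height within a target band."""
    pan = (cx - img_w / 2.0) / img_w * fov_x_deg
    tilt = (cy - img_h / 2.0) / img_h * fov_y_deg
    frac = region_h / float(img_h)
    zoom = "in" if frac < h_band[0] else "out" if frac > h_band[1] else "hold"
    return pan, tilt, zoom
```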
The diagnosing of plasmas using spectroscopy and imaging on Proto-MPEX
NASA Astrophysics Data System (ADS)
Baldwin, K. A.; Biewer, T. M.; Crouse Powers, J.; Hardin, R.; Johnson, S.; McCleese, A.; Shaw, G. C.; Showers, M.; Skeen, C.
2015-11-01
The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). The machine is intended to study plasma-material interaction (PMI) physics relevant to future fusion reactors. We tested and learned to use tools of spectroscopy and imaging: a spectrometer, a high-speed camera, an infrared camera, and thermocouples. The spectrometer measures the color and intensity of the light from the plasma. We also used the high-speed camera to see how the magnetic field acts on the plasma and how it is heated to the fourth state of matter. The thermocouples measure the temperature of the objects they are placed against, in this case the end plates of the machine. We also used the infrared camera to see the heat pattern of the plasma on the end plates. Data from these instruments will be shown. This work was supported by U.S. D.O.E. contract DE-AC05-00OR22725 and the Oak Ridge Associated Universities ARC program.
RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information
Di, Kaichang; Zhao, Qiang; Wan, Wenhui; Wang, Yexin; Gao, Yunjun
2016-01-01
In the study of the SLAM problem using an RGB-D camera, depth information and visual information, as the two types of primary measurement data, are rarely tightly coupled during refinement of the camera pose estimation. In this paper, a new method of RGB-D camera SLAM is proposed based on extended bundle adjustment with integrated 2D and 3D information on the basis of a new projection model. First, the geometric relationship between the image plane coordinates and the depth values is constructed through RGB-D camera calibration. Then, 2D and 3D feature points are automatically extracted and matched between consecutive frames to build a continuous image network. Finally, extended bundle adjustment based on the new projection model, which takes both image and depth measurements into consideration, is applied to the image network for high-precision pose estimation. Field experiments show that the proposed method has a notably better performance than the traditional method, and the experimental results demonstrate the effectiveness of the proposed method in improving localization accuracy. PMID:27529256
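A simplified single-frame residual in the spirit of the extended adjustment described above: 2D reprojection errors and depth errors share one pose. This is a sketch under a pinhole model with a small-angle rotation, not the paper's exact projection model; all names are illustrative:

```python
import numpy as np

def residuals(params, pts3d, obs_uv, obs_depth, fx, fy, cx, cy):
    """Stacked 2D reprojection and depth residuals for one frame, all driven
    by the same pose params = [rx, ry, rz, tx, ty, tz] (small-angle rotation)."""
    r, t = params[:3], params[3:]
    Rx = np.array([[0, -r[2], r[1]],
                   [r[2], 0, -r[0]],
                   [-r[1], r[0], 0]])      # R ~ I + [r]_x for small angles
    pc = pts3d @ (np.eye(3) + Rx).T + t    # map points into the camera frame
    u = fx * pc[:, 0] / pc[:, 2] + cx
    v = fy * pc[:, 1] / pc[:, 2] + cy
    return np.concatenate([u - obs_uv[:, 0],
                           v - obs_uv[:, 1],
                           pc[:, 2] - obs_depth])
```

Such a residual vector could be handed to a generic least-squares solver (e.g. scipy.optimize.least_squares) for per-frame pose refinement.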
An HDR imaging method with DTDI technology for push-broom cameras
NASA Astrophysics Data System (ADS)
Sun, Wu; Han, Chengshan; Xue, Xucheng; Lv, Hengyi; Shi, Junxia; Hu, Changhong; Li, Xiangzhi; Fu, Yao; Jiang, Xiaonan; Huang, Liang; Han, Hongyin
2018-03-01
Conventionally, high dynamic-range (HDR) imaging is based on taking two or more pictures of the same scene with different exposures. However, due to the high-speed relative motion between the camera and the scene, this technique is hard to apply to push-broom remote sensing cameras. For HDR imaging in push-broom remote sensing applications, the present paper proposes an innovative method which can generate HDR images without redundant image sensors or optical components. Specifically, this paper adopts an area-array CMOS (complementary metal oxide semiconductor) sensor with digital-domain time-delay-integration (DTDI) technology for imaging, instead of adopting more than one row of image sensors, thereby taking more than one picture with different exposures. A new HDR image can then be generated by fusing the two original images with a simple algorithm. In the experiment, the dynamic range (DR) of the image increased by 26.02 dB. The proposed method is proved to be effective and has potential in other imaging applications where there is relative motion between the camera and the scene.
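The quoted 26.02 dB gain is consistent with extending the usable signal range by a factor of 20 under the usual 20·log10 definition of image dynamic range (an inference from the number, not a statement from the paper); a one-line check:

```python
import numpy as np

def dr_gain_db(ratio):
    """DR gain in dB for extending the full-scale range by `ratio`."""
    return 20.0 * np.log10(ratio)

print(dr_gain_db(20))  # 26.0206..., matching the figure quoted above
```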
NASA Astrophysics Data System (ADS)
Zhao, Jiaye; Wen, Huihui; Liu, Zhanwei; Rong, Jili; Xie, Huimin
2018-05-01
Three-dimensional (3D) deformation measurements are a key issue in experimental mechanics. In this paper, a displacement field correlation (DFC) method to measure centrosymmetric 3D dynamic deformation using a single camera is proposed for the first time. When 3D deformation information is collected by a camera at a tilted angle, the measured displacement fields are coupling fields of both the in-plane and out-of-plane displacements. The features of the coupling field are analysed in detail, and a decoupling algorithm based on DFC is proposed. The 3D deformation to be measured can be inverted and reconstructed using only one coupling field. The accuracy of this method was validated by a high-speed impact experiment that simulated an underwater explosion. The experimental results show that the approach proposed in this paper can be used in 3D deformation measurements with higher sensitivity and accuracy, and is especially suitable for high-speed centrosymmetric deformation. In addition, this method avoids the non-synchronisation problem associated with using a pair of high-speed cameras, as is common in 3D dynamic measurements.
High-accuracy 3D measurement system based on multi-view and structured light
NASA Astrophysics Data System (ADS)
Li, Mingyue; Weng, Dongdong; Li, Yufeng; Zhang, Longbin; Zhou, Haiyun
2013-12-01
3D surface reconstruction is one of the most important topics in Spatial Augmented Reality (SAR). Using structured light is a simple and rapid method to reconstruct objects. In order to improve the precision of 3D reconstruction, we present a high-accuracy multi-view 3D measurement system based on Gray code and phase shift. We use a camera and a light projector that casts structured light patterns on the objects. In this system, we use only one camera to take photos of the left and right sides of the object, respectively. In addition, we use VisualSFM to process the relationships between the perspectives, so the camera calibration can be omitted and the positions at which to place the camera are no longer limited. We also set an appropriate exposure time to make the scenes covered by Gray-code patterns more recognizable. All of the points above make the reconstruction more precise. We conducted experiments on different kinds of objects, and a large number of experimental results verify the feasibility and high accuracy of the system.
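The phase decoding at the heart of Gray-code plus phase-shift systems is compact enough to sketch. Below is the standard four-step wrapped-phase formula and Gray-code unwrapping; the paper's exact pattern count and step size are not specified here, so this is the textbook variant:

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Wrapped phase from a standard 4-step phase shift (patterns offset by
    90 degrees): phi = atan2(I4 - I2, I1 - I3), computed per pixel."""
    return np.arctan2(i4 - i2, i1 - i3)

def absolute_phase(phi, fringe_order):
    """The Gray-code decoded fringe order k resolves the 2*pi ambiguity."""
    return phi + 2.0 * np.pi * fringe_order
```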
Using a digital video camera to examine coupled oscillations
NASA Astrophysics Data System (ADS)
Greczylo, T.; Debowska, E.
2002-07-01
In our previous paper (Debowska E, Jakubowicz S and Mazur Z 1999 Eur. J. Phys. 20 89-95), thanks to the use of an ultrasound distance sensor, experimental verification of the solution of Lagrange equations for longitudinal oscillations of the Wilberforce pendulum was shown. In this paper the sensor and a digital video camera were used to monitor and measure the changes of both the pendulum's coordinates (vertical displacement and angle of rotation) simultaneously. The experiments were performed with the aid of the integrated software package COACH 5. Fourier analysis in Microsoft® Excel 97 was used to find normal modes in each case of the measured oscillations. Comparison of the results with those presented in our previous paper (as given above) leads to the conclusion that a digital video camera is a powerful tool for measuring coupled oscillations of a Wilberforce pendulum. The most important conclusion is that a video camera is able to do something more than merely register interesting physical phenomena - it can be used to perform measurements of physical quantities at an advanced level.
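A modern equivalent of the Fourier analysis step (done in the paper with Microsoft Excel 97): the amplitude spectrum of a sampled displacement record, whose dominant peaks mark the normal-mode frequencies. Variable names are illustrative:

```python
import numpy as np

def oscillation_spectrum(z, fs):
    """Amplitude spectrum of a sampled coordinate (e.g. the pendulum's
    vertical displacement extracted from video frames at rate fs, in Hz)."""
    z = np.asarray(z, dtype=float)
    z -= z.mean()                          # remove the static offset
    amp = np.abs(np.fft.rfft(z)) / len(z)
    freqs = np.fft.rfftfreq(len(z), d=1.0 / fs)
    return freqs, amp
```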
NASA Astrophysics Data System (ADS)
Williams, B. P.; Kjellstrand, B.; Jones, G.; Reimuller, J. D.; Fritts, D. C.; Miller, A.; Geach, C.; Limon, M.; Hanany, S.; Kaifler, B.; Wang, L.; Taylor, M. J.
2017-12-01
PMC-Turbo is a NASA long-duration, high-altitude balloon mission that will deploy 7 high-resolution cameras to image polar mesospheric clouds (PMCs) and measure gravity wave breakdown and turbulence. The mission has been enhanced by the addition of the DLR Balloon Lidar Experiment (BOLIDE) and an OH imager from Utah State University. This instrument suite will provide high horizontal and vertical resolution of the wave-modified PMC structure along a several thousand kilometer flight track. We have requested a flight from Kiruna, Sweden to Canada in June 2017 or McMurdo Base, Antarctica in Dec 2017. Three of the PMC camera systems were deployed on an aircraft and two tomographic ground sites for the High Level campaign in Canada in June/July 2017. On several nights the cameras observed PMCs with strong gravity wave breaking signatures. One PMC camera will piggyback on the Super Tiger mission scheduled to be launched in Dec 2017 from McMurdo, so we will obtain PMC images and wave/turbulence data from both the northern and southern hemispheres.
Video quality of 3G videophones for telephone cardiopulmonary resuscitation.
Tränkler, Uwe; Hagen, Oddvar; Horsch, Alexander
2008-01-01
We simulated a cardiopulmonary resuscitation (CPR) scene with a manikin and used two 3G videophones on the caller's side to transmit video to a laptop PC. Five observers (two doctors with experience in emergency medicine and three paramedics) evaluated the video. They judged whether the manikin was breathing and whether they would give advice for CPR; they also graded the confidence of their decision-making. Breathing was only visible from certain orientations of the videophones, at distances below 150 cm with good illumination and a still background. Since the phones produced a degradation in colours and shadows, detection of breathing mainly depended on moving contours. Low camera positioning produced better results than having the camera high up. Darkness, shaking of the camera and a moving background made detection of breathing almost impossible. The video from the two 3G videophones that were tested was of sufficient quality for telephone CPR provided that camera orientation, distance, illumination and background were carefully chosen. Thus it seems possible to use 3G videophones for emergency calls involving CPR. However, further studies on the required video quality in different scenarios are necessary.
New Approach for Environmental Monitoring and Plant Observation Using a Light-Field Camera
NASA Astrophysics Data System (ADS)
Schima, Robert; Mollenhauer, Hannes; Grenzdörffer, Görres; Merbach, Ines; Lausch, Angela; Dietrich, Peter; Bumberger, Jan
2015-04-01
The aim of gaining a better understanding of ecosystems and of the processes in nature accentuates the need to observe these processes at higher temporal and spatial resolution. In the field of environmental monitoring, an inexpensive, field-applicable imaging technique for deriving three-dimensional information about plants and vegetation would be a decisive contribution to understanding the interactions and dynamics of ecosystems. This is particularly true for the monitoring of plant growth and the frequently mentioned lack of morphological information about plants, e.g. plant height, vegetation canopy, leaf position or leaf arrangement. Therefore, an innovative and inexpensive light-field (plenoptic) camera, the Lytro LF, and a stereo vision system based on two industrial cameras were tested and evaluated as possible measurement tools for this monitoring purpose. The use of a light-field camera offers the promising possibility of obtaining three-dimensional information from a single shot, without any additional requirements during the field measurements, which represents a substantial methodological improvement in environmental research and monitoring. Since the Lytro LF was designed as a daily-life consumer camera, it supports neither depth or distance estimation nor external triggering by default, so various technical modifications and a calibration routine had to be worked out during the preliminary study. As a result, the light-field camera proved suitable as a depth and distance measurement tool with a measuring range of approximately one meter. This confirms the assumption that a light-field camera has the potential to be a promising measurement tool for environmental monitoring purposes, especially given the low methodological effort required in the field. Within the framework of the Global Change Experimental Facility project, funded by the Helmholtz Centre for Environmental Research, and its large-scale field experiments investigating the influence of climate change on different forms of land use, both techniques were installed and evaluated in a long-term experiment on a pilot-scale maize field in late 2014. On this basis it was possible to track the growth of the plants over time, with good agreement with the manual measurements carried out weekly. In addition, the experiment showed that the light-field approach is applicable to monitoring crop growth under field conditions, although it is limited to close-range applications. Since this work was intended as a proof of concept, further research is recommended, especially with respect to the automation and evaluation of data processing. Altogether, this study is addressed to researchers as groundwork for improving the use of light-field imaging for monitoring plant growth dynamics and for the three-dimensional modeling of plants under field conditions.
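For the stereo vision system used as a reference, a minimal disparity-to-depth sketch with OpenCV follows; the file names, focal length, and baseline are placeholders, not values from the study:

    import cv2
    import numpy as np

    # Load a rectified stereo pair (placeholder file names)
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block-matching disparity; parameters must be tuned to the scene
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point

    # Depth from disparity: Z = f * B / d (f in pixels, B in meters)
    f_px, baseline_m = 1200.0, 0.10    # assumed calibration values
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = f_px * baseline_m / disparity[valid]

A depth map of this kind, taken at regular intervals, is what allows plant height and canopy structure to be tracked over time.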
NASA Astrophysics Data System (ADS)
Colbert, Fred
2013-05-01
There has been a significant increase in the number of in-house infrared thermographic predictive maintenance programs for electrical/mechanical inspections as compared to out-sourced programs using hired consultants. In addition, the number of infrared consulting companies offering out-sourced programs has also grown exponentially. These market segments include building envelope (commercial and residential), refractory, boiler evaluations, etc. These surges are driven by two main factors: 1. The low cost of investment in the equipment (the cost of cameras and peripherals continues to decline). 2. Novel marketing campaigns by the camera manufacturers, who are looking to sell more cameras into an otherwise saturated market. The key characteristic of these campaigns is to oversimplify the applications and understate the significance of the technical training, specific skills and experience needed to obtain the risk-lowering information that a facility manager needs. These camera-selling campaigns focus on the simplicity of taking a thermogram but ignore the critical factors of what it actually takes to perform and manage a credible, valid IR program, which in turn exposes everyone to tremendous liability. As in-house programs and out-sourced consulting services compete head to head for share of a constricted market, the price of out-sourced consulting services drops. The consequence of this approach is that something must be compromised to stay competitive on price, and that compromise is the knowledge, technical skills and experience of the thermographer. This ends up being reflected in the skill sets of in-house thermographers as well. This oversimplification of skill and experience is producing the "Perfect Storm" for infrared thermography, for both in-house and out-sourced programs.
The ISS Fluids Integrated Rack (FIR): a Summary of Capabilities
NASA Astrophysics Data System (ADS)
Gati, F.; Hill, M. E.
2002-01-01
The Fluids Integrated Rack (FIR) is a modular, multi-user scientific research facility that will fly in the U.S. laboratory module, Destiny, of the International Space Station (ISS). The FIR will be one of the two racks that will make up the Fluids and Combustion Facility (FCF) - the other being the Combustion Integrated Rack (CIR). The ISS will provide the FCF with the necessary resources, such as power and cooling. While the ISS crew will be available for experiment operations, their time will be limited. The FCF is, therefore, being designed for autonomous and remote-control operations. Control of the FCF will be primarily through the Telescience Support Center (TSC) at the Glenn Research Center. The FCF is being designed to accommodate a wide range of combustion and fluids physics experiments within the ISS resources and constraints. The primary mission of the FIR, however, is to accommodate experiments from four major fluids physics disciplines: Complex Fluids; Multiphase Flow and Heat Transfer; Interfacial Phenomena; and Dynamics and Stability. The design of the FIR is flexible enough to accommodate experiments from other science disciplines such as Biotechnology. The FIR's flexibility is a result of the large volume dedicated to experimental hardware, easily re-configurable diagnostics that allow for unique experiment configurations, and its customizable software. The FIR will utilize six major subsystems to accommodate this broad scope of fluids physics experiments. The major subsystems are: structural, environmental, electrical, gaseous, command and data management, and imagers and illumination. Within the rack, the FIR's structural subsystem provides an optics-bench-type mechanical interface for the precise mounting of experimental hardware, including optical components. The back of the bench is populated with FIR avionics packages and light sources. The interior of the rack is isolated from the cabin by two rack doors that are hinged near the top and bottom of the rack. Transmission of microgravity disturbances to and from the rack is minimized through the Active Rack Isolation System (ARIS). The environmental subsystem will utilize air and water to remove heat generated by facility and experimental hardware. The air will be circulated throughout the rack and will be cooled by an air-water heat exchanger. Water will be used directly to cool some of the FIR components and will also be available to cool experiment hardware as required. The electrical subsystem includes the Electrical Power Control Unit (EPCU), which provides 28 VDC and 120 VDC power to the facility and the experiment hardware. The EPCU will also provide power management and control functions, as well as fault protection capabilities. The FIR will provide access to the ISS gaseous nitrogen and vacuum systems. These systems are available to support experiment operations such as purging experimental cells, creating flows within experimental cells, and providing dry conditions where needed. The FIR Command and Data Management Subsystem (CDMS) provides command and data handling for both facility and experiment hardware. The Input Output Processor (IOP) provides the overall command and data management functions for the rack, including downlinking or writing data to removable drives. The IOP will also monitor the health and status of the rack subsystems. The Image Processing and Storage Units (IPSU) will perform diagnostic control and image data acquisition functions.
An IPSU will be able to control a digital camera, receive image data from that camera, and process/compress the image data as necessary. The Fluids Science and Avionics Package (FSAP) will provide the primary control over an experiment. The FSAP contains various computer boards/cards that will perform data and control functions. To support imaging needs, cameras and illumination sources will be available to the investigator. Both color analog and black-and-white digital cameras with lenses are expected. These cameras will be capable of high resolution and, separately, frame rates up to 32,000 frames per second. Lenses for these cameras will provide both microscopic and macroscopic views. The FIR will provide two illumination sources, a 532 nm Nd:YAG laser and a white light source, both with adjustable power output. The FIR systems are being designed to maximize the amount of science that can be done on-orbit. Experiments will be designed for efficient operation. Each individual experiment must determine the best configuration of facility capabilities and resources, augmented with specific experiment hardware. Efficient operations will be accomplished via a combination of on-orbit physical component change-outs or processing by the crew, and software updates via ground commanding or by the crew. Careful coordination between ground and on-orbit personnel regarding the on-orbit storage and downlinking of image data will also be very important.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pourmoghaddas, Amir, E-mail: apour@ottawaheart.ca; Wells, R. Glenn
Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors, which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail: an increased number of unscattered photons detected with reduced energy. Modified energy-based scatter correction methods can be implemented, but their level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to 99mTc myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 99mTc-SPECT acquisitions. The ratio of activity concentrations in organ compartments resembled a clinical 99mTc-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic “lesion” was placed inside the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake, and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.). A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE Healthcare), followed by a CT scan for attenuation correction (AC). For each experiment, separate images were created: reconstruction with no corrections (NC), with AC, with attenuation and dual-energy window (DEW) scatter correction (ACSC), with attenuation and partial volume correction (PVC) applied (ACPVC), and with attenuation, scatter, and PVC applied (ACSCPVC). The DEW SC method used was modified to account for the presence of the low-energy tail. Results: T-tests showed that the mean error in absolute activity measurement was reduced significantly for AC and ACSC compared to NC for both (hot and cold) datasets (p < 0.001) and that ACSC, ACPVC, and ACSCPVC show significant reductions in mean differences compared to AC (p ≤ 0.001) without increasing the uncertainty (p > 0.4). The effect of SC and PVC was significant in reducing errors over AC in both datasets (p < 0.001 and p < 0.01, respectively), resulting in a mean error of 5% ± 4%. Conclusions: Quantitative measurements of cardiac 99mTc activity are achievable using attenuation and scatter corrections with the authors’ dedicated cardiac SPECT camera. Partial volume corrections offer improvements in measurement accuracy in AC images and in ACSC images with elevated background activity; however, these improvements are not significant in ACSC images with low background activity.
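The dual-energy-window idea referred to above can be sketched as follows; the scatter multiplier k = 0.5 is the conventional starting point for 99mTc, the window widths are illustrative, and the authors' modification for the CZT low-energy tail is not reproduced here:

    import numpy as np

    def dew_scatter_correct(photopeak, scatter_window, k=0.5,
                            w_peak=28.0, w_scatter=28.0):
        """Dual-energy-window scatter correction.

        photopeak, scatter_window: projection images (counts) acquired in the
        photopeak and in a lower scatter energy window.
        k: empirical scatter multiplier; w_peak/w_scatter normalize for
        unequal window widths (keV).
        Returns scatter-corrected photopeak projections, clipped at zero.
        """
        scatter_estimate = k * scatter_window * (w_peak / w_scatter)
        return np.clip(photopeak - scatter_estimate, 0.0, None)

    # Hypothetical projections
    peak = np.random.poisson(100.0, (64, 64)).astype(float)
    scat = np.random.poisson(30.0, (64, 64)).astype(float)
    corrected = dew_scatter_correct(peak, scat)

The corrected projections are then fed to the reconstruction, which is what makes the method attractive clinically: it needs only one extra energy window and a subtraction.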
The influence of the in situ camera calibration for direct georeferencing of aerial imagery
NASA Astrophysics Data System (ADS)
Mitishita, E.; Barrios, R.; Centeno, J.
2014-11-01
The direct determination of exterior orientation parameters (EOPs) of aerial images via GNSS/INS technologies is an essential prerequisite in photogrammetric mapping nowadays. Although direct sensor orientation technologies provide a high degree of automation, the accuracy of the results depends on the quality of a group of parameters that accurately models the condition of the system at the moment the job is performed. One sub-group of parameters (lever-arm offsets and boresight misalignments) models the position and orientation of the sensors with respect to the IMU body frame, since not all sensors can occupy the same position and orientation on the airborne platform. Another sub-group models the internal characteristics of the sensor (interior orientation parameters, IOPs). A system calibration procedure has been recommended by studies worldwide to obtain accurate mounting and sensor parameters for direct sensor orientation applications. Commonly, mounting and sensor characteristics are not stable; they can vary under different flight conditions. System calibration requires a geometric arrangement of the flight and/or control points to decouple correlated parameters, which is not available in a conventional photogrammetric flight. Considering this difficulty, this study investigates the feasibility of in situ camera calibration to improve the accuracy of direct georeferencing of aerial images. The camera calibration uses a minimal image block, extracted from the conventional photogrammetric flight, and a control point arrangement. A digital Vexcel UltraCam XP camera connected to a POS AV™ system was used to acquire two photogrammetric image blocks. The blocks were flown in different directions, with opposite flight lines. In situ calibration procedures computing different sets of IOPs were performed, and their results are analyzed and used in photogrammetric experiments. The IOPs from the in situ camera calibration significantly improve the accuracy of the direct georeferencing. The results of the experiments are shown and discussed.
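The role of the mounting parameters can be made concrete with a small sketch: the camera's exterior orientation follows from the GNSS/INS solution, the lever-arm offset, and the boresight rotation. All numbers below are illustrative, not values from the paper:

    import numpy as np

    def rotation_zyx(roll, pitch, yaw):
        """Body-to-mapping-frame rotation from roll/pitch/yaw (radians)."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    # GNSS/INS solution for the IMU body frame (illustrative values)
    X_imu = np.array([500000.0, 7180000.0, 1200.0])   # mapping-frame position
    R_imu = rotation_zyx(0.01, -0.02, 1.57)           # body-to-mapping rotation

    # Mounting parameters, the quantities refined by system calibration
    lever_arm = np.array([0.15, -0.05, 0.30])         # camera offset in body frame
    R_boresight = rotation_zyx(0.001, 0.002, -0.0005) # camera-to-body misalignment

    # Exterior orientation of the camera
    X_cam = X_imu + R_imu @ lever_arm
    R_cam = R_imu @ R_boresight

Errors in the lever arm shift every image position by a fixed body-frame offset, while boresight errors grow with flying height, which is why the calibration geometry matters.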
Mountainous Crater Rim on Mars
2013-10-17
This is a screen shot from a high-definition simulated movie of Mojave Crater on Mars, based on images taken by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter.
What you see is what you get: webcam placement influences perception and social coordination.
Thomas, Laura E; Pemstein, Daniel
2015-01-01
Building on a well-established link between elevation and social power, we demonstrate that, when perceptual information is limited, subtle visual cues can shape people's representations of others and, in turn, alter strategic social behavior. A cue to elevation (unrelated to physical size) provided by the placement of web cameras in a video chat biased individuals' perceptions of a partner's height (Experiment 1) and shaped the extent to which they made decisions in their own self-interest: participants tended to coordinate their behavior in a manner that benefitted the preferences of a partner pictured from a low camera angle during a game of asymmetric coordination (Experiment 2). Our results suggest that people are vulnerable to the influence of a limited viewpoint when forming representations of others, in a manner that shapes their strategic choices.
Autonomous Rock Tracking and Acquisition from a Mars Rover
NASA Technical Reports Server (NTRS)
Maimone, Mark W.; Nesnas, Issa A.; Das, Hari
1999-01-01
Future Mars exploration missions will perform two types of experiments: science instrument placement for close-up measurement, and sample acquisition for return to Earth. In this paper we describe algorithms we developed for these tasks, and demonstrate them in field experiments using a self-contained Mars rover prototype, the Rocky 7 rover. Our algorithms perform visual servoing on an elevation map instead of on image features, because the latter are subject to abrupt scale changes during the approach. This allows us to compensate for the poor odometry that results from motion on loose terrain. We demonstrate the successful grasp of a 5 cm long rock over 1 m away using 103-degree field-of-view stereo cameras, and the placement of a flexible mast on a rock outcropping over 5 m away using 43-degree field-of-view stereo cameras.
ASPIRE - Airborne Spectro-Polarization InfraRed Experiment
NASA Astrophysics Data System (ADS)
DeLuca, E.; Cheimets, P.; Golub, L.; Madsen, C. A.; Marquez, V.; Bryans, P.; Judge, P. G.; Lussier, L.; McIntosh, S. W.; Tomczyk, S.
2017-12-01
Direct measurements of coronal magnetic fields are critical for taking the next step in active region and solar wind modeling and for building the next generation of physics-based space-weather models. We are proposing a new airborne instrument to make these key observations. Building on the successful Airborne InfraRed Spectrograph (AIR-Spec) experiment for the 2017 eclipse, we will design and build a spectro-polarimeter to measure coronal magnetic field during the 2019 South Pacific eclipse. The new instrument will use the AIR-Spec optical bench and the proven pointing, tracking, and stabilization optics. A new cryogenic spectro-polarimeter will be built focusing on the strongest emission lines observed during the eclipse. The AIR-Spec IR camera, slit jaw camera and data acquisition system will all be reused. The poster will outline the optical design and the science goals for ASPIRE.
NASA Astrophysics Data System (ADS)
Michele, Mangiameli; Giuseppe, Mussumeci; Salvatore, Zito
2017-07-01
Structure From Motion (SFM) is a technique applied to a series of photographs of an object that returns a 3D reconstruction made up of points in space (point clouds). This research aims at comparing the results of the SFM approach with the results of 3D laser scanning in terms of the density and accuracy of the model. The experience was conducted by surveying several architectural elements (walls and portals of historical buildings) both with a 3D laser scanner of the latest generation and with an amateur photographic camera. The point clouds acquired by the laser scanner and those acquired by the photo camera have been systematically compared. In particular, we present the experience carried out on the "Don Diego Pappalardo Palace" site in Pedara (Catania, Sicily).
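A minimal sketch of the kind of cloud-to-cloud comparison performed here, using nearest-neighbour distances between the two point clouds; the random arrays stand in for the loaded SFM and laser-scanner clouds:

    import numpy as np
    from scipy.spatial import cKDTree

    # Placeholder point clouds: (N, 3) arrays of x, y, z coordinates
    sfm_cloud = np.random.rand(10000, 3)
    laser_cloud = np.random.rand(50000, 3)

    # For every SFM point, distance to the nearest laser-scanner point
    tree = cKDTree(laser_cloud)
    distances, _ = tree.query(sfm_cloud)

    print(f"mean c2c distance: {distances.mean():.4f}")
    print(f"95th percentile:   {np.percentile(distances, 95):.4f}")
    # Density can be compared as points per unit area on a common surface patch

Taking the laser scan as the reference, the distribution of these distances summarizes the accuracy of the SFM model, while point counts over matched patches summarize its density.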
1973-09-01
This Earth Resource Experiment Package (EREP) photograph of the Uncompahgre area of Colorado was electronically acquired in September of 1973 by the Multispectral Scanner, Skylab Experiment S192. EREP images were used to analyze the vegetation conditions and landscape characteristics of this area. Skylab's Earth sensors played the dual roles of gathering information about the planet and perfecting instruments and techniques for future satellites and manned stations. An array of six fixed cameras, another for high resolution, and the astronauts' handheld cameras photographed surface features. Other instruments, recording on magnetic tape, measured the reflectivity of plants, soils, and water. Radar measured the altitude of land and water surfaces. The sensors' objectives were to survey croplands and forests, identify soils and rock types, map natural features and urban developments, detect sediments and the spread of pollutants, study clouds and the sea, and determine the extent of snow and ice cover.
Measuring SO2 ship emissions with an ultraviolet imaging camera
NASA Astrophysics Data System (ADS)
Prata, A. J.
2014-05-01
Over the last few years fast-sampling ultraviolet (UV) imaging cameras have been developed for use in measuring SO2 emissions from industrial sources (e.g. power plants; typical emission rates ~ 1-10 kg s-1) and natural sources (e.g. volcanoes; typical emission rates ~ 10-100 kg s-1). Generally, measurements have been made from sources rich in SO2 with high concentrations and emission rates. In this work, for the first time, a UV camera has been used to measure the much lower concentrations and emission rates of SO2 (typical emission rates ~ 0.01-0.1 kg s-1) in the plumes from moving and stationary ships. Some innovations and trade-offs have been made so that estimates of the emission rates and path concentrations can be retrieved in real time. Field experiments were conducted at Kongsfjord in Ny Ålesund, Svalbard, where SO2 emissions from cruise ships were measured, and at the port of Rotterdam, Netherlands, where emissions from more than 10 different container and cargo ships were measured. In all cases SO2 path concentrations could be estimated and emission rates determined, either by measuring ship plume speeds simultaneously using the camera or by using surface wind speed data from an independent source. Accuracies were compromised in some cases by the presence of particulates in some ship emissions and by the restriction to single-filter UV imagery, a requirement for fast sampling (> 10 Hz) with a single camera. Despite the ease of use and the ability to determine SO2 emission rates with the UV camera system, the limitations in accuracy and precision suggest that the system can only be used under rather ideal circumstances and that the technology currently needs further development to serve as a method for monitoring ship emissions for regulatory purposes. A dual-camera system, or a single dual-filter camera, is required in order to properly correct for the effects of particulates in ship plumes.
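The retrieval described above reduces to integrating the SO2 column amounts along a transect across the plume and multiplying by the plume speed. A schematic version, with synthetic numbers chosen only to land in the ship-emission range quoted above:

    import numpy as np

    # Hypothetical slant column densities along a transect across the plume,
    # retrieved from the UV camera images (kg per square metre)
    column_kg_m2 = np.array([0.0, 5e-5, 1.5e-4, 2.5e-4, 1.2e-4, 4e-5, 0.0])
    position_m = np.linspace(0.0, 120.0, len(column_kg_m2))  # transect coordinates

    # Integrated cross-plume burden per unit plume length (kg per metre)
    burden_kg_m = np.trapz(column_kg_m2, position_m)

    # Plume speed from image cross-correlation or surface wind data (m/s)
    plume_speed_m_s = 4.0

    emission_rate_kg_s = burden_kg_m * plume_speed_m_s
    print(f"SO2 emission rate ~ {emission_rate_kg_s:.3f} kg/s")

The sensitivity of the result to plume speed is why measuring the speed directly from successive camera frames, rather than assuming a wind value, matters for accuracy.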
An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories
NASA Astrophysics Data System (ADS)
Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji
2008-11-01
We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frame memories. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting the CCD storages, which record the video images, to the photodiodes of the individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used during imaging experiments and for some television programs. To increase ultrahigh-speed capture times, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of its two outputs. A CCD driving unit was developed to drive the two CCDs separately, with the recording period switched sequentially between them. This increased the recording capacity to 288 images, double that of the conventional ultrahigh-speed camera. One problem was that the beam splitter reduced the incident light on each CCD by a factor of two. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by approximately a factor of two. By combining the beam splitter with the microlens array, it was possible to build an ultrahigh-speed color video camera that has 288 frame memories without decreasing the camera's light sensitivity.
Comparison of experimental three-band IR detection of buried objects and multiphysics simulations
NASA Astrophysics Data System (ADS)
Rabelo, Renato C.; Tilley, Heather P.; Catterlin, Jeffrey K.; Karunasiri, Gamani; Alves, Fabio D. P.
2018-04-01
A buried-object detection system composed of an LWIR, an MWIR and an SWIR camera, along with a set of ground and ambient temperature sensors, was constructed and tested. The objects were buried in a 1.2x1x0.3 m3 sandbox, and surface temperature (using the LWIR and MWIR cameras) and reflection (using the SWIR camera) were recorded throughout the day. Two objects (aluminum and Teflon), each with a volume of about 2.5x10-4 m3, were placed at varying depths during the measurements. Ground temperature sensors buried at three different depths measured the vertical temperature profile within the sandbox, while a weather station recorded the ambient temperature and solar radiation intensity. Images from the three cameras were acquired simultaneously at five-minute intervals over many days. An algorithm to postprocess and combine the images was developed in order to maximize the probability of detection by identifying thermal anomalies (temperature contrast) resulting from the presence of the buried object in an otherwise homogeneous medium. A simplified detection metric based on contrast differences was established to allow evaluation of the image processing method. Finite element simulations were performed, reproducing the experiment conditions and, where possible, incorporating data from the actual measurements. Comparisons between experimental and simulation results were performed, and the simulation parameters were adjusted until the images generated by the two methods matched, with the aim of gaining insight into the buried material properties. Preliminary results show great potential for the detection of shallow-buried objects such as land mines and IEDs, and for possible identification by fitting finite-element-generated maps to measured surface maps.
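A simple version of a contrast-based detection metric of the kind described above; the frame, region masks, and anomaly are synthetic, and the authors' actual metric may differ:

    import numpy as np

    def thermal_contrast(frame, target_mask, background_mask):
        """Contrast of a suspected anomaly against the local background,
        expressed in background standard deviations."""
        t_target = frame[target_mask].mean()
        bg = frame[background_mask]
        return (t_target - bg.mean()) / bg.std()

    # Hypothetical LWIR frame (surface temperatures in kelvin)
    frame = 300.0 + 0.5 * np.random.randn(240, 320)
    frame[100:120, 150:170] += 1.5          # synthetic warm anomaly

    target = np.zeros(frame.shape, bool); target[100:120, 150:170] = True
    background = ~target

    print(f"contrast: {thermal_contrast(frame, target, background):.1f} sigma")

Tracking this quantity over the diurnal cycle is what reveals the times of day when a buried object is most detectable.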
Optimal energy-splitting method for an open-loop liquid crystal adaptive optics system.
Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Liu, Yonggang; Peng, Zenghui; Yang, Qingyun; Meng, Haoran; Yao, Lishuang; Xuan, Li
2012-08-13
A waveband-splitting method is proposed for open-loop liquid crystal adaptive optics systems (LC AOSs). The proposed method extends the working waveband, splits the energy flexibly, and improves detection capability. A simulation analysis is performed for a waveband in the range of 350 nm to 950 nm. The results show that the optimal energy split is 7:3 between the wavefront sensor (WFS) and the imaging camera, with the waveband split into 350 nm to 700 nm and 700 nm to 950 nm, respectively. A validation experiment is conducted by measuring the signal-to-noise ratio (SNR) of the WFS and the imaging camera. The results indicate that, with the waveband-splitting method, the SNR of the WFS remains approximately equal to that of the imaging camera as the intensity varies. In contrast, the SNR of the WFS differs significantly from that of the imaging camera under a polarized-beam-splitter energy-splitting scheme. Therefore, the waveband-splitting method is more suitable for an open-loop LC AOS. An adaptive correction experiment is also performed on a 1.2-meter telescope. A star with a visual magnitude of 4.45 is observed and corrected, achieving an angular resolution of 0.31″. A double star with a combined visual magnitude of 4.3 is observed as well, and its two components are resolved after correction. The results indicate that the proposed method can significantly improve the detection capability of an open-loop LC AOS.
Ghosh, Debashis; Michalopoulos, Nikolaos V; Davidson, Timothy; Wickham, Fred; Williams, Norman R; Keshtgar, Mohammed R
2017-04-01
Access to a nuclear medicine department for sentinel node imaging remains an issue in a number of hospitals in the UK and many parts of the world. Sentinella® is a portable imaging camera used intra-operatively to produce real-time visual localisation of sentinel lymph nodes. Sentinella® was tested in a controlled laboratory environment at our centre, and we report our experience of the first use of this technology in the UK. Moreover, preoperative scintigrams of the axilla were obtained in 144 patients undergoing sentinel node biopsy using a conventional gamma camera (CGC). Sentinella® scans were performed intra-operatively to correlate with the pre-operative scintigram and to determine the presence of any residual hot node after the axilla was deemed clear based on the silence of the hand-held gamma probe. Sentinella® detected significantly more nodes than the CGC (p < 0.0001). Sentinella® picked up extra nodes in 5/144 cases after the axilla was found silent using the hand-held gamma probe. In 2/144 cases, extra nodes detected by Sentinella® confirmed the presence of tumour cells, leading to a complete axillary clearance. Sentinella® is a reliable technique for intra-operative localisation of radioactive nodes. It provides increased nodal visualisation rates compared to static scintigram imaging and proves to be an important tool for harvesting all hot sentinel nodes. This portable gamma camera can definitely replace the use of conventional lymphoscintigrams, saving time and money for both patients and the health system. Copyright © 2016 Elsevier Ltd. All rights reserved.
2016-05-24
ISS047e132751 (05/24/2016) --- Russian cosmonaut Oleg Skripochka readies a high-power camera for the DUBRAVA experiment, which is testing methods for tracking natural and man-made impacts on forest cover from the International Space Station. The experiment will initially use visual and spectrometric tools, with the potential for adding hyperspectral and infrared equipment in the future.