The Effects of Radiation on Imagery Sensors in Space
NASA Technical Reports Server (NTRS)
Mathis, Dylan
2007-01-01
Recent experience using high definition video on the International Space Station reveals camera pixel degradation due to particle radiation to be a much more significant problem with high definition cameras than with standard definition video. Although it may at first appear that increased pixel density on the imager is the logical explanation for this, the ISS implementations of high definition suggest a more complex mix of causal and mediating factors. The degree of damage seems to vary from one type of camera to another, and this variation prompts a reconsideration of the possible factors in pixel loss, such as imager size, number of pixels, pixel aperture ratio, imager type (CCD or CMOS), method of error correction/concealment, and the method of compression used for recording or transmission. The problem of imager pixel loss due to particle radiation is not limited to out-of-atmosphere applications. Since particle radiation increases with altitude, it is not surprising to find anecdotal evidence that video cameras subjected to many hours of airline travel show an increased incidence of pixel loss. This is already evident in some standard definition video applications, and pixel loss due to particle radiation only stands to become a more salient issue given the continued diffusion of high definition video cameras in the marketplace.
Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C
2012-01-01
Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.
The High Definition Earth Viewing (HDEV) Payload
NASA Technical Reports Server (NTRS)
Muri, Paul; Runco, Susan; Fontanot, Carlos; Getteau, Chris
2017-01-01
The High Definition Earth Viewing (HDEV) payload enables long-term experimentation with four commercial-off-the-shelf (COTS) high definition video cameras mounted on the exterior of the International Space Station. The payload enables testing of cameras in the space environment. The HDEV cameras transmit imagery continuously to an encoder that then sends the video signal via Ethernet through the space station for downlink. The encoder, cameras, and other electronics are enclosed in a box pressurized to approximately one atmosphere, containing dry nitrogen, to provide a level of protection to the electronics from the space environment. The encoded video format supports streaming live video of Earth for viewing online. Camera sensor types include charge-coupled device and complementary metal-oxide semiconductor. Received imagery data are analyzed on the ground to evaluate camera sensor performance. Since payload deployment, minimal degradation of imagery quality has been observed. The HDEV payload continues to operate by live streaming and analyzing imagery. Results from the experiment reduce risk in the selection of cameras that could be considered for future use on the International Space Station and other spacecraft. This paper discusses the payload development, end-to-end architecture, experiment operation, resulting image analysis, and future work.
Kelly, Christopher R; Hogle, Nancy J; Landman, Jaime; Fowler, Dennis L
2008-09-01
The use of high-definition cameras and monitors during minimally invasive procedures can provide the surgeon and operating team with more than twice the resolution of standard definition systems. Although this dramatic improvement in visualization offers numerous advantages, the adoption of high definition cameras in the operating room can be challenging because new recording equipment must be purchased, and several new technologies are required to edit and distribute video. The purpose of this review article is to provide an overview of the popular methods for recording, editing, and distributing high-definition video. This article discusses the essential technical concepts of high-definition video, reviews the different kinds of equipment and methods most often used for recording, and describes several options for video distribution.
Advanced High-Definition Video Cameras
NASA Technical Reports Server (NTRS)
Glenn, William
2007-01-01
A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.
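As a minimal illustration of the kind of per-frame processing the abstract names (the actual camera implements this in VHDL on FPGAs; the sketch below is ours, in Python, purely to show the operations):

```python
import numpy as np

# Minimal sketch (ours, not the camera's VHDL/FPGA pipeline) of two
# functions the abstract names: color balance and contrast adjustment.
def color_balance(rgb: np.ndarray) -> np.ndarray:
    """Gray-world balance: scale channels so they share a common mean."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    return np.clip(rgb * (means.mean() / means), 0, 255)

def adjust_contrast(rgb: np.ndarray, gain: float = 1.2) -> np.ndarray:
    """Stretch pixel values about mid-gray."""
    return np.clip((rgb - 128.0) * gain + 128.0, 0, 255)

frame = np.random.randint(0, 256, (1080, 1920, 3)).astype(np.float64)
out = adjust_contrast(color_balance(frame))
print(out.shape, out.min(), out.max())
```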
2014-05-07
View of the High Definition Earth Viewing (HDEV) flight assembly installed on the exterior of the Columbus European Laboratory module. The image was released by an astronaut on Twitter. The High Definition Earth Viewing (HDEV) experiment places four commercially available HD cameras on the exterior of the space station and uses them to stream live video of Earth for viewing online. The cameras are enclosed in a temperature-specific housing and are exposed to the harsh radiation of space. Analysis of the effect of space on the video quality, over the time HDEV is operational, may help engineers decide which cameras are the best types to use on future missions. High school students helped design some of the cameras' components through the High Schools United with NASA to Create Hardware (HUNCH) program, and student teams operate the experiment.
High-definition television evaluation for remote handling task performance
NASA Astrophysics Data System (ADS)
Fujita, Y.; Omori, E.; Hayashi, S.; Draper, J. V.; Herndon, J. N.
Described are experiments designed to evaluate the impact of HDTV (High-Definition Television) on the performance of typical remote tasks. The experiments described in this paper compared the performance of four operators using HDTV with their performance while using other television systems. The experiments included four television systems: (1) high-definition color television, (2) high-definition monochromatic television, (3) standard-resolution monochromatic television, and (4) standard-resolution stereoscopic monochromatic television. The stereo system accomplished stereoscopy by displaying two cross-polarized images, one reflected by a half-silvered mirror and one seen through the mirror. Observers wore spectacles with cross-polarized lenses so that the left eye received only the view from the left camera and the right eye received only the view from the right camera.
Astronauts Ashby and Coleman practice with High Definition Video Camera
1999-04-21
S99-05085 (April 1999) --- In preparation for a STS-93 detailed test objective (DTO), astronauts Jeffrey S. Ashby, pilot, and Catherine G. (Cady) Coleman, mission specialist, train with a high-definition television camcorder. The camera will be carried onboard the Space Shuttle Columbia for their scheduled July mission. The rehearsal with the DTO 700-17A hardware took place in the Crew Compartment Trainer (CCT) in the Systems Integration Facility at the Johnson Space Center (JSC).
4K x 2K pixel color video pickup system
NASA Astrophysics Data System (ADS)
Sugawara, Masayuki; Mitani, Kohji; Shimamoto, Hiroshi; Fujita, Yoshihiro; Yuyama, Ichiro; Itakura, Keijirou
1998-12-01
This paper describes the development of an experimental super-high-definition color video camera system. During the past several years there has been much interest in super-high-definition images as the next generation image media. One of the difficulties in implementing a super-high-definition motion imaging system is constructing the image-capturing section (camera). Even state-of-the-art semiconductor technology cannot realize an image sensor with enough pixels and a sufficient output data rate for super-high-definition images. The present study is an attempt to fill the gap in this respect. The authors intend to solve the problem by using a new imaging method in which four HDTV sensors are attached to new color-separation optics so that their pixel sample pattern forms a checkerboard. A series of imaging experiments demonstrates that this technique is an effective approach to capturing super-high-definition moving images in the present situation, where no image sensors exist for such images.
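The checkerboard idea can be sketched as follows; the two-sensor simplification, per-sensor resolution, and interleaving scheme below are our assumptions for illustration only (the actual camera uses four sensors on custom color-separation optics):

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): combine two
# sensors whose sampling grids are offset so their samples interleave
# like the black and white squares of a checkerboard.
H, W = 1080, 1920                      # per-sensor resolution (assumed)
sensor_a = np.random.rand(H, W)        # stand-in for the first sensor
sensor_b = np.random.rand(H, W)        # stand-in for the offset sensor

# On the doubled-density output grid, sensor A fills the "black" squares
# and sensor B the "white" squares of the checkerboard.
out = np.zeros((H, 2 * W))
out[:, 0::2] = sensor_a                # even output columns from sensor A
out[:, 1::2] = sensor_b                # odd output columns from sensor B
# Shifting alternate rows by one column turns the column interleave into
# a true checkerboard sample pattern (wraparound at column 0 ignored).
out[1::2, :] = np.roll(out[1::2, :], 1, axis=1)

print(out.shape)                       # (1080, 3840): doubled sample count
```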
Karimov, Jamshid H; Horvath, David; Sunagawa, Gengo; Byram, Nicole; Moazami, Nader; Golding, Leonard A R; Fukamachi, Kiyotaka
2015-12-01
Post-explant evaluation of the continuous-flow total artificial heart in preclinical studies can be extremely challenging because of the device's unique architecture. Determining the exact location of tissue regeneration, neointima formation, and thrombus is particularly important. In this report, we describe our first successful experience with visualizing the Cleveland Clinic continuous-flow total artificial heart using a custom-made high-definition miniature camera.
1920x1080 pixel color camera with progressive scan at 50 to 60 frames per second
NASA Astrophysics Data System (ADS)
Glenn, William E.; Marcinka, John W.
1998-09-01
For over a decade, the broadcast industry, the film industry, and the computer industry have had a long-range objective to originate high definition images with progressive scan. This produces images with better vertical resolution and far fewer artifacts than interlaced scan. Computers almost universally use progressive scan. The broadcast industry has resisted switching from interlace to progressive because no cameras were available in that format with the 1920 × 1080 resolution that had obtained international acceptance for high definition program production. The camera described in this paper produces an output in that format derived from two 1920 × 1080 CCD sensors produced by Eastman Kodak.
Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish
2018-01-01
Purpose: The purpose of this study is to describe the use of a commercial digital single-lens reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited, and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) with the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching. PMID:29283133
NASA Astrophysics Data System (ADS)
Tanada, Jun
1992-08-01
Ikegami has been involved in broadcast equipment ever since it was established as a company. In conjunction with NHK it has brought forth countless television cameras, from black-and-white cameras to color cameras, HDTV cameras, and special-purpose cameras. In the early days of HDTV (high-definition television, also known as "High Vision") cameras, the specifications were different from those of present-day system cameras, and cameras using all kinds of components, having different arrangements of components, and having different appearances were developed into products, with time spent on experimentation, design, fabrication, adjustment, and inspection. But recently the know-how built up thus far in components, printed circuit boards, and wiring methods has been incorporated in camera fabrication, making it possible to make HDTV cameras by methods similar to those for the present system. In addition, more-efficient production, lower costs, and better after-sales service are being achieved by using the same circuits, components, mechanism parts, and software for both HDTV cameras and cameras that operate by the present system.
NASA Technical Reports Server (NTRS)
Grubbs, Rodney
2016-01-01
The first live High Definition Television (HDTV) from a spacecraft was in November 2006, nearly ten years before the 2016 SpaceOps Conference. Much has changed since then. Now, live HDTV from the International Space Station (ISS) is routine. HDTV cameras stream live video views of the Earth from the exterior of the ISS every day on UStream, and HDTV has even flown around the Moon on a Japanese Space Agency spacecraft. A great deal has been learned about the operations applicability of HDTV and high resolution imagery since that first live broadcast. This paper will discuss the current state of real-time and file-based HDTV and higher resolution video for space operations. A potential roadmap will be provided for further development and innovation in high-resolution digital motion imagery, including gaps in technology enablers, especially for deep space and unmanned missions. Specific topics to be covered in the paper will include: an update on radiation tolerance and performance of various camera types and sensors and ramifications for the future applicability of these types of cameras for space operations; practical experience with downlinking very large imagery files with breaks in link coverage; ramifications of larger camera resolutions like Ultra-High Definition, 6K, and 8K in space applications; enabling technologies such as the High Efficiency Video Codec, Bundle Streaming Delay Tolerant Networking, optical communications, Bayer-pattern sensors, and other similar innovations; and likely future operations scenarios for deep space missions with extreme latency and intermittent communications links.
The role of three-dimensional high-definition laparoscopic surgery for gynaecology.
Usta, Taner A; Gundogdu, Elif C
2015-08-01
This article reviews the potential benefits and disadvantages of new three-dimensional (3D) high-definition laparoscopic surgery for gynaecology. With the new-generation 3D high-definition laparoscopic vision systems (LVSs), operation time and the learning period are reduced and the procedural error margin is decreased. New-generation 3D high-definition LVSs reduce operation time for both novice and experienced surgeons. Headache, eye fatigue, and nausea reported with first-generation systems are no different than with two-dimensional (2D) LVSs. Negative aspects of the system that need to be improved include its higher cost, the obligation to wear glasses, and the big, heavy camera probe of some of the devices. Depth loss in tissues with 2D LVSs and the associated adverse events can be eliminated with 3D high-definition LVSs. By virtue of a faster learning curve, shorter operation time, reduced error margin, and the lack of side effects reported by surgeons with first-generation systems, 3D LVSs seem to be strong competition for classical laparoscopic imaging systems. Thanks to technological advancements, using lighter and smaller cameras and glasses-free monitors lies in the near future.
Adin, Christopher A; Royal, Kenneth D; Moore, Brandon; Jacob, Megan
2018-06-13
To evaluate the safety and usability of a wearable, waterproof high-definition camera/case for acquisition of surgical images by sterile personnel. An in vitro study tested the efficacy of biodecontamination of the camera cases; usability for intraoperative image acquisition was assessed in clinical procedures. Two waterproof GoPro Hero4 Silver camera cases were inoculated by immersion in media containing Staphylococcus pseudintermedius or Escherichia coli at ≥5.50E+07 colony forming units/mL. Cases were biodecontaminated by manual washing and hydrogen peroxide plasma sterilization. Cultures were obtained by swab and by immersion in enrichment broth before and after each contamination/decontamination cycle (n = 4). The cameras were then used by a surgeon in clinical procedures in either headband or handheld mode and were assessed for usability according to 5 user characteristics. Cultures of all poststerilization swabs were negative. One of 8 cultures was positive in enrichment broth, consistent with a low level of contamination in 1 sample. Usability of the camera was considered poor in headband mode, with limited battery life, inability to control camera functions, and lack of zoom function affecting image quality. Handheld operation of the camera by the primary surgeon improved usability, allowing close-up still and video intraoperative image acquisition. Vaporized hydrogen peroxide sterilization of this camera case was considered effective for biodecontamination. Handheld operation improved usability for intraoperative image acquisition. Vaporized hydrogen peroxide sterilization and thorough manual washing of a waterproof camera may provide cost-effective intraoperative image acquisition for documentation purposes. © 2018 The American College of Veterinary Surgeons.
Yoshida, Eriko; Terada, Shin-Ichiro; Tanaka, Yasuyo H; Kobayashi, Kenta; Ohkura, Masamichi; Nakai, Junichi; Matsuzaki, Masanori
2018-05-29
In vivo wide-field imaging of neural activity with a high spatio-temporal resolution is a challenge in modern neuroscience. Although two-photon imaging is very powerful, high-speed imaging of the activity of individual synapses is mostly limited to a field of approximately 200 µm on a side. Wide-field one-photon epifluorescence imaging can reveal neuronal activity over a field of ≥1 mm² at a high speed, but is not able to resolve a single synapse. Here, to achieve a high spatio-temporal resolution, we combine an 8K ultra-high-definition camera with spinning-disk one-photon confocal microscopy. This combination allowed us to image a 1 mm² field with a pixel resolution of 0.21 µm at 60 fps. When we imaged motor cortical layer 1 in a behaving head-restrained mouse, calcium transients were detected in presynaptic boutons of thalamocortical axons sparsely labeled with GCaMP6s, although their density was lower than when two-photon imaging was used. The effects of out-of-focus fluorescence changes on calcium transients in individual boutons appeared minimal. Axonal boutons with highly correlated activity were detected over the 1 mm² field, and were probably distributed on multiple axonal arbors originating from the same thalamic neuron. This new microscopy with an 8K ultra-high-definition camera should serve to clarify the activity and plasticity of widely distributed cortical synapses.
Streak camera receiver definition study
NASA Technical Reports Server (NTRS)
Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.
1990-01-01
Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.
Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Della Rocca, David; Della Rocca, Robert C; Andron, Aleza; Jain, Vandana
2015-10-01
To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.
Mountainous Crater Rim on Mars
2013-10-17
This is a screen shot from a high-definition simulated movie of Mojave Crater on Mars, based on images taken by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter.
NASA Technical Reports Server (NTRS)
Tarbell, Theodore D.
1993-01-01
Technical studies of the feasibility of balloon flights of the former Spacelab instrument, the Solar Optical Universal Polarimeter, with a modern charge-coupled device (CCD) camera, to study the structure and evolution of solar active regions at high resolution, are reviewed. In particular, different CCD cameras were used at ground-based solar observatories with the SOUP filter, to evaluate their performance and collect high resolution images. High resolution movies of the photosphere and chromosphere were successfully obtained using four different CCD cameras. Some of this data was collected in coordinated observations with the Yohkoh satellite during May-July, 1992, and they are being analyzed scientifically along with simultaneous X-ray observations.
Potential Utility of a 4K Consumer Camera for Surgical Education in Ophthalmology.
Ichihashi, Tsunetomo; Hirabayashi, Yutaka; Nagahara, Miyuki
2017-01-01
Purpose. We evaluated the potential utility of a cost-effective 4K consumer video system for surgical education in ophthalmology. Setting. Tokai University Hachioji Hospital, Tokyo, Japan. Design. Experimental study. Methods. The eyes that underwent cataract surgery, glaucoma surgery, vitreoretinal surgery, or oculoplastic surgery between February 2016 and April 2016 were recorded with 17.2 million pixels using a high-definition digital video camera (LUMIX DMC-GH4, Panasonic, Japan) and with 0.41 million pixels using a conventional analog video camera (MKC-501, Ikegami, Japan). Motion pictures of two cases for each surgery type were evaluated and classified as having poor, normal, or excellent visibility. Results. The 4K video system was easily installed by reading the instructions without technical expertise. The details of the surgical picture in the 4K system were highly improved over those of the conventional pictures, and the visual effects for surgical education were significantly improved. Motion pictures were stored for approximately 11 h with 512 GB SD memory. The total price of this system was USD 8000, which is a very low price compared with a commercial system. Conclusion. This 4K consumer camera was able to record and play back with high-definition surgical field visibility on the 4K monitor and is a low-cost, high-performing alternative for surgical facilities.
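A quick back-of-the-envelope check (our calculation, not from the paper) shows the reported storage figure is self-consistent: filling 512 GB in roughly 11 hours implies a recording bitrate of about 100 Mbit/s, a typical consumer 4K rate:

```python
# Back-of-the-envelope check (our calculation, not from the paper):
# filling 512 GB of SD memory in about 11 hours implies the bitrate.
capacity_bits = 512e9 * 8            # 512 GB expressed in bits
duration_s = 11 * 3600               # 11 hours in seconds
bitrate_mbps = capacity_bits / duration_s / 1e6
print(f"{bitrate_mbps:.0f} Mbit/s")  # ~103 Mbit/s
```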
Toy, Dustin L.; Roche, Erin; Dovichin, Colin M.
2017-01-01
Many bird species of conservation concern have behavioral or morphological traits that make it difficult for researchers to determine if the birds have been uniquely marked. Those traits can also increase the difficulty for researchers to decipher those markers. As a result, it is a priority for field biologists to develop time- and cost-efficient methods to resight uniquely marked individuals, especially when efforts are spread across multiple States and study areas. The Interior Least Tern (Sternula antillarum athalassos) is one such difficult-to-resight species; its tendency to mob perceived threats, such as observing researchers, makes resighting marked individuals difficult without physical recapture. During 2015, uniquely marked adult Interior Least Terns were resighted and identified by small, inexpensive, high-definition portable video cameras deployed for 29-min periods adjacent to nests. Interior Least Tern individuals were uniquely identified 84% (n = 277) of the time. This method also provided the ability to link individually marked adults to a specific nest, which can aid in generational studies and understanding heritability for difficult-to-resight species. Mark-recapture studies on such species may be prone to sparse encounter data that can result in imprecise or biased demographic estimates and ultimately flawed inferences. High-definition video cameras may prove to be a robust method for generating reliable demographic estimates.
Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M
2006-02-01
Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.
An Acoustic Charge Transport Imager for High Definition Television
NASA Technical Reports Server (NTRS)
Hunt, William D.; Brennan, Kevin; May, Gary; Glenn, William E.; Richardson, Mike; Solomon, Richard
1999-01-01
This project, over its term, included funding to a variety of companies and organizations. In addition to Georgia Tech these included Florida Atlantic University with Dr. William E. Glenn as the P.I., Kodak with Mr. Mike Richardson as the P.I. and M.I.T./Polaroid with Dr. Richard Solomon as the P.I. The focus of the work conducted by these organizations was the development of camera hardware for High Definition Television (HDTV). The focus of the research at Georgia Tech was the development of new semiconductor technology to achieve a next generation solid state imager chip that would operate at a high frame rate (170 frames per second), operate at low light levels (via the use of avalanche photodiodes as the detector element) and contain 2 million pixels. The actual cost required to create this new semiconductor technology was probably at least 5 or 6 times the investment made under this program and hence we fell short of achieving this rather grand goal. We did, however, produce a number of spin-off technologies as a result of our efforts. These include, among others, improved avalanche photodiode structures, significant advancement of the state of understanding of ZnO/GaAs structures and significant contributions to the analysis of general GaAs semiconductor devices and the design of Surface Acoustic Wave resonator filters for wireless communication. More of these will be described in the report. The work conducted at the partner sites resulted in the development of 4 prototype HDTV cameras. The HDTV camera developed by Kodak uses the Kodak KAI-2091M high-definition monochrome image sensor. This progressively-scanned charge-coupled device (CCD) can operate at video frame rates and has 9 µm square pixels. The photosensitive area has a 16:9 aspect ratio and is consistent with the "Common Image Format" (CIF). It features an active image area of 1928 horizontal by 1084 vertical pixels and has a 55% fill factor. The camera is designed to operate in continuous mode with an output data rate of 5 MHz, which gives a maximum frame rate of 4 frames per second. The MIT/Polaroid group developed two cameras under this program. The cameras have effectively four times the current video spatial resolution and at 60 frames per second are double the normal video frame rate.
Electronic magnification for astronomical camera tubes
NASA Technical Reports Server (NTRS)
Vine, J.; Hansen, J. R.; Pietrzyk, J. P.
1974-01-01
Definitions, test schemes, and analyses used to provide variable magnification in the image section of the television sensor for large space telescopes are outlined. Experimental results show a definite form of magnetic field distribution is necessary to achieve magnification in the range 3X to 4X. Coil systems to establish the required field shapes were built, and both image intensifiers and camera tubes were operated at high magnification. The experiments confirm that such operation is practical and can provide satisfactory image quality. The main problem with such a system was identified as heating of the photocathode due to concentration of coil power dissipation in that vicinity. Suggestions for overcoming this disadvantage are included.
4K Video of Colorful Liquid in Space
2015-10-09
Once again, astronauts on the International Space Station dissolved an effervescent tablet in a floating ball of water, and captured images using a camera capable of recording four times the resolution of normal high-definition cameras. The higher resolution images and higher frame rate videos can reveal more information when used on science investigations, giving researchers a valuable new tool aboard the space station. This footage is one of the first of its kind. The cameras are being evaluated for capturing science data and vehicle operations by engineers at NASA's Marshall Space Flight Center in Huntsville, Alabama.
NASA Astrophysics Data System (ADS)
Tyczka, Dale R.; Wright, Robert; Janiszewski, Brian; Chatten, Martha Jane; Bowen, Thomas A.; Skibba, Brian
2012-06-01
Nearly all explosive ordnance disposal robots in use today employ monoscopic standard-definition video cameras to relay live imagery from the robot to the operator. With this approach, operators must rely on shadows and other monoscopic depth cues in order to judge distances and object depths. Alternatively, they can contact an object with the robot's manipulator to determine its position, but that approach carries with it the risk of detonation from unintentionally disturbing the target or nearby objects. We recently completed a study in which high-definition (HD) and stereoscopic video cameras were used in addition to conventional standard-definition (SD) cameras in order to determine if higher resolutions and/or stereoscopic depth cues improve operators' overall performance of various unmanned ground vehicle (UGV) tasks. We also studied the effect that the different vision modes had on operator comfort. A total of six different head-aimed vision modes were used including normal-separation HD stereo, SD stereo, "micro" (reduced separation) SD stereo, HD mono, and SD mono (two types). In general, the study results support the expectation that higher resolution and stereoscopic vision aid UGV teleoperation, but the degree of improvement was found to depend on the specific task being performed; certain tasks derived notably more benefit from improved depth perception than others. This effort was sponsored by the Joint Ground Robotics Enterprise under Robotics Technology Consortium Agreement #69-200902 T01. Technical management was provided by the U.S. Air Force Research Laboratory's Robotics Research and Development Group at Tyndall AFB, Florida.
Astronaut Walz on flight deck with IMAX camera
1996-11-04
STS079-362-023 (16-26 Sept. 1996) --- Astronaut Carl E. Walz, mission specialist, positions the IMAX camera for a shoot on the flight deck of the Space Shuttle Atlantis. The IMAX project is a collaboration among NASA, the Smithsonian Institution's National Air and Space Museum, IMAX Systems Corporation and the Lockheed Corporation to document in motion picture format significant space activities and promote NASA's educational goals using the IMAX film medium. This system, developed by IMAX of Toronto, uses specially designed 65mm cameras and projectors to record and display very high definition color motion pictures which, accompanied by six-channel high fidelity sound, are displayed on screens in IMAX and OMNIMAX theaters that are up to ten times larger than a conventional screen, producing a feeling of "being there." The 65mm photography is transferred to 70mm motion picture films for showing in IMAX theaters. IMAX cameras have been flown on 14 previous missions.
Spacewalking in Ultra High Definition
2017-07-21
Ever wonder what the spacewalker sees while you're looking at him or her? Here's your answer, courtesy of NASA astronaut Jack Fischer. This Ultra High Definition clip shows Fischer outside the International Space Station during a spacewalk on Expedition 51 in May 2017, and the view from a small camera attached to his spacesuit at the same time.
Tuschy, Benjamin; Berlit, Sebastian; Brade, Joachim; Sütterlin, Marc; Hornemann, Amadeus
2014-01-01
To investigate the clinical assessment of a full high-definition (HD) three-dimensional robot-assisted laparoscopic device in gynaecological surgery. This study included 70 women who underwent gynaecological laparoscopic procedures. Demographic parameters, type and duration of surgery and perioperative complications were analyzed. Fifteen surgeons were postoperatively interviewed regarding their assessment of this new system with a standardized questionnaire. The clinical assessment revealed that three-dimensional full-HD visualisation is comfortable and improves spatial orientation and hand-to-eye coordination. The majority of the surgeons stated they would prefer a three-dimensional system to a conventional two-dimensional device and stated that the robotic camera arm led to more relaxed working conditions. Three-dimensional laparoscopy is feasible, comfortable and well-accepted in daily routine. The three-dimensional visualisation improves surgeons' hand-to-eye coordination, intracorporeal suturing and fine dissection. The combination of full-HD three-dimensional visualisation with the robotic camera arm results in very high image quality and stability.
High-Definition Television (HDTV) Images for Earth Observations and Earth Science Applications
NASA Technical Reports Server (NTRS)
Robinson, Julie A.; Holland, S. Douglas; Runco, Susan K.; Pitts, David E.; Whitehead, Victor S.; Andrefouet, Serge M.
2000-01-01
As part of Detailed Test Objective 700-17A, astronauts acquired Earth observation images from orbit using a high-definition television (HDTV) camcorder. Here we provide a summary of qualitative findings following completion of tests during missions STS (Space Transportation System)-93 and STS-99. We compared HDTV imagery stills to images taken using payload bay video cameras, a Hasselblad film camera, and an electronic still camera. We also evaluated the potential for motion video observations of changes in sunlight and the use of multi-aspect viewing to image aerosols. Spatial resolution and color quality are far superior in HDTV images compared to National Television System Committee (NTSC) video images. Thus, HDTV provides the first viable option for video-based remote sensing observations of Earth from orbit. Although under ideal conditions HDTV images have less spatial resolution than medium-format film cameras, such as the Hasselblad, under some conditions on orbit the HDTV images acquired compared favorably with the Hasselblad's. Of particular note was the quality of color reproduction in the HDTV images. HDTV and the electronic still camera (ESC) were not compared with matched fields of view, and so spatial resolution could not be compared for the two image types. However, the color reproduction of the HDTV stills was truer than the colors in the ESC images. As HDTV becomes the operational video standard for the Space Shuttle and Space Station, HDTV has great potential as a source of Earth-observation data. Planning for the conversion from NTSC to HDTV video standards should include planning for Earth data archiving and distribution.
NASA Astrophysics Data System (ADS)
Risteiu, M.; Lorincz, A.; Dobra, R.; Dasic, P.; Andras, I.; Roventa, M.
2017-06-01
This paper presents experimental results from research on the inspection of metallic structures using a high-definition camera controlled by a high-capability processing system. A dedicated ARM Cortex-M4 initializes the ARM Cortex-M0 subsystem for image acquisition. Programming options are then used to select and tune the pattern types of interest (abnormal situations such as metal cracks or discontinuities), to enable overexposure highlighting, to adjust camera brightness/exposure and minimum brightness, and to adjust the pattern's teach threshold. The proposed system has been tested under normal lighting conditions at a typical site.
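The abstract does not give algorithm details, so as a purely hypothetical illustration of threshold-based crack screening of the kind described, one frame could be processed with standard OpenCV operations like this:

```python
import cv2

# Hypothetical single-frame crack screening; the real system runs in
# firmware on the ARM Cortex-M4/M0 pair with its own tuning parameters.
frame = cv2.imread("metal_surface.png", cv2.IMREAD_GRAYSCALE)
assert frame is not None, "test image not found"

# Normalize brightness/exposure, then highlight intensity discontinuities.
frame = cv2.equalizeHist(frame)
edges = cv2.Canny(frame, threshold1=50, threshold2=150)

# "Teach threshold": flag the frame when edge density exceeds a limit
# that would be learned from known-good samples (value is an assumption).
edge_density = float(edges.mean()) / 255.0
TEACH_THRESHOLD = 0.02
if edge_density > TEACH_THRESHOLD:
    print(f"possible crack/discontinuity (edge density {edge_density:.3f})")
```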
High dynamic range CMOS (HDRC) imagers for safety systems
NASA Astrophysics Data System (ADS)
Strobel, Markus; Döttling, Dietmar
2013-04-01
The first part of this paper describes the high dynamic range CMOS (HDRC®) imager - a special type of CMOS image sensor with logarithmic response. The powerful property of a high dynamic range (HDR) image acquisition is detailed by mathematical definition and measurement of the optoelectronic conversion function (OECF) of two different HDRC imagers. Specific sensor parameters will be discussed including the pixel design for the global shutter readout. The second part will give an outline on the applications and requirements of cameras for industrial safety. Equipped with HDRC global shutter sensors SafetyEYE® is a high-performance stereo camera system for safe three-dimensional zone monitoring enabling new and more flexible solutions compared to existing safety guards.
Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.
Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H
2015-02-01
Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.
2012-06-08
Definitions: As an operational definition of 'social media,' I include Facebook, Twitter, YouTube, and social networking sites not specifically ... the aforementioned social networking sites. As an operational definition of 'security operations' for the purposes of this paper, I use the ... the existence of camera phones, Facebook, Twitter, and other social networking sites; individuals' behavior changed with the advent of the Internet ...
NASA Astrophysics Data System (ADS)
Waltham, N.; Beardsley, S.; Clapp, M.; Lang, J.; Jerram, P.; Pool, P.; Auker, G.; Morris, D.; Duncan, D.
2017-11-01
Solar Dynamics Observatory (SDO) is imaging the Sun in many wavelengths near simultaneously and with a resolution ten times higher than the average high-definition television. In this paper we describe our innovative systems approach to the design of the CCD cameras for two of SDO's remote sensing instruments, the Atmospheric Imaging Assembly (AIA) and the Helioseismic and Magnetic Imager (HMI). Both instruments share use of a custom-designed 16 million pixel science-grade CCD and common camera readout electronics. A prime requirement was for the CCD to operate with significantly lower drive voltages than before, motivated by our wish to simplify the design of the camera readout electronics. Here, the challenge lies in the design of circuitry to drive the CCD's highly capacitive electrodes and to digitize its analogue video output signal with low noise and to high precision. The challenge is greatly exacerbated when forced to work with only fully space-qualified, radiation-tolerant components. We describe our systems approach to the design of the AIA and HMI CCD and camera electronics, and the engineering solutions that enabled us to comply with both mission and instrument science requirements.
A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i
Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.
2015-01-01
We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity.
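A minimal sketch of what such a capture script might look like follows; the paths, interval, and GPS time source are placeholders (the system clock stands in for the GPS module), since the actual USGS scripts are not given in the abstract:

```python
#!/usr/bin/env python3
"""Minimal time-lapse capture sketch for a Raspberry Pi camera system.

Illustrative only: the real monitoring scripts are custom, and a real
deployment would read its timestamps from the GPS module.
"""
import subprocess
import time
from pathlib import Path

IMAGE_DIR = Path("/home/pi/images")   # assumed storage location
INTERVAL_S = 300                      # one frame every 5 minutes (assumed)

def gps_timestamp() -> str:
    # Placeholder: read from the GPS unit in a real deployment so that
    # stamps stay accurate independent of any network connection.
    return time.strftime("%Y%m%d-%H%M%S", time.gmtime())

def capture_one() -> None:
    IMAGE_DIR.mkdir(parents=True, exist_ok=True)
    out = IMAGE_DIR / f"kilauea_{gps_timestamp()}.jpg"
    # raspistill is the stock Raspberry Pi still-capture utility.
    subprocess.run(["raspistill", "-o", str(out)], check=True)

if __name__ == "__main__":
    while True:
        capture_one()
        time.sleep(INTERVAL_S)
```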
Coaxial fundus camera for opthalmology
NASA Astrophysics Data System (ADS)
de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.
2015-09-01
A fundus camera for ophthalmology is a high definition device which needs to provide low light illumination of the human retina, high resolution in the retina, and reflection-free images [1]. Those constraints make its optical design very sophisticated, but the most difficult to comply with is the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and a poor alignment makes the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is performed by a LED ring, coaxial with the optical system and composed of IR or visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easily operated and very compact.
Low-cost, high-performance and efficiency computational photometer design
NASA Astrophysics Data System (ADS)
Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly
2014-05-01
Researchers at the University of Alaska Anchorage and University of Colorado Boulder have built a low-cost, high-performance, and efficient drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible spectrum cameras with near- to long-wavelength infrared detectors and high resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time-correlate read-out, capture, and image-process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard definition CCD (Charge Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time-correlated to megapixel high definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low cost, high efficiency field monitoring applications that need multispectral and three dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh environments in the arctic, including volcanic plumes, ice formation, and arctic marine life.
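As one hypothetical way to realize the time correlation described (the real system does this in FPGA hardware, not in Python), each high-definition snapshot can be matched to the nearest frame timestamps of the two continuous 30 Hz streams:

```python
import bisect

# Illustrative sketch (our assumption of one software analogue): match an
# HD snapshot to the nearest frame timestamps of two 30 Hz streams.
def nearest(timestamps: list[float], t: float) -> float:
    """Return the timestamp in the sorted list closest to t."""
    i = bisect.bisect_left(timestamps, t)
    candidates = timestamps[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda x: abs(x - t))

stream_a = [k / 30.0 for k in range(300)]           # 10 s of visible frames
stream_b = [k / 30.0 + 0.005 for k in range(300)]   # IR stream, slight skew
snapshot_t = 4.37                                   # HD snapshot capture time

print(nearest(stream_a, snapshot_t), nearest(stream_b, snapshot_t))
```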
Space telescope phase B definition study. Volume 2A: Science instruments, f48/96 planetary camera
NASA Technical Reports Server (NTRS)
Grosso, R. P.; Mccarthy, D. J.
1976-01-01
The analysis and preliminary design of the f48/96 planetary camera for the space telescope are discussed. The camera design is for application to the axial module position of the optical telescope assembly.
Video Capture of Perforator Flap Harvesting Procedure with a Full High-definition Wearable Camera
2016-01-01
Summary: Recent advances in wearable recording technology have enabled high-quality video recording of several surgical procedures from the surgeon’s perspective. However, the available wearable cameras are not optimal for recording the harvesting of perforator flaps because they are too heavy and cannot be attached to the surgical loupe. The Ecous is a small high-resolution camera that was specially developed for recording loupe magnification surgery. This study investigated the use of the Ecous for recording perforator flap harvesting procedures. The Ecous SC MiCron is a high-resolution camera that can be mounted directly on the surgical loupe. The camera is light (30 g) and measures only 28 × 32 × 60 mm. We recorded 23 perforator flap harvesting procedures with the Ecous connected to a laptop through a USB cable. The elevated flaps included 9 deep inferior epigastric artery perforator flaps, 7 thoracodorsal artery perforator flaps, 4 anterolateral thigh flaps, and 3 superficial inferior epigastric artery flaps. All procedures were recorded with no equipment failure. The Ecous recorded the technical details of the perforator dissection at a high-resolution level. The surgeon did not feel any extra stress or interference when wearing the Ecous. The Ecous is an ideal camera for recording perforator flap harvesting procedures. It fits onto the surgical loupe perfectly without creating additional stress on the surgeon. High-quality video from the surgeon’s perspective makes accurate documentation of the procedures possible, thereby enhancing surgical education and allowing critical self-reflection. PMID:27482504
Developing a Low-Cost System for 3d Data Acquisition
NASA Astrophysics Data System (ADS)
Kossieris, S.; Kourounioti, O.; Agrafiotis, P.; Georgopoulos, A.
2017-11-01
In this paper, a low-cost system is described which aims to facilitate fast and reliable 3D documentation by acquiring the necessary data in an outdoor environment for the 3D documentation of façades, especially in the case of very narrow streets. In particular, it provides a viable solution for buildings up to 8-10 m high and streets as narrow as 2 m or even less. In cases like that, it is practically impossible or highly time-consuming to acquire images in a conventional way. This practice would lead to a huge number of images and long processing times. The developed system was tested in the narrow streets of a medieval village on the Greek island of Chios. There, in order to bypass the problem of short taking distances, high definition action cameras were used together with a 360° camera; such cameras are usually fitted with very wide-angle lenses, are capable of acquiring high definition images, are rather cheap and, most importantly, extremely light. Results suggest that the system can perform fast 3D data acquisition adequate for deliverables of high quality.
NASA Astrophysics Data System (ADS)
Crone, T. J.; Knuth, F.; Marburg, A.
2016-12-01
A broad array of Earth science problems can be investigated using high-definition video imagery from the seafloor, ranging from those that are geological and geophysical in nature to those that are biological and water-column related. A high-definition video camera was installed as part of the Ocean Observatories Initiative's core instrument suite on the Cabled Array, a real-time fiber optic data and power system that stretches from the Oregon Coast to Axial Seamount on the Juan de Fuca Ridge. This camera runs a 14-minute pan-tilt-zoom routine 8 times per day, focusing on locations of scientific interest on and near the Mushroom vent in the ASHES hydrothermal field inside the Axial caldera. The system produces 13 GB of lossless HD video every 3 hours, and at the time of this writing it has generated 2100 recordings totaling 28.5 TB since it began streaming data into the OOI archive in August of 2015. Because of the large size of this dataset, downloading the entirety of the video for long-timescale investigations is not practical. We are developing a set of user-side tools for downloading single frames and frame ranges from the OOI HD camera raw data archive to aid users interested in using these data for their research. We use these tools to download about one year's worth of partial frame sets to investigate several questions regarding the hydrothermal system at ASHES, including the variability of bacterial "floc" in the water column, and changes in high temperature fluid fluxes using optical flow techniques. We show that while these user-side tools can facilitate rudimentary scientific investigations using the HD camera data, a server-side computing environment that allows users to explore this dataset without downloading any raw video will be required for more advanced investigations to flourish.
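A user-side frame grab of the kind described can be sketched with OpenCV against a locally downloaded archive file; the file name and frame indices below are placeholders, not the actual OOI tooling or archive layout:

```python
import cv2

# Hypothetical sketch of user-side single-frame extraction; the real
# OOI archive layout and the authors' tools differ.
VIDEO = "CAMHDA301-sample.mov"      # placeholder archive file name
FRAMES = [0, 1000, 2000]            # frame indices of interest (assumed)

cap = cv2.VideoCapture(VIDEO)
for idx in FRAMES:
    cap.set(cv2.CAP_PROP_POS_FRAMES, idx)   # seek to the requested frame
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(f"frame_{idx:06d}.png", frame)
cap.release()
```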
An integrated port camera and display system for laparoscopy.
Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E
2010-05-01
In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.
Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+
Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J.
2015-01-01
Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons’ point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon’s perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote and was controlled by the surgeon. Results: Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera’s automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video. PMID:25750851
2007-09-01
the projective camera matrix (P), a 3×4 matrix that represents both the intrinsic and extrinsic parameters of a camera. It is used to... K contains the intrinsic parameters of the camera and [R | t] represents the extrinsic parameters of the camera. By definition, the extrinsic ... extrinsic parameters are known then the camera is said to be calibrated. If only the intrinsic parameters are known, then the projective camera can
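Written out in conventional pinhole-model notation (the fragment above is truncated in the source; the symbols below follow the standard textbook convention):

```latex
x \simeq P\,X, \qquad
P = K\,[\,R \mid t\,], \qquad
K =
\begin{pmatrix}
f_x & s   & c_x \\
0   & f_y & c_y \\
0   & 0   & 1
\end{pmatrix}
```

Here x and X are homogeneous image and world points, R and t the rotation and translation, f_x and f_y the focal lengths in pixels, s the skew, and (c_x, c_y) the principal point.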
Space telescope phase B definition study. Volume 2A: Science instruments, f24 field camera
NASA Technical Reports Server (NTRS)
Grosso, R. P.; Mccarthy, D. J.
1976-01-01
The analysis and design of the F/24 field camera for the space telescope are discussed. The camera was designed for application to the radial bay of the optical telescope assembly and has an on axis field of view of 3 arc-minutes by 3 arc-minutes.
Optimal UAV Path Planning for Tracking a Moving Ground Vehicle with a Gimbaled Camera
2014-03-27
micro SD card slot to record all video taken at 1080p resolution. This feature allows the team to record the high definition video taken by the...
NASA Technical Reports Server (NTRS)
1976-01-01
Development of the F/48, F/96 Planetary Camera for the Large Space Telescope is discussed. Instrument characteristics, optical design, and CCD camera submodule thermal design are considered along with structural subsystem and thermal control subsystem. Weight, electrical subsystem, and support equipment requirements are also included.
Surgical video recording with a modified GoPro Hero 4 camera
Lin, Lily Koo
2016-01-01
Background Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined if a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Method The stock lens mount and lens were removed from a GoPro Hero 4 camera, which was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Results Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. Conclusion The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination. PMID:26834455
Imaging System for Vaginal Surgery.
Taylor, G Bernard; Myers, Erinn M
2015-12-01
The vaginal surgeon is challenged with performing complex procedures within a surgical field of limited light and exposure. The video telescopic operating microscope is an illumination and imaging system that provides visualization during open surgical procedures with a limited field of view. The imaging system is positioned within the surgical field and then secured to the operating room table with a maneuverable holding arm. A high-definition camera and Xenon light source allow transmission of the magnified image to a high-definition monitor in the operating room. The monitor screen is positioned above the patient for the surgeon and assistants to view in real time throughout the operation. The video telescopic operating microscope system was used to provide surgical illumination and magnification during total vaginal hysterectomy and salpingectomy, midurethral sling, and release of vaginal scar procedures. All procedures were completed without complications. The video telescopic operating microscope provided illumination of the vaginal operative field and display of the magnified image on high-definition monitors in the operating room for the surgeon and staff to simultaneously view the procedures. The video telescopic operating microscope provides high-definition display, magnification, and illumination during vaginal surgery.
Hybrid wavefront sensing and image correction algorithm for imaging through turbulent media
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Robertson Rzasa, John; Ko, Jonathan; Davis, Christopher C.
2017-09-01
It is well known that passive image correction of turbulence distortions often involves using geometry-dependent deconvolution algorithms. On the other hand, active imaging techniques using adaptive optic correction should use the distorted wavefront information for guidance. Our work shows that a hybrid hardware-software approach makes it possible to obtain accurate and highly detailed images through turbulent media. The processing algorithm also requires far fewer iterations than conventional image processing algorithms. In our proposed approach, a plenoptic sensor is used as a wavefront sensor to guide post-stage image correction on a high-definition zoomable camera. Conversely, we show that given the ground truth of the highly detailed image and the plenoptic imaging result, we can generate an accurate prediction of the blurred image on a traditional zoomable camera. Similarly, the ground truth combined with the blurred image from the zoomable camera would provide the wavefront conditions. In application, our hybrid approach can be used as an effective way to conduct object recognition in a turbulent environment where the target has been significantly distorted or is even unrecognizable.
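The paper's plenoptic-guided correction algorithm is not reproduced here; as a generic stand-in for the post-stage deconvolution step, the sketch below applies Wiener deconvolution once a point-spread-function (PSF) estimate is available. The synthetic Gaussian PSF is an assumption for illustration; in the hybrid scheme it would be derived from the wavefront measurement.

```python
import numpy as np
from skimage import restoration

# Synthetic Gaussian PSF standing in for one derived from the
# plenoptic wavefront measurement (assumption for illustration).
g = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()

def correct_frame(blurred: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """Wiener-deconvolve a turbulence-blurred frame given a PSF estimate.
    The balance parameter trades noise amplification against sharpness."""
    return restoration.wiener(blurred, psf, balance=0.1)

# Example with synthetic data
frame = np.random.rand(256, 256)
restored = correct_frame(frame, psf)
```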
NASA Astrophysics Data System (ADS)
Sobue, Shinichi; Yamazaki, Junichi; Matsumoto, Shuichi; Konishi, Hisahiro; Maejima, Hironori; Sasaki, Susumu; Kato, Manabu; Mitsuhashi, Seiji; Tachino, Junichi
The lunar explorer SELENE (also called KAGUYA) carried thirteen scientific mission instruments to reveal the origin and evolution of the Moon and to investigate its possible future utilization. In addition to the scientific instruments, a high-definition TV (HDTV) camera provided by the Japan Broadcasting Corporation (NHK) was carried on KAGUYA to promote public outreach. We usually use housekeeping telemetry data to derive the satellite attitude, along with orbit determination and propagated information. However, it takes time to derive this information, since orbit determination and propagation calculations require the use of the orbital model. When a malfunction of the KAGUYA reaction wheel occurred, we could not obtain correct attitude information, which meant that we did not have a correct orbit determination in a timely fashion. However, when we checked the HDTV movies, we found that horizon information on the lunar surface derived from the HDTV moving images, used as a horizon sensor, was very useful for determining the attitude of KAGUYA. We then compared this information with the attitude information derived from orbital telemetry to validate the accuracy of the HDTV-derived estimation. This comparison showed good agreement for the pitch attitude, and we could estimate the pitch angle change during KAGUYA mission operations simply and quickly. In this study, we show the usefulness of this HDTV camera as a horizon sensor.
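To make the horizon-sensor geometry concrete, a minimal sketch is given below. The spherical-Moon dip model, pinhole projection, and sign convention are illustrative assumptions, not the KAGUYA team's published procedure.

```python
import math

def pitch_from_horizon(y_horizon_px: float, y_center_px: float,
                       pixel_pitch_mm: float, focal_mm: float,
                       altitude_km: float,
                       moon_radius_km: float = 1737.4) -> float:
    """Estimate camera pitch (degrees) from the horizon row in an image.

    The apparent horizon sits below the local horizontal by the dip
    angle, which depends on altitude; the offset of the horizon line
    from the image center then gives the residual pitch. Assumes image
    rows increase downward and a pinhole camera model.
    """
    # Dip of the horizon below local horizontal for a spherical Moon
    dip = math.acos(moon_radius_km / (moon_radius_km + altitude_km))
    # Angle subtended by the horizon's offset from the optical axis
    offset = math.atan((y_horizon_px - y_center_px)
                       * pixel_pitch_mm / focal_mm)
    return math.degrees(offset - dip)
```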
Lee, Chang Kyu; Kim, Youngjun; Lee, Nam; Kim, Byeongwoo; Kim, Doyoung; Yi, Seong
2017-02-15
Study design: feasibility study of commercially available action cameras for recording video of spine surgery. Recent innovations in wearable action cameras with high-definition video recording enable surgeons to use a camera in the operation with ease and without high costs. The purpose of this study is to compare the feasibility, safety, and efficacy of commercially available action cameras in recording video of spine surgery. There are early reports of medical professionals using Google Glass throughout the hospital, the Panasonic HX-A100 action camera, and GoPro; this is the first such report for spine surgery. Three commercially available cameras were tested: the GoPro Hero 4 Silver, Google Glass, and the Panasonic HX-A100 action camera. A typical spine surgery, posterior lumbar laminectomy and fusion, was selected for video recording. The three cameras were used by one surgeon and video was recorded throughout the operation. The comparison was made from the perspectives of human factors, specifications, and video quality. The most convenient and lightweight device to wear and hold throughout the long operation was Google Glass. Regarding image quality, all devices except Google Glass supported an HD format, and GoPro uniquely offered 2.7K and 4K resolutions; video resolution was best with GoPro. Regarding field of view, GoPro can adjust the point of interest and field of view according to the surgery, and its narrow FOV option was the best for recording shareable video clips. Google Glass has potential through application programs. Connectivity such as Wi-Fi and Bluetooth enables video streaming for an audience, but only Google Glass has a two-way communication feature. Action cameras have the potential to improve patient safety, operator comfort, and procedure efficiency in the field of spinal surgery, and to broadcast surgery, with further development of the devices and application programs in the future. N/A.
Full-Scale Passive Earth Entry Vehicle Landing Tests: Methods and Measurements
NASA Technical Reports Server (NTRS)
Littell, Justin D.; Kellas, Sotiris
2018-01-01
During the summer of 2016, a series of drop tests was conducted on two passive earth entry vehicle (EEV) test articles at the Utah Test and Training Range (UTTR). The tests were conducted to evaluate the structural integrity of a realistic EEV vehicle under anticipated landing loads. The test vehicles were lifted to an altitude of approximately 400 m via helicopter and released via a release hook into a predesignated 61 m landing zone. Onboard accelerometers measured vehicle free-flight and impact loads. High-speed cameras on the ground tracked the free-falling vehicles, and the data were used to calculate critical impact parameters during the final seconds of flight. Additional sets of high definition and ultra-high definition cameras supplemented the high-speed data by capturing the release and free flight of the test articles. Three tests were successfully completed and showed that the passive vehicle design was able to withstand the impact loads from nominal and off-nominal impacts at landing velocities of approximately 29 m/s. Two out of three tests resulted in off-nominal impacts due to a combination of high winds at altitude and the method used to suspend the vehicle from the helicopter. Both the video and the acceleration data captured are examined and discussed. Finally, recommendations for improved release and instrumentation methods are presented.
2015-06-01
Atmospheric Administration (NOAA) at the "Ordnance Reef" site off of Waianae, Hawaii by divers using a hand-held high-definition video (HDV) camera... for the Ordnance Reef dataset as well, though less dramatic than the Miami data because the 2-D results were high to begin with at Ordnance Reef... generally >80% accuracy. Discrimination of environments was high for the major seabed types. For example, sand and mixed sand-seagrass were classified with
A portable high-definition electronic endoscope based on embedded system
NASA Astrophysics Data System (ADS)
Xu, Guang; Wang, Liqiang; Xu, Jin
2012-11-01
This paper presents a low-power, portable high-definition (HD) electronic endoscope based on a Cortex-A8 embedded system. A 1/6-inch CMOS image sensor is used to acquire HD images with 1280×800 pixels. The camera interface (CAMIF) of the A8 is designed to support images of various sizes and multiple video input formats such as the ITU-R BT.601/656 standards. Image rotation (90 degrees clockwise) and image processing functions are achieved by the CAMIF. The decode engine of the processor plays back or records HD video at 30 frames per second, and a built-in HDMI interface transmits high definition images to an external display. Image processing procedures such as demosaicking, color correction, and auto white balance are realized on the A8 platform. Other functions are selected through OSD settings. An LCD panel displays the real-time images. Snapshot pictures or compressed videos are saved to an SD card or transmitted to a computer through a USB interface. The size of the camera head is 4×4.8×15 mm with more than 3 meters of working distance. The whole endoscope system can be powered by a lithium battery, with the advantages of miniature size, low cost, and portability.
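The paper does not detail its auto white balance method; a minimal gray-world sketch, one common approach, is shown below for illustration.

```python
import numpy as np

def gray_world_awb(rgb: np.ndarray) -> np.ndarray:
    """Gray-world auto white balance: scale each channel so that the
    three channel means become equal.

    Input is an HxWx3 float array with values in [0, 1]."""
    means = rgb.reshape(-1, 3).mean(axis=0)   # per-channel means
    gains = means.mean() / means              # per-channel gains
    return np.clip(rgb * gains, 0.0, 1.0)

# Example: balance a synthetic frame with a blue cast
frame = np.random.rand(480, 640, 3) * np.array([0.7, 0.8, 1.0])
balanced = gray_world_awb(frame)
```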
Konduru, Anil Reddy; Yelikar, Balasaheb R; Sathyashree, K V; Kumar, Ankur
2018-01-01
Open source technologies and mobile innovations have radically changed the way people interact with technology. These innovations and advancements have been used across various disciplines and already have a significant impact. Microscopy, with its focus on visually appealing, contrasting colors for better appreciation of morphology, forms the core of disciplines such as pathology, microbiology, and anatomy. Here, learning happens with the aid of multi-head microscopes and digital camera systems for teaching larger groups and for organizing interactive sessions for students or faculty of other departments. The cost of original equipment manufacturer (OEM) camera systems is a limiting factor in bringing this useful technology to all locations. To avoid this, we have used low-cost technologies like the Raspberry Pi, Mobile High-Definition Link, and 3D printing for adapters to create portable camera systems. Adopting these open source technologies enabled us to connect any binocular or trinocular microscope to a projector or HD television at a fraction of the cost of the OEM camera systems, with comparable quality. These systems, in addition to being cost-effective, have also provided the added advantage of portability, thus providing the much-needed flexibility at various teaching locations.
NASA Astrophysics Data System (ADS)
Takahashi, Yukihiro; Sato, Mitsuteru; Imai, Masataka; Lorenz, Ralph; Yair, Yoav; Aplin, Karen; Fischer, Georg; Nakamura, Masato; Ishii, Nobuaki; Abe, Takumi; Satoh, Takehiko; Imamura, Takeshi; Hirose, Chikako; Suzuki, Makoto; Hashimoto, George L.; Hirata, Naru; Yamazaki, Atsushi; Sato, Takao M.; Yamada, Manabu; Murakami, Shin-ya; Yamamoto, Yukio; Fukuhara, Tetsuya; Ogohara, Kazunori; Ando, Hiroki; Sugiyama, Ko-ichiro; Kashimura, Hiroki; Ohtsuki, Shoko
2018-05-01
The existence of lightning discharges in the Venus atmosphere has been controversial for more than 30 years, with many positive and negative reports published. The lightning and airglow camera (LAC) onboard the Venus orbiter, Akatsuki, was designed to observe the light curve of possible flashes at a sufficiently high sampling rate to discriminate lightning from other sources and can thereby perform a more definitive search for optical emissions. Akatsuki arrived at Venus in December 2015, 5 years following its launch. The initial operations of LAC through November 2016 included a progressive increase in the high voltage applied to the avalanche photodiode detector. LAC began lightning survey observations in December 2016. It was confirmed that the operational high voltage was achieved and that the triggering system functions correctly. LAC lightning search observations are planned to continue for several years.
Takemura, Akihiro; Ueda, Shinichi; Noto, Kimiya; Kurata, Yuichi; Shoji, Saori
2011-01-01
In this study, we proposed and evaluated a positional accuracy assessment method with two high-resolution digital cameras for add-on six-degrees-of-freedom radiotherapy (6D) couches. Two high resolution digital cameras (D5000, Nikon Co.) were used in this accuracy assessment method. These cameras were placed on two orthogonal axes of a linear accelerator (LINAC) coordinate system and focused on the isocenter of the LINAC. Pictures of a needle that was fixed on the 6D couch were taken by the cameras during couch motions of translation and rotation of each axis. The coordinates of the needle in the pictures were obtained using manual measurement, and the coordinate error of the needle was calculated. The accuracy of a HexaPOD evo (Elekta AB, Sweden) was evaluated using this method. All of the mean values of the X, Y, and Z coordinate errors in the translation tests were within ±0.1 mm. However, the standard deviation of the Z coordinate errors in the Z translation test was 0.24 mm, which is higher than the others. In the X rotation test, we found that the X coordinate of the rotational origin of the 6D couch was shifted. We proposed an accuracy assessment method for a 6D couch. The method was able to evaluate the accuracy of the motion of only the 6D couch and revealed the deviation of the origin of the couch rotation. This accuracy assessment method is effective for evaluating add-on 6D couch positioning.
25 CFR 543.2 - What are the definitions for this part?
Code of Federal Regulations, 2013 CFR
2013-04-01
..., mechanical, or other technologic form, that function together to aid the play of one or more Class II games... a particular game, player interface, shift, or other period. Count room. A secured room where the... validated directly by a voucher system. Dedicated camera. A video camera that continuously records a...
Anderson, Adam L; Lin, Bingxiong; Sun, Yu
2013-12-01
This work first overviews a novel design, and prototype implementation, of a virtually transparent epidermal imagery (VTEI) system for laparo-endoscopic single-site (LESS) surgery. The system uses a network of multiple micro-cameras and multiview mosaicking to obtain a panoramic view of the surgery area. The prototype VTEI system also projects the generated panoramic view on the abdomen area to create a transparent display effect that mimics equivalent, but higher risk, open-cavity surgeries. The specific research focus of this paper is on two important aspects of a VTEI system: 1) in vivo wireless high-definition (HD) video transmission and 2) multi-image processing, both of which play key roles in next-generation systems. For transmission and reception, this paper proposes a theoretical wireless communication scheme for high-definition video in situations that require extremely small-footprint image sensors and in zero-latency applications. In such situations the metrics typically optimized in communication schemes, such as power and data rate, are far less important than latency and hardware footprint, which absolutely preclude their use if not satisfied. This work proposes the use of a novel Frequency-Modulated Voltage-Division Multiplexing (FM-VDM) scheme where sensor data is kept analog and transmitted via "voltage-multiplexed" signals that are also frequency-modulated. Once images are received, a novel Homographic Image Mosaicking and Morphing (HIMM) algorithm is proposed to stitch images from the respective cameras into a single cohesive view of the surgical area, while compensating for irregular surfaces in real time. In VTEI, this view is then visible to the surgeon directly on the patient to give an "open cavity" feel to laparoscopic procedures.
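The HIMM algorithm itself is not reproduced here; as a rough stand-in, the sketch below performs a generic feature-based planar homography stitch with OpenCV, without HIMM's real-time compensation for irregular surfaces.

```python
import cv2
import numpy as np

def stitch_pair(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Warp img_b into img_a's frame via a feature-based homography.
    Assumes grayscale inputs and a mostly planar overlapping scene."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a),
                     key=lambda m: m.distance)[:200]
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (2 * w, h))
    canvas[0:h, 0:w] = img_a     # naive overlay, no blending
    return canvas
```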
View of Los Angeles, California area
1975-07-16
AST-14-881 (16 July 1975) --- An excellent view of Los Angeles, California, as photographed from the Apollo spacecraft in Earth orbit during the joint U.S.-USSR Apollo-Soyuz Test Project (ASTP) mission. Downtown Los Angeles is near the center of the picture. The photograph was taken at an altitude of 193 kilometers (120 statute miles), with a 70mm Hasselblad camera using SO-242 high-definition Ektachrome film.
Lindsay, Joseph; McLean, J Allen; Bains, Amrita; Ying, Tom; Kuo, M H
2013-01-01
Computer devices using touch-enabled technology are becoming more prevalent today. The application of a touch screen high definition surgical monitor could allow not only high definition video from an endoscopic camera to be displayed, but also the display of, and interaction with, relevant patient and health related data. However, this technology has not been quickly embraced by all health care organizations. Although traditional keyboard or mouse-based software programs may function flawlessly on a touch-based device, many are not practical due to their small buttons, small fonts, and very complex menu systems. This paper describes an approach taken to overcome these problems. A real case study was used to demonstrate the novelty and efficiency of the proposed method.
NASA Astrophysics Data System (ADS)
Groch, A.; Seitel, A.; Hempel, S.; Speidel, S.; Engelbrecht, R.; Penne, J.; Höller, K.; Röhl, S.; Yung, K.; Bodenstedt, S.; Pflaum, F.; dos Santos, T. R.; Mersmann, S.; Meinzer, H.-P.; Hornegger, J.; Maier-Hein, L.
2011-03-01
One of the main challenges related to computer-assisted laparoscopic surgery is the accurate registration of pre-operative planning images with patient's anatomy. One popular approach for achieving this involves intraoperative 3D reconstruction of the target organ's surface with methods based on multiple view geometry. The latter, however, require robust and fast algorithms for establishing correspondences between multiple images of the same scene. Recently, the first endoscope based on Time-of-Flight (ToF) camera technique was introduced. It generates dense range images with high update rates by continuously measuring the run-time of intensity modulated light. While this approach yielded promising results in initial experiments, the endoscopic ToF camera has not yet been evaluated in the context of related work. The aim of this paper was therefore to compare its performance with different state-of-the-art surface reconstruction methods on identical objects. For this purpose, surface data from a set of porcine organs as well as organ phantoms was acquired with four different cameras: a novel Time-of-Flight (ToF) endoscope, a standard ToF camera, a stereoscope, and a High Definition Television (HDTV) endoscope. The resulting reconstructed partial organ surfaces were then compared to corresponding ground truth shapes extracted from computed tomography (CT) data using a set of local and global distance metrics. The evaluation suggests that the ToF technique has high potential as means for intraoperative endoscopic surface registration.
Report of the facility definition team spacelab UV-Optical Telescope Facility
NASA Technical Reports Server (NTRS)
1975-01-01
Scientific requirements for the Spacelab Ultraviolet-Optical Telescope (SUOT) facility are presented. Specific programs involving high angular resolution imagery over wide fields, far ultraviolet spectroscopy, precisely calibrated spectrophotometry and spectropolarimetry over a wide wavelength range, and planetary studies, including high resolution synoptic imagery, are recommended. Specifications for the mounting configuration, instrument mounting system, optical parameters, and the pointing and stabilization system are presented. Concepts for the focal plane instruments are defined. The functional requirements of the direct imaging camera, far ultraviolet spectrograph, and the precisely calibrated spectrophotometer are detailed, and the planetary camera concept is outlined. Operational concepts described in detail are: the makeup and functions of the shuttle payload crew, extravehicular activity requirements, telescope control and data management, the payload operations control room, orbital constraints, and orbital interfaces (stabilization, maneuvering requirements and attitude control, contamination, utilities, and payload weight considerations).
Infrared engineering for the advancement of science: A UK perspective
NASA Astrophysics Data System (ADS)
Baker, Ian M.
2017-02-01
Leonardo MW (formerly Selex ES) has been developing infrared sensors and cameras for over 62 years at two main sites, Southampton and Basildon. Funding mainly from the UK MOD has seen the technology progress from single-element PbSe sensors to advanced, high definition HgCdTe cameras, widely deployed in many fields today. However, in the last 10 years the major challenges and research funding have come from projects within the scientific sphere, particularly astronomy and space. Low photon flux, high resolution spectroscopy, and fast frame rates are the motivation to drive the sensitivity of infrared detectors to the single photon level. These detectors make use of almost noiseless avalanche gain in HgCdTe to achieve the required sensitivity and speed of response. Metal Organic Vapour Phase Epitaxy (MOVPE) growth on low-cost GaAs substrates provides the capability for crucial bandgap engineering to suppress breakdown currents and allow high avalanche gain even in very low background conditions. This paper describes the progress so far and provides a glimpse of the future.
NASA Astrophysics Data System (ADS)
Nishiyama, T.; Kataoka, J.; Kishimoto, A.; Fujita, T.; Iwamoto, Y.; Taya, T.; Ohsuka, S.; Nakamura, S.; Hirayanagi, M.; Sakurai, N.; Adachi, S.; Uchiyama, T.
2014-12-01
After the Japanese nuclear disaster in 2011, large amounts of radioactive isotopes were released and still remain a serious problem in Japan. Consequently, various gamma cameras are being developed to help identify radiation hotspots and ensure effective decontamination operations. The Compton camera utilizes the kinematics of Compton scattering to reconstruct images without using a mechanical collimator, and features a wide field of view. For instance, we have developed a novel Compton camera that features a small size (13 × 14 × 15 cm³) and light weight (1.9 kg), but which also achieves high sensitivity thanks to Ce:GAGG scintillators optically coupled with MPPC arrays. By definition, in such a Compton camera, gamma rays are expected to scatter in the "scatterer" and then be fully absorbed in the "absorber" (in what is called a forward-scattered event). However, high energy gamma rays often interact with the detector in the opposite direction, initially scattering in the absorber and then being absorbed in the scatterer, in what is called a "back-scattered" event. Any contamination by such back-scattered events is known to substantially degrade the quality of gamma-ray images, but determining the order of gamma-ray interaction based solely on energy deposits in the scatterer and absorber is quite difficult. For this reason, we propose a novel yet simple Compton camera design that includes a rear-panel shield (a few mm thick) consisting of W or Pb located just behind the scatterer. Since the energy of scattered gamma rays in back-scattered events is much lower than that in forward-scattered events, we can effectively discriminate and reduce back-scattered events to improve the signal-to-noise ratio in the images. This paper presents our detailed optimization of the rear-panel shield using Geant4 simulation, and describes a demonstration test using our Compton camera.
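The energy argument behind the shield follows from the standard Compton relation: at back-scatter (θ = 180°) the scattered photon energy saturates near half the electron rest energy, well below typical forward-scattered energies.

```latex
E' = \frac{E}{1 + \dfrac{E}{m_e c^2}\left(1 - \cos\theta\right)},
\qquad
E'_{\theta = 180^\circ} = \frac{E}{1 + 2E/m_e c^2}
\;\le\; \frac{m_e c^2}{2} \approx 256\ \mathrm{keV}.
```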
Chen, Brian R; Poon, Emily; Alam, Murad
2017-08-01
Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point and shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, were evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point and shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.
ISS Expedition 53 U.S. Spacewalk 46
2017-10-20
Outside the International Space Station, Expedition 53 Commander Randy Bresnik and Flight Engineer Joe Acaba of NASA conducted a spacewalk Oct. 20 to continue upgrades to and maintenance of station hardware. It was the third spacewalk in two weeks for Expedition 53 crewmembers outside the Quest airlock. During the excursion, Bresnik and Acaba replaced a failed camera light on the new Latching End Effector “hand” on the Canadarm2 robotic arm, installed a new high definition camera on the starboard truss of the complex, replaced a fuse on the Dextre Special Dexterous Manipulator attachment for the arm and removed thermal blankets from two spare electrical routing units for future robotic replacement work, if required. It was the fifth spacewalk in Bresnik’s career and the third for Acaba.
Hayashida, Tetsuya; Iwasaki, Hiroaki; Masaoka, Kenichiro; Shimizu, Masanori; Yamashita, Takayuki; Iwai, Wataru
2017-06-26
We selected appropriate indices for color rendition and determined their recommended values for ultra-high-definition television (UHDTV) production using white LED lighting. Since the spectral sensitivities of UHDTV cameras can be designed to approximate the ideal spectral sensitivities of UHDTV colorimetry, they have more accurate color reproduction than HDTV cameras, and thus the color-rendering properties of the lighting are critical. Comparing images taken under white LEDs with conventional color rendering indices (Ra, R9-14) and recently proposed methods for evaluating color rendition (CQS, TM-30, Qa, and SSI), we found the combination of Ra and R9 appropriate. For white LED lighting, Ra ≥ 90 and R9 ≥ 80 are recommended for UHDTV production.
NASA Astrophysics Data System (ADS)
Seo, Hokuto; Aihara, Satoshi; Namba, Masakazu; Watabe, Toshihisa; Ohtake, Hiroshi; Kubota, Misao; Egami, Norifumi; Hiramatsu, Takahiro; Matsuda, Tokiyoshi; Furuta, Mamoru; Nitta, Hiroshi; Hirao, Takashi
2010-01-01
Our group has been developing a new type of image sensor overlaid with three organic photoconductive films, each individually sensitive to only one of the primary color components (blue (B), green (G), or red (R) light), with the aim of developing a compact, high resolution color camera without any color separation optical system. In this paper, we first describe the unique characteristics of organic photoconductive films. The photoconductive properties of a film, in particular its wavelength selectivity, can be tuned simply by the choice of organic materials, well enough to divide the incident light into the three primary colors. Color separation with vertically stacked organic films is also shown. In addition, a resolution of the organic photoconductive films sufficient for high-definition television (HDTV) was confirmed in a shooting experiment using a camera tube. Secondly, as a step toward our goal, we fabricated a stacked organic image sensor with G- and R-sensitive organic photoconductive films, each of which had a zinc oxide (ZnO) thin film transistor (TFT) readout circuit, and demonstrated image pickup at a TV frame rate. A color image with a resolution corresponding to the pixel number of the ZnO TFT readout circuit was obtained from the stacked image sensor. These results show the potential for the development of high-resolution prism-less color cameras with stacked organic photoconductive films.
NASA Technical Reports Server (NTRS)
Tarbell, Theodore D.; Topka, Kenneth P.
1992-01-01
The definition phase of a scientific study of active regions on the sun by balloon flight of a former Spacelab instrument, the Solar Optical Universal Polarimeter (SOUP) is described. SOUP is an optical telescope with image stabilization, tunable filter and various cameras. After the flight phase of the program was cancelled due to budgetary problems, scientific and engineering studies relevant to future balloon experiments of this type were completed. High resolution observations of the sun were obtained using SOUP components at the Swedish Solar Observatory in the Canary Islands. These were analyzed and published in studies of solar magnetic fields and active regions. In addition, testing of low-voltage piezoelectric transducers was performed, which showed they were appropriate for use in image stabilization on a balloon.
Calibration of a dual-PTZ camera system for stereo vision
NASA Astrophysics Data System (ADS)
Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng
2010-08-01
In this paper, we propose a calibration process for the intrinsic and extrinsic parameters of dual-PTZ camera systems. The calibration is based on a complete definition of six coordinate systems fixed at the image planes, and the pan and tilt rotation axes of the cameras. Misalignments between estimated and ideal coordinates of image corners are formed into a cost function that is minimized by the Nelder-Mead simplex optimization method. Experimental results show that the system is able to obtain 3D coordinates of objects with a consistent accuracy of 1 mm when the distance between the dual-PTZ camera set and the objects is from 0.9 to 1.1 meters.
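A minimal sketch of that optimization step with SciPy's Nelder-Mead is shown below; the two-parameter cost function is a placeholder, since the paper's parameterization of the six coordinate systems is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def corner_cost(params: np.ndarray, observed: np.ndarray,
                ideal: np.ndarray) -> float:
    """Placeholder cost: sum of squared misalignments between observed
    image-corner coordinates and those predicted from the calibration
    parameters. The 'prediction' here is an illustrative 2-parameter
    offset model standing in for the full camera parameterization."""
    predicted = ideal + params[:2]
    return float(np.sum((observed - predicted) ** 2))

observed = np.array([[101.2, 99.5], [201.1, 100.3]])
ideal = np.array([[100.0, 100.0], [200.0, 100.0]])
result = minimize(corner_cost, x0=np.zeros(2), args=(observed, ideal),
                  method="Nelder-Mead")
print(result.x)   # estimated offset parameters
```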
MPCM: a hardware coder for super slow motion video sequences
NASA Astrophysics Data System (ADS)
Alcocer, Estefanía; López-Granado, Otoniel; Gutierrez, Roberto; Malumbres, Manuel P.
2013-12-01
In the last decade, improvements in VLSI technology and image sensor technologies have led to a frenetic rush to provide image sensors with higher resolutions and faster frame rates. As a result, video devices have been designed to capture real-time video in high-resolution formats with frame rates reaching 1,000 fps and beyond. These ultrahigh-speed video cameras are widely used in scientific and industrial applications, such as car crash tests, combustion research, materials research and testing, fluid dynamics, and flow visualization, that demand real-time video capture at extremely high frame rates in high-definition formats. Therefore, data storage capability, communication bandwidth, processing time, and power consumption are critical parameters that should be carefully considered in their design. In this paper, we propose a fast FPGA implementation of a simple codec called modulo-pulse code modulation (MPCM) which is able to reduce the bandwidth requirements up to 1.7 times at the same image quality when compared with PCM coding. This allows current high-speed cameras to capture in a continuous manner through a 40-Gbit Ethernet point-to-point access.
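A toy software illustration of the modulo-PCM idea (not the paper's FPGA coder): transmit only the k low-order bits of each sample, and let the decoder resolve the ambiguity by choosing the candidate closest to a prediction, here the previously reconstructed sample.

```python
def mpcm_encode(samples, k=4):
    """Send only the k low-order bits of each 8-bit sample."""
    M = 1 << k
    return [s % M for s in samples]

def mpcm_decode(residues, first_sample, k=4):
    """Reconstruct by picking, for each residue, the 8-bit value
    congruent mod 2**k that is closest to the previous
    reconstructed sample (a simple predictor)."""
    M = 1 << k
    out, prev = [], first_sample
    for r in residues:
        base = prev - (prev % M) + r
        best = min((base - M, base, base + M),
                   key=lambda v: abs(v - prev))
        best = max(0, min(255, best))
        out.append(best)
        prev = best
    return out

# Round-trip check on a slowly varying pixel row
pixels = [120, 122, 125, 124, 130, 129]
assert mpcm_decode(mpcm_encode(pixels), pixels[0]) == pixels
```

The decoder succeeds as long as successive samples differ by less than half the modulus, which is why the scheme suits the strong frame-to-frame and pixel-to-pixel correlation of high-speed video.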
Development of high definition OCT system for clinical therapy of skin diseases
NASA Astrophysics Data System (ADS)
Baek, Daeyul; Seo, Young-Seok; Kim, Jung-Hyun
2018-02-01
OCT is a non-invasive imaging technique that can be applied to diagnose various skin diseases. Since its introduction in 1997, dermatology has used OCT technology to obtain high quality images of human skin. Accurate diagnosis of skin diseases makes it essential to develop OCT equipment that can obtain high quality images. We therefore developed a system that obtains high quality, high-resolution images by using a 1300 nm light source with a wide bandwidth and deep penetration depth, together with a camera capable of high sensitivity and high speed processing. We introduce the performance of the developed system and the clinical application data.
An Insect Eye Inspired Miniaturized Multi-Camera System for Endoscopic Imaging.
Cogal, Omer; Leblebici, Yusuf
2017-02-01
In this work, we present a miniaturized high definition vision system inspired by insect eyes, with a distributed illumination method, which can work in dark environments for proximity imaging applications such as endoscopy. Our approach is based on modeling biological systems with off-the-shelf miniaturized cameras combined with digital circuit design for real time image processing. We built a 5 mm radius hemispherical compound eye imaging a 180°×180° field of view while providing more than 1.1 megapixels (emulated ommatidia) as real-time video, with an inter-ommatidial angle Δφ = 0.5° at 18 mm radial distance. We made an FPGA implementation of the image processing system which is capable of generating 25 fps video with 1080 × 1080 pixel resolution at a 120 MHz processing clock frequency. When compared to insect-eye-mimicking systems of similar size in the literature, the system proposed in this paper features a 1000× increase in resolution. To the best of our knowledge, this is the first time that a compound eye with built-in illumination has been reported. We are offering our miniaturized imaging system for endoscopic applications like colonoscopy or laparoscopic surgery where there is a need for large field of view, high definition imagery. For that purpose we tested our system inside a human colon model. We also present the resulting images and videos from the human colon model in this paper.
Remote autopsy services: A feasibility study on nine cases.
Vodovnik, Aleksandar; Aghdam, Mohammad Reza F; Espedal, Dan Gøran
2017-01-01
Introduction: We have conducted a feasibility study on remote autopsy services in order to increase the flexibility of the service, with benefits for teaching and interdepartmental collaboration. Methods: Three senior staff pathologists, one senior autopsy technician and one junior resident participated in the study. Nine autopsies were performed by the autopsy technician or resident, supervised by the primary pathologist, through a secure, double-encrypted video link using Jabber Video (Cisco) with a high-speed broadband connection. The primary pathologist and autopsy room each connected to the secure virtual meeting room using 14″ laptops with built-in cameras (Hewlett-Packard). A portable high-definition web camera (Cisco) was used in the autopsy room. Primary and secondary pathologists independently interpreted and later compared gross findings for the purpose of quality assurance. The video was streamed live only during consultations and interpretation. A satisfaction survey on technical and professional aspects of the study was conducted. Results: Independent interpretations of gross findings between primary and secondary pathologists yielded full agreement. A definite cause of death in one complex autopsy was determined following discussions between pathologists and reviews of the clinical notes. Our satisfaction levels with the technical and professional aspects of the study were 87% and 97%, respectively. Discussion: Remote autopsy services are found to be feasible in the hands of experienced staff, with increased flexibility and interest of autopsy technicians in the service as a result.
Nuclear Radiation Degradation Study on HD Camera Based on CMOS Image Sensor at Different Dose Rates.
Wang, Congzheng; Hu, Song; Gao, Chunming; Feng, Chang
2018-02-08
In this work, we irradiated a high-definition (HD) industrial camera based on a commercial-off-the-shelf (COTS) CMOS image sensor (CIS) with Cobalt-60 gamma-rays. All components of the camera under test were fabricated without radiation hardening, except for the lens. The irradiation experiments on the HD camera under biased conditions were carried out at 1.0, 10.0, 20.0, 50.0 and 100.0 Gy/h. During the experiments, we found that the tested camera showed remarkable degradation after irradiation, and that the degradation differed with dose rate. With increasing dose rate, images of the same target become brighter. At a given dose rate, the radiation effect in bright areas is weaker than that in dark areas. Across dose rates, the higher the dose rate, the worse the radiation effect in both bright and dark areas, and the greater the standard deviations of the bright and dark areas become. Furthermore, through progressive degradation analysis of the captured images, experimental results demonstrate that the attenuation of signal-to-noise ratio (SNR) versus radiation time is not obvious at a fixed dose rate, and that the degradation becomes more serious with increasing dose rate. Additionally, the decrease rate of SNR at 20.0, 50.0 and 100.0 Gy/h is far greater than that at 1.0 and 10.0 Gy/h. Even so, we confirm that the HD industrial camera still works at 10.0 Gy/h during 8 h of measurements, with a moderate decrease of the SNR (5 dB). The work is valuable and can provide suggestions for camera users in radiation fields.
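The paper reports SNR in dB but does not spell out its estimator; one common choice (an assumption here) is the mean-to-standard-deviation ratio over a nominally uniform patch, as sketched below.

```python
import numpy as np

def region_snr_db(frame: np.ndarray, region) -> float:
    """SNR of a nominally uniform image region, in dB:
    20*log10(mean / standard deviation)."""
    patch = frame[region].astype(np.float64)
    return 20.0 * np.log10(patch.mean() / patch.std())

# Example with synthetic data: compare a bright and a dark patch
frame = (np.random.rand(1080, 1920) * 255).astype(np.uint8)
bright = (slice(100, 200), slice(100, 200))
dark = (slice(800, 900), slice(100, 200))
print(region_snr_db(frame, bright), region_snr_db(frame, dark))
```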
Perez-Garcia, H; Barquero, R
The correct determination and delineation of tumor/organ size is crucial in 2-D imaging in 131I therapy. These images are usually obtained using a system composed of a gamma camera and a high-energy collimator, although the system can produce artifacts in the image. This article analyses these artifacts and describes a correction filter that can eliminate them. Using free software, ImageJ, a central profile in the image is obtained and analyzed. Two components can be seen in the fluctuation of the profile: one associated with the stochastic nature of the radiation plus electronic noise, and the other varying periodically with spatial position due to the collimator. These frequencies are obtained analytically and compared with the frequencies in the Fourier transform of the profile. A specially developed filter removes the artifacts in the 2D Fourier transform of the DICOM image. This filter is tested using an image of a 15-cm-diameter Petri dish with 131I radioactive water (large object size), an image of a clinical 131I pill (small object size), and images of the remainder of the lesion of two patients treated with 3.7 GBq (100 mCi) and 4.44 GBq (120 mCi) of 131I, respectively, after thyroidectomy. The artifact is due to the hexagonal periodic structure of the collimator. The use of the filter on large-sized images reduces the fluctuation by 5.8-3.5%. In small-sized images, the FWHM can be determined in the filtered image, while this is impossible in the unfiltered image. The definition of the tumor boundary and the visualization of the activity distribution inside patient lesions improve drastically when the filter is applied to the corresponding images obtained with a high-energy (HE) gamma camera. The HURRA filter removes high-energy collimator artifacts in planar images obtained with a gamma camera without reducing the image resolution. It can be applied in any patient quantification study because the number of counts remains invariant. The filter makes possible the definition and delimitation of small uptakes, such as those presented in treatments with 131I. Copyright © 2016 Elsevier España, S.L.U. y SEMNIM. All rights reserved.
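A generic frequency-domain sketch of this kind of correction is given below; it is not the published HURRA implementation, and the peak coordinates are placeholders to be set from the analytically derived hexagonal-pattern frequencies.

```python
import numpy as np

def notch_filter(image: np.ndarray, peaks, radius: int = 3) -> np.ndarray:
    """Zero small neighborhoods around the given (row, col) frequency
    peaks in the DC-centered 2D spectrum, then invert.

    Peaks are given in fftshift-ed coordinates; the conjugate-symmetric
    peak is suppressed as well (assumes even image dimensions). The DC
    component is untouched, so the total count level is preserved."""
    F = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = np.ogrid[:image.shape[0], :image.shape[1]]
    for (pr, pc) in peaks:
        for r, c in ((pr, pc),
                     (image.shape[0] - pr, image.shape[1] - pc)):
            F[(rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# Example: suppress two hypothetical collimator peaks in a 256x256 image
img = np.random.poisson(100, (256, 256)).astype(np.float64)
cleaned = notch_filter(img, peaks=[(100, 140), (96, 120)])
```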
Second ISS Spacewalk in Two Weeks on This Week @NASA – September 2, 2016
2016-09-02
Outside the International Space Station, Expedition 48 Commander Jeff Williams and Flight Engineer Kate Rubins of NASA conducted a spacewalk Sept. 1 to retract a thermal radiator, install the first of several enhanced high definition cameras on the station’s truss and tighten bolts on a joint that enables one of the station’s solar arrays to rotate. This was the second spacewalk for the pair in just 13 days. They installed the station’s first international docking adapter during their previous spacewalk on Aug. 19. The adapter will provide a parking place for new U.S. commercial crew spacecraft delivering astronauts to the station on future missions. Also, Space Station Cameras Capture Hurricanes, Future Space Station Crews Prepare for Missions, Record-Breaking Galaxy Cluster Discovered, Up-Close with Jupiter, and more!
2017-01-17
The first time you see Planet Earth from space, it's stunning; when you've spent 534 days in space, more than any other American, it still is! On his most recent trip to the International Space Station, NASA astronaut Jeff Williams brought an Ultra High Definition video camera that he pointed at the planet 250 miles below; here he shares some of those images, and talks about the beauty of the planet, the variety of things to see, and the value of sharing that perspective with everyone who can't go to orbit in person. HD download link: https://archive.org/details/TheSpaceProgram UHD content download link: https://archive.org/details/NASA-Ultra-High-Definition
Mobile Situational Awareness Tool: Unattended Ground Sensor-Based Remote Surveillance System
2014-09-01
into prototyped WSNs. In 2012, the Raspberry Pi, an SBC with an ARM processor running GNU/Linux, also designed for students and hobbyists, entered... the market selling for only $25 each [30]. The Raspberry Pi was the size of a credit card, had the ability to connect to a wide variety of... peripherals to include Wi-Fi adapters and cameras, and had enough processing power to play high-definition video [31]. The Raspberry Pi proved to be
Uas for Archaeology - New Perspectives on Aerial Documentation
NASA Astrophysics Data System (ADS)
Fallavollita, P.; Balsi, M.; Esposito, S.; Melis, M. G.; Milanese, M.; Zappino, L.
2013-08-01
In this work some Unmanned Aerial Systems applications are discussed and applied to archaeological site survey and 3D model reconstruction. Interesting results are shown for three important sites of different ages in northern Sardinia (Italy). An easy, simplified procedure is proposed that permits the adoption of multi-rotor aircraft for daily archaeological survey during excavation and documentation, drawing on the state of the art in UAS design, flight control systems, high definition sensor cameras, and innovative photogrammetric software tools. Very high quality 3D model results are shown and discussed, along with how they have simplified archaeologists' work and decisions.
NASA Astrophysics Data System (ADS)
Robert, K.; Matabos, M.; Sarrazin, J.; Sarradin, P.; Lee, R. W.; Juniper, K.
2010-12-01
Hydrothermal vent environments are among the most dynamic benthic habitats in the ocean. The relative roles of physical and biological factors in shaping vent community structure remain unclear. Undersea cabled observatories offer the power and bandwidth required for high-resolution, time-series study of the dynamics of vent communities and the physico-chemical forces that influence them. The NEPTUNE Canada cabled instrument array at the Endeavour hydrothermal vents provides a unique laboratory for researchers to conduct long-term, integrated studies of hydrothermal vent ecosystem dynamics in relation to environmental variability. Beginning in September-October 2010, NEPTUNE Canada (NC) will be deploying a multi-disciplinary suite of instruments on the Endeavour Segment of the Juan de Fuca Ridge. Two camera and sensor systems will be used to study ecosystem dynamics in relation to hydrothermal discharge. These studies will make use of new experimental protocols for time-series observations that we have been developing since 2008 at other observatory sites connected to the VENUS and NC networks. These protocols include sampling design, camera calibration (i.e. structure, position, light, settings) and image analysis methodologies (see communication by Aron et al.). The camera systems to be deployed in the Main Endeavour vent field include a Sidus high definition video camera (2010) and the TEMPO-mini system (2011), designed by IFREMER (France). Real-time data from three sensors (O2, dissolved Fe, temperature) integrated with the TEMPO-mini system will enhance interpretation of imagery. For the first year of observations, a suite of internally recording temperature probes will be strategically placed in the field of view of the Sidus camera. These installations aim to monitor variations in vent community structure and dynamics (species composition and abundances, interactions within and among species) in response to changes in environmental conditions at different temporal scales. High-resolution time-series studies also provide a means of studying population dynamics, biological rhythms, organism growth and faunal succession. In addition to programmed time-series monitoring, the NC infrastructure will also permit manual and automated modification of observational protocols in response to natural events. This will enhance our ability to document potentially critical but short-lived environmental forces affecting vent communities.
Gemming, Luke; Rush, Elaine; Maddison, Ralph; Doherty, Aiden; Gant, Nicholas; Utter, Jennifer; Ni Mhurchu, Cliona
2015-01-28
Preliminary research has suggested that wearable cameras may reduce under-reporting of energy intake (EI) in self-reported dietary assessment. The aim of the present study was to test the validity of a wearable camera-assisted 24 h dietary recall against the doubly labelled water (DLW) technique. Total energy expenditure (TEE) was assessed over 15 d using the DLW protocol among forty adults (n 20 males, age 35 (sd 17) years, BMI 27 (sd 4) kg/m²; n 20 females, age 28 (sd 7) years, BMI 22 (sd 2) kg/m²). EI was assessed using three multiple-pass 24 h dietary recalls (MP24) on days 2-4, 8-10 and 13-15. On the days before each nutrition assessment, participants wore an automated wearable camera (SenseCam (SC)) in free-living conditions. The wearable camera images were viewed by the participants following the completion of the dietary recall, and their changes in self-reported intakes were recorded (MP24+SC). TEE and EI assessed by the MP24 and MP24+SC methods were compared. Among men, the MP24 and MP24+SC measures underestimated TEE by 17 and 9%, respectively (P < 0.001 and P = 0.02). Among women, these measures underestimated TEE by 13 and 7%, respectively (P < 0.001 and P = 0.004). The assistance of the wearable camera (MP24+SC) reduced the magnitude of under-reporting by 8% for men and 6% for women compared with the MP24 alone (P < 0.001 and P < 0.001). The increase in EI was predominantly from the addition of 265 unreported foods (often snacks) as revealed by the participants during the image review. Wearable cameras enhance the accuracy of self-report by providing passive and objective information regarding dietary intake. High-definition image sensors and increased imaging frequency may improve the accuracy further.
Micro-optical system based 3D imaging for full HD depth image capturing
NASA Astrophysics Data System (ADS)
Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan
2012-03-01
A 20 MHz-switching high-speed image shutter device for 3D image capture, and its application in a system prototype, are presented. For 3D image capture, the system exploits the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The proposed optical shutter device enables capture of a full HD depth image with mm-scale depth accuracy, the largest depth-image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent-sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, the 3D camera system prototype and image test results.
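Depth recovery in such a TOF system follows from the phase shift that the round trip imposes on the 20 MHz modulation. Below is a minimal sketch of the standard four-bucket phase computation, not the authors' actual pipeline; the function name and the simulated target distance are illustrative assumptions.

```python
import numpy as np

C = 299_792_458.0      # speed of light, m/s
F_MOD = 20e6           # modulation frequency from the abstract, Hz

def tof_depth(i0, i1, i2, i3):
    """Four-bucket ToF: intensity samples at 0, 90, 180, 270 degrees
    of the modulation period -> depth in metres (illustrative)."""
    phase = np.arctan2(i3 - i1, i0 - i2)      # wrapped phase
    phase = np.mod(phase, 2 * np.pi)          # map to [0, 2*pi)
    return C * phase / (4 * np.pi * F_MOD)    # depth = c*phi / (4*pi*f)

# Unambiguous range at 20 MHz is c / (2*f) = 7.5 m.
if __name__ == "__main__":
    d_true = 3.0                              # simulated target at 3.0 m
    phi = 4 * np.pi * F_MOD * d_true / C
    samples = [np.cos(phi + k * np.pi / 2) for k in range(4)]
    print(tof_depth(*samples))                # ~3.0
```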
Clinical utility of scintimammography: From the Anger-camera to new dedicated devices
NASA Astrophysics Data System (ADS)
Schillaci, Orazio; Danieli, Roberta; Romano, Pasquale; Cossu, Elsa; Simonetti, Giovanni
2006-12-01
Scintimammography is a functional imaging technique which uses a radiation detection camera to detect radionuclide tracers in the patient's breasts. Tracers are designed to accumulate in tumours more than in healthy tissue: the most widely used are Tc-99m sestamibi and Tc-99m tetrofosmin. Scintimammography is useful in some clinical indications as an adjunct to mammography: it is recommended for lesions where additional information is required to reach a definitive diagnosis. Patients with dubious mammograms may benefit from this test, as may women with dense breasts or with implants. Scintimammography is also a valuable diagnostic tool in patients with locally advanced breast cancer for monitoring and predicting response to neoadjuvant chemotherapy. Nevertheless, with an Anger camera this technique shows high sensitivity only for cancers >1 cm. Since other modalities are increasingly employed for the early identification of small abnormalities, the issue of detecting small cancers is critical for the future development and clinical utility of breast imaging with radiopharmaceuticals. The use of high-resolution cameras dedicated to breast imaging is the best option for improving the detection of small cancers: they allow greater flexibility in patient positioning and the availability of mammography-like projections. Moreover, the detector can be placed directly in contact with the breast, allowing mild compression that reduces the breast's thickness, thus increasing the target-to-background ratio and the sensitivity. These new devices have the potential to increase the total number of breast scintigraphies performed, thereby enhancing the role of nuclear medicine in breast cancer imaging.
Komai, Tomoyuki; Tsuchida, Shinji
2014-02-11
Samples and images of deep-water benthic decapod crustaceans were collected from the Nikko Seamounts, Mariana Arc, at depths of 520-680 m, using the remotely operated vehicle "Hyper-Dolphin", equipped with a high definition camera, digital camera, manipulators and slurp gun (suction sampler). The following seven species were collected, of which three are new to science: Plesionika unicolor n. sp. (Caridea: Pandalidae), Homeryon armarium Galil, 2000 (Polychelida: Polychelidae), Eumunida nikko n. sp. (Anomura: Eumunididae), Michelopagurus limatulus (Henderson, 1888) (Anomura: Paguridae), Galilia petricola n. sp. (Brachyura: Leucosiidae), Cyrtomaia micronesica Richer de Forges & Ng, 2007 (Brachyura: Inachidae), and Progeryon mus Ng & Guinot, 1999 (Brachyura: Progeryonidae). Affinities of the three new species are discussed. All but H. armarium are recorded from the Japanese Exclusive Economic Zone for the first time. Brief notes on ecology and/or behavior are given for each species.
Non-Invasive Detection of Anaemia Using Digital Photographs of the Conjunctiva.
Collings, Shaun; Thompson, Oliver; Hirst, Evan; Goossens, Louise; George, Anup; Weinkove, Robert
2016-01-01
Anaemia is a major health burden worldwide. Although the finding of conjunctival pallor on clinical examination is associated with anaemia, inter-observer variability is high, and definitive diagnosis of anaemia requires a blood sample. We aimed to detect anaemia by quantifying conjunctival pallor using digital photographs taken with a consumer camera and a popular smartphone. Our goal was to develop a non-invasive screening test for anaemia. The conjunctivae of haemato-oncology in- and outpatients were photographed in ambient lighting using a digital camera (Panasonic DMC-LX5) and the internal rear-facing camera of a smartphone (Apple iPhone 5S), alongside an in-frame calibration card. Following image calibration, the conjunctival erythema index (EI) was calculated and correlated with laboratory-measured haemoglobin concentration. Three clinicians independently evaluated each image for conjunctival pallor. Conjunctival EI was reproducible between images (average coefficient of variation 2.96%). The EI of the palpebral conjunctiva correlated more strongly with haemoglobin concentration than that of the forniceal conjunctiva. Using the compact camera, palpebral conjunctival EI had a sensitivity of 93% and 57% and a specificity of 78% and 83% for detection of anaemia (haemoglobin < 110 g/L) in the training and internal validation sets, respectively. Similar results were found using the iPhone camera, though the EI cut-off value differed. Conjunctival EI analysis compared favourably with clinician assessment, with a higher positive likelihood ratio for prediction of anaemia. The erythema index of the palpebral conjunctiva calculated from images taken with a compact camera or mobile phone correlates with haemoglobin and compares favourably with clinician assessment for prediction of anaemia. If confirmed in further series, this technique may be useful for non-invasive screening for anaemia.
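For illustration, a hedged sketch of an erythema-index computation follows. The abstract does not state the exact EI formula, so a common log-ratio definition is assumed here, and the screening cut-off shown is hypothetical (the paper derives device-specific cut-offs from its training set).

```python
import numpy as np

def erythema_index(rgb):
    """Erythema index of a conjunctival region from a calibrated RGB image.
    Assumes a common log-ratio definition, EI = 100 * log10(R/G); the
    paper's exact formula is not given in the abstract."""
    rgb = np.asarray(rgb, dtype=float) + 1e-6   # avoid log(0)
    red, green = rgb[..., 0], rgb[..., 1]
    return 100.0 * np.log10(red / green)

def flag_anaemia(ei_mean, cutoff=25.0):
    """Hypothetical screening rule: lower conjunctival redness suggests
    lower haemoglobin, so flag values at or below a trained cut-off."""
    return ei_mean <= cutoff
```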
Color Imaging management in film processing
NASA Astrophysics Data System (ADS)
Tremeau, Alain; Konik, Hubert; Colantoni, Philippe
2003-12-01
The latest research projects in the LIGIV laboratory concern the capture, processing, archiving and display of color images, considering the trichromatic nature of the Human Visual System (HVS). One of these projects addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Regions of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires, firstly, the definition of a reference color space and of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but additionally need to consider mesopic viewing conditions.
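As a minimal sketch of the bi-directional device transforms described above, the snippet below uses a purely linear 3x3 model taking device RGB into a reference space and back. Real ICC-style characterization adds tone curves and gamut mapping; the matrix shown is the standard sRGB-to-XYZ matrix, standing in for a measured camera matrix.

```python
import numpy as np

# Stand-in 3x3 matrix taking linear RGB to CIE XYZ; in practice this
# comes from the device's characterization / ICC profile. The values
# below are the standard sRGB (D65) primaries, used only for illustration.
CAM_RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def to_reference(rgb_linear):
    """Forward transform: linear device RGB -> reference XYZ."""
    return np.asarray(rgb_linear) @ CAM_RGB_TO_XYZ.T

def to_display(xyz, display_matrix=CAM_RGB_TO_XYZ):
    """Backward transform: reference XYZ -> target display RGB."""
    return np.asarray(xyz) @ np.linalg.inv(display_matrix).T
```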
High definition infrared chemical imaging of colorectal tissue using a Spero QCL microscope.
Bird, B; Rowlette, J
2017-04-10
Mid-infrared microscopy has become a key technique in the field of biomedical science and spectroscopy. This label-free, non-destructive technique permits the visualisation of a wide range of intrinsic biochemical markers in tissues, cells and biofluids by detection of the vibrational modes of the constituent molecules. Together, infrared microscopy and chemometrics constitute a widely accepted method that can distinguish healthy and diseased states with high accuracy. However, despite the exponential growth of the field and its research worldwide, several barriers currently exist to its full translation into the clinical sphere, namely sample throughput and data management. The advent and incorporation of quantum cascade lasers (QCLs) into infrared microscopes could help propel the field over these remaining hurdles. Such systems offer several advantages over their FT-IR counterparts: a simpler instrument architecture, improved photon flux, use of room-temperature camera systems, and the flexibility of a tunable illumination source. In the current study we explore the use of a QCL infrared microscope to produce high-definition, high-throughput chemical images useful for the screening of biopsied colorectal tissue.
UrtheCast Second-Generation Earth Observation Sensors
NASA Astrophysics Data System (ADS)
Beckett, K.
2015-04-01
UrtheCast's Second-Generation state-of-the-art Earth Observation (EO) remote sensing platform will be hosted on the NASA segment of the International Space Station (ISS). This platform comprises a high-resolution dual-mode (pushbroom and video) optical camera and a dual-band (X and L) Synthetic Aperture RADAR (SAR) instrument. These new sensors will complement the first-generation medium-resolution pushbroom and high-definition video cameras that were mounted on the Russian segment of the ISS in early 2014. The new cameras are expected to be launched to the ISS in late 2017 via the Space Exploration Technologies Corporation Dragon spacecraft. The Canadarm will then be used to install the remote sensing platform onto a CBM (Common Berthing Mechanism) hatch on Node 3, allowing the sensor electronics to be accessible from the inside of the station, thus limiting their exposure to the space environment and allowing for future capability upgrades. The UrtheCast second-generation system will be able to take full advantage of the strengths that each of the individual sensors offers, such that the data exploitation capabilities of the combined sensors are significantly greater than those of either sensor alone. This represents a truly novel platform that will lead to significant advances in many Earth Observation applications such as environmental monitoring, energy and natural resources management, and humanitarian response, with data availability anticipated to begin after commissioning is completed in early 2018.
Ultra-high definition (8K UHD) endoscope: our first clinical success.
Yamashita, Hiromasa; Aoki, Hisae; Tanioka, Kenkichi; Mori, Toshiyuki; Chiba, Toshio
2016-01-01
We have started clinical application of 8K ultra-high definition (UHD; 7680 × 4320 pixels) imaging technology, which offers 16-fold higher resolution than the current 2K high-definition (HD; 1920 × 1080 pixels) technology, to an endoscope for advanced laparoscopic surgery. Based on preliminary testing experience and subsequent technical and system improvements, we proceeded to perform two cases of cholecystectomy and achieved clinical success with an 8K UHD endoscopic system, which consisted of an 8K camera, a 30-degree angled rigid endoscope with a lens adapter, a pair of 300-W xenon light sources, an 85-inch 8K LCD and an 8K video recorder. These experimental and clinical studies revealed the engineering and clinical feasibility of the 8K UHD endoscope, giving us a positive outlook on its prospective use in clinical practice. 8K UHD endoscopy promises to open up new possibilities for intricate procedures, including anastomoses of thin nerves and blood vessels as well as more confident surgical resections of a diversity of cancer tissues. Compared to the current 2K technology, 8K endoscopic imaging is very likely to lead to major changes in the future of medical practice.
Geometrical distortion calibration of the stereo camera for the BepiColombo mission to Mercury
NASA Astrophysics Data System (ADS)
Simioni, Emanuele; Da Deppo, Vania; Re, Cristina; Naletto, Giampiero; Martellato, Elena; Borrelli, Donato; Dami, Michele; Aroldi, Gianluca; Ficai Veltroni, Iacopo; Cremonese, Gabriele
2016-07-01
The ESA-JAXA mission BepiColombo, to be launched in 2018, is devoted to the observation of Mercury, the innermost planet of the Solar System. SIMBIOSYS is its remote sensing suite, which consists of three instruments: the High Resolution Imaging Channel (HRIC), the Visible and Infrared Hyperspectral Imager (VIHI), and the Stereo Imaging Channel (STC). The latter will provide the global three-dimensional reconstruction of the Mercury surface, and it represents the first push-frame stereo camera on board a space satellite. Based on a new telescope design, STC combines the advantages of a compact single-detector camera with the convenience of a double-direction acquisition system; this solution minimizes mass and volume while performing push-frame imaging acquisition. The shared camera sensor is divided into six portions: four are covered with suitable filters; the other two, one looking forward and one backwards with respect to the nadir direction, are covered with a panchromatic filter, supplying stereo image pairs of the planet surface. The main STC scientific requirements are to reconstruct the Mercury surface in 3D with a vertical accuracy better than 80 m and to perform global imaging with a grid size of 65 m along-track at the periherm. The scope of this work is to present the on-ground geometric calibration pipeline for this original instrument. The selected STC off-axis configuration forced the development of a new distortion map model. Additional considerations concern the detector, a Si-PIN hybrid CMOS, which is characterized by a high fixed-pattern noise. This had a great impact on the pre-calibration phases, compelling the use of an uncommon approach to the definition of the spot centroids in the distortion calibration process. This work presents the results obtained during the calibration of STC concerning the distortion analysis at three different temperatures. These results are then used to define the corresponding distortion model of the camera.
N'Gom, Moussa; Lien, Miao-Bin; Estakhri, Nooshin M; Norris, Theodore B; Michielssen, Eric; Nadakuditi, Raj Rao
2017-05-31
Complex Semi-Definite Programming (SDP) is introduced as a novel approach to phase retrieval enabled control of monochromatic light transmission through highly scattering media. In a simple optical setup, a spatial light modulator is used to generate a random sequence of phase-modulated wavefronts, and the resulting intensity speckle patterns in the transmitted light are acquired on a camera. The SDP algorithm allows computation of the complex transmission matrix of the system from this sequence of intensity-only measurements, without need for a reference beam. Once the transmission matrix is determined, optimal wavefronts are computed that focus the incident beam to any position or sequence of positions on the far side of the scattering medium, without the need for any subsequent measurements or wavefront shaping iterations. The number of measurements required and the degree of enhancement of the intensity at focus is determined by the number of pixels controlled by the spatial light modulator.
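Once the transmission matrix is in hand, the focusing step described above reduces to phase conjugation of the relevant matrix row. The sketch below illustrates this with a random stand-in matrix (the paper recovers the real matrix via SDP from intensity-only speckle data); the names, sizes and the phase-only SLM assumption are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_slm, n_cam = 256, 1024                  # SLM pixels, camera pixels

# Stand-in random complex transmission matrix; in the paper this is what
# the SDP phase-retrieval step recovers without a reference beam.
T = (rng.normal(size=(n_cam, n_slm))
     + 1j * rng.normal(size=(n_cam, n_slm))) / np.sqrt(2 * n_slm)

def focusing_wavefront(T, target):
    """Phase-only SLM pattern that focuses on camera pixel `target`:
    the phase conjugate of the corresponding row of T."""
    return np.exp(-1j * np.angle(T[target]))

target = 123
x = focusing_wavefront(T, target)
out = np.abs(T @ x) ** 2
print(out[target] / out.mean())           # enhancement ~ (pi/4) * n_slm
```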
Distortion definition and correction in off-axis systems
NASA Astrophysics Data System (ADS)
Da Deppo, Vania; Simioni, Emanuele; Naletto, Giampiero; Cremonese, Gabriele
2015-09-01
Off-axis optical configurations are becoming more and more widely used in a variety of applications; in particular, they are the preferred solution for cameras devoted to the study of Solar System planets and small bodies (i.e. asteroids and comets). Off-axis designs, being devoid of central obstruction, are able to guarantee better PSF and MTF performance, and thus higher-contrast imaging capabilities, with respect to classical on-axis designs. In particular, they are suitable for observing extended targets with intrinsically low-contrast features, or scenes with a high dynamic signal range. Classical distortion theory describes the performance of on-axis systems well, but it has to be adapted for the off-axis case. A proper way to deal with the off-axis distortion definition is thus needed, together with dedicated techniques to accurately measure and hence remove the distortion effects present in the acquired images. In this paper, a review of the distortion definition for off-axis systems is given. In particular, the method adopted by the authors to deal with distortion-related issues (definition, measurement, removal) in some off-axis instruments is described in detail.
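One common way to realize such a measure-and-remove pipeline is to fit a 2-D polynomial map between nominal and measured spot positions extracted from calibration images. The sketch below is a generic least-squares version under that assumption, not the specific model developed in the paper; the polynomial order is an illustrative choice.

```python
import numpy as np

def fit_distortion_map(xy_nominal, xy_measured, order=3):
    """Least-squares fit of a 2-D polynomial mapping nominal (undistorted)
    focal-plane coordinates to measured (distorted) ones. Returns one
    coefficient column per output axis. Illustrative, not the STC model."""
    x, y = xy_nominal.T
    # Design matrix with all monomials x^i * y^j, i + j <= order
    terms = [x**i * y**j for i in range(order + 1)
             for j in range(order + 1 - i)]
    A = np.stack(terms, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, xy_measured, rcond=None)
    residuals = xy_measured - A @ coeffs
    return coeffs, residuals

# Usage: xy_nominal and xy_measured are (N, 2) arrays of spot centroids
# from calibration frames; the rms of `residuals` gauges map accuracy.
```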
The Wide Field Imager instrument for Athena
NASA Astrophysics Data System (ADS)
Meidinger, Norbert; Barbera, Marco; Emberger, Valentin; Fürmetz, Maria; Manhart, Markus; Müller-Seidlitz, Johannes; Nandra, Kirpal; Plattner, Markus; Rau, Arne; Treberspurg, Wolfgang
2017-08-01
ESA's next large X-ray mission ATHENA is designed to address the Cosmic Vision science theme 'The Hot and Energetic Universe'. It will provide answers to two key astrophysical questions: how does ordinary matter assemble into the large-scale structures we see today, and how do black holes grow and shape the Universe. The ATHENA spacecraft will be equipped with two focal-plane cameras, a Wide Field Imager (WFI) and an X-ray Integral Field Unit (X-IFU). The WFI instrument is optimized for state-of-the-art resolution spectroscopy over a large field of view of 40 amin x 40 amin and high count rates up to and beyond 1 Crab source intensity. The cryogenic X-IFU camera is designed for high-spectral-resolution imaging. The two cameras alternately share a mirror system based on silicon pore optics with a focal length of 12 m and a large effective area of about 2 m2 at an energy of 1 keV. Although the mission is still in phase A, i.e. studying the feasibility and developing the necessary technology, the definition and development of the instrumentation have already made significant progress. The WFI focal-plane camera described here covers the energy band from 0.2 keV to 15 keV with 450 μm thick, fully depleted, back-illuminated silicon active pixel sensors of DEPFET type. The spatial resolution will be provided by one million pixels, each with a size of 130 μm x 130 μm. The time resolution requirement is 5 ms for the WFI large detector array and 80 μs for the WFI fast detector. The large effective area of the mirror system will be complemented by a high quantum efficiency, above 90% for medium and higher energies. The status of the various WFI subsystems needed to achieve this performance is described and recent changes are explained here.
The Endockscope Using Next Generation Smartphones: "A Global Opportunity".
Tse, Christina; Patel, Roshan M; Yoon, Renai; Okhunov, Zhamshid; Landman, Jaime; Clayman, Ralph V
2018-06-02
The Endockscope combines a smartphone, a battery-powered flashlight and a fiberoptic cystoscope, allowing mobile videocystoscopy. We compared conventional videocystoscopy to the Endockscope paired with next-generation smartphones in an ex-vivo porcine bladder model to evaluate its image quality. The Endockscope consists of a three-dimensional (3D) printed attachment that connects a smartphone to a flexible fiberoptic cystoscope, plus a 1000-lumen light-emitting diode (LED) cordless light source. Video recordings of porcine cystoscopy with a fiberoptic flexible cystoscope (Storz) were captured for each mobile device (iPhone 6, iPhone 6S, iPhone 7, Samsung S8, and Google Pixel) and for the high-definition H3-Z versatile camera (HD) set-up with both the LED light source and the xenon light (XL) source. Eleven faculty urologists, blinded to the modality used, evaluated each video for image quality/resolution, brightness, color quality, sharpness, overall quality, and acceptability for diagnostic use. When comparing the Endockscope coupled to a Galaxy S8, iPhone 7, or iPhone 6S with the LED portable light source to the HD camera with XL, there were no statistically significant differences in any metric. 82% and 55% of evaluators considered the iPhone 7 + LED light source and the iPhone 6S + LED light source, respectively, appropriate for diagnostic purposes, compared to 100% who considered the HD camera with XL appropriate. The iPhone 6 and Google Pixel coupled with the LED source were both inferior to the HD camera with XL in all metrics. The Endockscope system with a LED light source coupled with either an iPhone 7 or Samsung S8 (total cost: $750) is comparable to conventional videocystoscopy with a standard camera and XL light source (total cost: $45,000).
NASA Astrophysics Data System (ADS)
Bell, J. F.; Godber, A.; McNair, S.; Caplinger, M. A.; Maki, J. N.; Lemmon, M. T.; Van Beek, J.; Malin, M. C.; Wellington, D.; Kinch, K. M.; Madsen, M. B.; Hardgrove, C.; Ravine, M. A.; Jensen, E.; Harker, D.; Anderson, R. B.; Herkenhoff, K. E.; Morris, R. V.; Cisneros, E.; Deen, R. G.
2017-07-01
The NASA Curiosity rover Mast Camera (Mastcam) system is a pair of fixed-focal length, multispectral, color CCD imagers mounted 2 m above the surface on the rover's remote sensing mast, along with associated electronics and an onboard calibration target. The left Mastcam (M-34) has a 34 mm focal length, an instantaneous field of view (IFOV) of 0.22 mrad, and a FOV of 20° × 15° over the full 1648 × 1200 pixel span of its Kodak KAI-2020 CCD. The right Mastcam (M-100) has a 100 mm focal length, an IFOV of 0.074 mrad, and a FOV of 6.8° × 5.1° using the same detector. The cameras are separated by 24.2 cm on the mast, allowing stereo images to be obtained at the resolution of the M-34 camera. Each camera has an eight-position filter wheel, enabling it to take Bayer pattern red, green, and blue (RGB) "true color" images, multispectral images in nine additional bands spanning 400-1100 nm, and images of the Sun in two colors through neutral density-coated filters. An associated Digital Electronics Assembly provides command and data interfaces to the rover, 8 Gb of image storage per camera, 11 bit to 8 bit companding, JPEG compression, and acquisition of high-definition video. Here we describe the preflight and in-flight calibration of Mastcam images, the ways that they are being archived in the NASA Planetary Data System, and the ways that calibration refinements are being developed as the investigation progresses on Mars. We also provide some examples of data sets and analyses that help to validate the accuracy and precision of the calibration.
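The quoted IFOV and FOV values follow directly from the focal lengths and the detector geometry. The quick check below assumes the KAI-2020's 7.4 μm pixel pitch (from the sensor's datasheet, not stated in the abstract) and reproduces the quoted numbers to within rounding (the M-100 horizontal FOV computes to ≈7.0° versus the quoted 6.8°).

```python
import numpy as np

PIXEL_PITCH = 7.4e-6     # KAI-2020 pixel size in metres (datasheet value)
NX, NY = 1648, 1200      # active pixel span quoted in the abstract

def ifov_mrad(focal_length_m):
    """Instantaneous field of view of one pixel, in milliradians."""
    return PIXEL_PITCH / focal_length_m * 1e3

def fov_deg(focal_length_m):
    """Full field of view across the detector (small-angle approximation)."""
    return (np.degrees(NX * PIXEL_PITCH / focal_length_m),
            np.degrees(NY * PIXEL_PITCH / focal_length_m))

print(ifov_mrad(0.034), fov_deg(0.034))   # ~0.22 mrad, ~20.5 x 15.0 deg
print(ifov_mrad(0.100), fov_deg(0.100))   # ~0.074 mrad, ~7.0 x 5.1 deg
```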
Expedition 48/49 crew visit to MSFC
2017-04-06
NASA astronaut Kate Rubins presents highlights from Expedition 48/49, her mission to the International Space Station, to team members and Space Camp students from the U.S. Space & Rocket Center in Huntsville, April 6 at NASA's Marshall Space Flight Center. During her mission, Rubins became the first person to sequence DNA in space, conducting research supporting technology development for human deep-space exploration as well as Earth and space science. She also conducted two spacewalks, in which she and NASA astronaut Jeff Williams installed an International Docking Adapter, performed maintenance on the station's external thermal control system, and installed high-definition cameras.
View of portion of Mediterranean Coast of Turkey and Syria
1975-07-20
AST-16-1268 (20 July 1975) --- A near vertical view of a portion of the Mediterranean coast of Turkey and Syria, as photographed from the Apollo spacecraft in Earth orbit during the joint U.S.-USSR Apollo-Soyuz Test Project mission. This view covers the Levant Coast north of Beirut, showing the cities of Aleppo, Hamah, Homs and Latakia. The Levantine rift bends to the northeast. This picture was taken with a 70mm Hasselblad camera using high-definition aerial Ektachrome SO-242 type film. The altitude of the spacecraft was 225 kilometers (140 statute miles) when this photograph was taken.
NASA Technical Reports Server (NTRS)
1976-01-01
Trade studies were conducted to ensure the overall feasibility of the focal-plane camera in a radial module. The primary variable in the trade studies was the location of the pickoff mirror, on-axis versus off-axis. The two alternatives were: (1) the standard (electromagnetic focus) SECO submodule, and (2) the MOD 15 permanent-magnet focus SECO submodule. The technical areas of concern were the packaging-affected parameters of thermal dissipation, focal-plane obscuration, and image quality.
Exact optics - III. Schwarzschild's spectrograph camera revised
NASA Astrophysics Data System (ADS)
Willstrop, R. V.
2004-03-01
Karl Schwarzschild identified a system of two mirrors, each defined by conic sections, free of third-order spherical aberration, coma and astigmatism, and with a flat focal surface. He considered it impractical, because the field was too restricted. This system was rediscovered as a quadratic approximation to one of Lynden-Bell's `exact optics' designs which have wider fields. Thus the `exact optics' version has a moderate but useful field, with excellent definition, suitable for a spectrograph camera. The mirrors are strongly aspheric in both the Schwarzschild design and the exact optics version.
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has been drastically improved in response to the current demand for high-quality digital images. For example, digital still cameras have several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution is incompatible with high frame rate in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
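A naive baseline for this kind of spatio-temporal fusion is sketched below: motion comes from the high-frame-rate stream and static detail from the nearest high-resolution keyframe. This only illustrates the idea of combining the two streams under an assumed integer scale factor; the paper's actual reconstruction and calibration are more sophisticated.

```python
import numpy as np

def upsample(img, factor):
    """Nearest-neighbour upsampling (stand-in for a proper interpolator)."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def fuse(lowres_hifps, hires_lofps, rate_ratio, scale):
    """Naive fusion baseline: per output frame, add the high-frequency
    detail of the nearest high-resolution keyframe to the upsampled
    high-frame-rate frame. Assumes exact integer scale and frame ratios."""
    out = []
    for t, lo in enumerate(lowres_hifps):
        key = hires_lofps[min(t // rate_ratio, len(hires_lofps) - 1)]
        lo_up = upsample(lo, scale)
        # High-frequency residue of the keyframe (detail the slow stream has
        # but the fast stream lacks):
        detail = key - upsample(key[::scale, ::scale], scale)
        out.append(lo_up + detail)
    return np.array(out)
```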
In memoriam: Fumio Okano, innovator of 3D display
NASA Astrophysics Data System (ADS)
Arai, Jun
2014-06-01
Dr. Fumio Okano, a well-known pioneer and innovator of three-dimensional (3D) displays, passed away on 26 November 2013 in Kanagawa, Japan, at the age of 61. Okano joined Japan Broadcasting Corporation (NHK) in Tokyo in 1978. In 1981, he began researching high-definition television (HDTV) cameras, HDTV systems, ultrahigh-definition television systems, and 3D televisions at NHK Science and Technology Research Laboratories. His publications have been frequently cited by other researchers. Okano served eight years as chair of the annual SPIE conference on Three-Dimensional Imaging, Visualization, and Display and another four years as co-chair. Okano's leadership in this field will be greatly missed and he will be remembered for his enduring contributions and innovations in the field of 3D displays. This paper is a summary of the career of Fumio Okano, as well as a tribute to that career and its lasting legacy.
The LUVOIR Large Mission Concept
NASA Astrophysics Data System (ADS)
O'Meara, John; LUVOIR Science and Technology Definition Team
2018-01-01
LUVOIR is one of four large mission concepts for which the NASA Astrophysics Division has commissioned studies by Science and Technology Definition Teams (STDTs) drawn from the astronomical community. We are currently developing two architectures: Architecture A with a 15.1 meter segmented primary mirror, and Architecture B with a 9.2 meter segmented primary mirror. Our focus in this presentation is the Architecture A LUVOIR. LUVOIR will operate at the Sun-Earth L2 point. It will be designed to support a broad range of astrophysics and exoplanet studies. The initial instruments developed for LUVOIR Architecture A include 1) a high-performance optical/NIR coronagraph with imaging and spectroscopic capability, 2) a UV imager and spectrograph with high spectral resolution and multi-object capability, 3) a high-definition wide-field optical/NIR camera, and 4) a high resolution UV/optical spectropolarimeter. LUVOIR will be designed for extreme stability to support unprecedented spatial resolution and coronagraphy. It is intended to be a long-lifetime facility that is both serviceable and upgradable, and that is primarily driven by guest observer science programs. In this presentation, we will describe the observatory, its instruments, and survey the transformative science LUVOIR can accomplish.
A 3-D mixed-reality system for stereoscopic visualization of medical dataset.
Ferrari, Vincenzo; Megali, Giuseppe; Troia, Elena; Pietrabissa, Andrea; Mosca, Franco
2009-11-01
We developed a simple, light, and cheap 3-D visualization device based on mixed reality that can be used by physicians to see preoperative radiological exams in a natural way. The system allows the user to see stereoscopic "augmented images," created by mixing 3-D virtual models of anatomies, obtained by processing preoperative volumetric radiological images (computed tomography or MRI), with live images of the patient grabbed by cameras. The interface of the system consists of a head-mounted display equipped with two high-definition cameras. The cameras are mounted in correspondence with the user's eyes and grab live images of the patient from the same point of view as the user. The system does not use any external tracker to detect movements of the user or the patient. Tracking of the user's head and alignment of the virtual patient with the real one are performed using machine-vision methods applied to pairs of live images. Experimental results, concerning frame rate and alignment precision between virtual and real patient, demonstrate that the machine-vision methods used for localization are appropriate for the specific application and that systems based on stereoscopic mixed reality are feasible and can be proficiently adopted in clinical practice.
Nishi, Ryuji; Cao, Meng; Kanaji, Atsuko; Nishida, Tomoki; Yoshida, Kiyokazu; Isakozawa, Shigeto
2014-11-01
The ultra-high voltage electron microscope (UHVEM) H-3000, with the world's highest acceleration voltage of 3 MV, can observe remarkable three-dimensional microstructures of microns-thick samples[1]. Acquiring a tilt series for electron tomography is laborious work, and thus an automatic technique is highly desired. We proposed the Auto-Focus system using image Sharpness (AFS)[2,3] for UHVEM tomography tilt-series acquisition. In this method, five images with different defocus values are first acquired and their image sharpness is calculated. The sharpness values are then fitted to a quasi-Gaussian function to determine the best focus value[3]. Defocused images acquired by the slow-scan CCD (SS-CCD) camera (Hitachi F486BK) are of high quality, but one minute is needed to acquire five defocused images. In this study, we introduce a high-definition video camera (HD video camera; Hamamatsu Photonics K.K. C9721S) for fast image acquisition[4]. It is an analog camera, but the camera image is captured by a PC with an effective resolution of 1280×1023 pixels. This resolution is lower than the 4096×4096 pixels of the SS-CCD camera; however, the HD video camera captures one image in only 1/30 second. In exchange for the faster acquisition, the S/N of the images is low. To improve the S/N, 22 captured frames are integrated so that the sharpness of each image is good enough to keep the fitting error low. As a countermeasure against the low resolution, we selected a large defocus step, typically five times the manual defocus step, to discriminate between the differently defocused images. By using the HD video camera for the autofocus process, the time consumed by each autofocus procedure was reduced to about six seconds. Correction of the image position took one second, so the total correction time was seven seconds, an order of magnitude shorter than with the SS-CCD camera. When we used the SS-CCD camera for the final image capture, recording one tilt image took 30 seconds, and a tilt series of 61 images could be obtained within 30 minutes. Accuracy and repeatability were good enough for practical use (Figure 1). We successfully reduced the total acquisition time of a tomography tilt series to half of what it was before. [Fig. 1. Objective-lens current change with tilt angle during acquisition of a tomography series (sample: a rat hepatocyte; thickness: 2 μm; magnification: 4k; acc. voltage: 2 MV). Tilt-angle range is ±60 degrees with a 2-degree step. Two series were acquired in the same area; both data sets were almost identical and the deviation was smaller than the minimum manual step, so the auto-focus worked well.] We also developed computer-aided three-dimensional (3D) visualization and analysis software for electron tomography, "HawkC", which can sectionalize 3D data semi-automatically[5,6]. If this auto-acquisition system is used with the IMOD reconstruction software[7] and the HawkC software, on-line UHVEM tomography will become possible. The system could assist pathology examination in the future. This work was supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan, under a Grant-in-Aid for Scientific Research (Grant No. 23560024, 23560786), and SENTAN, Japan Science and Technology Agency, Japan.
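The AFS idea — sample a few defocus values, score sharpness, fit a peaked curve — can be sketched compactly. Below, sharpness is a gradient-energy metric and the quasi-Gaussian fit is done as a parabola in log-sharpness; both are plausible stand-ins for the paper's exact definitions, which the abstract does not give.

```python
import numpy as np

def sharpness(img):
    """Gradient-energy sharpness measure (one common choice; the AFS
    paper defines its own metric)."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx**2 + gy**2))

def best_focus(defocus_values, images):
    """Fit log-sharpness vs defocus with a parabola (i.e. a Gaussian in
    sharpness) and return the estimated peak position."""
    s = np.log([sharpness(im) for im in images])
    a, b, c = np.polyfit(defocus_values, s, 2)   # s ~ a*d^2 + b*d + c
    if a >= 0:
        raise ValueError("no sharpness peak within the sampled range")
    return -b / (2 * a)                          # vertex of the parabola

# Usage: five images at defocus steps, e.g. [-2, -1, 0, 1, 2] * step,
# yield the interpolated best-focus lens setting.
```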
Highly integrated Pluto payload system (HIPPS): a sciencecraft instrument for the Pluto mission
NASA Astrophysics Data System (ADS)
Stern, S. Alan; Slater, David C.; Gibson, William; Reitsema, Harold J.; Delamere, W. Alan; Jennings, Donald E.; Reuter, D. C.; Clarke, John T.; Porco, Carolyn C.; Shoemaker, Eugene M.; Spencer, John R.
1995-09-01
We describe the design concept for the highly integrated Pluto payload system (HIPPS): a highly integrated, low-cost, light-weight, low-power instrument payload designed to fly aboard the proposed NASA Pluto flyby spacecraft destined for the Pluto/Charon system. The HIPPS payload is designed to accomplish all of the Pluto flyby prime (IA) science objectives, except radio science, set forth by NASA's Outer Planets Science Working Group (OPSWG) and the Pluto Express Science Definition Team (SDT). HIPPS contains a complement of three instrument components within one common infrastructure; these are: (1) a visible/near UV CCD imaging camera; (2) an infrared spectrograph; and (3) an ultraviolet spectrograph. A detailed description of each instrument is presented along with how they will meet the IA science requirements.
Static omnidirectional stereoscopic display system
NASA Astrophysics Data System (ADS)
Barton, George G.; Feldman, Sidney; Beckstead, Jeffrey A.
1999-11-01
This paper describes a unique three-camera stereoscopic omnidirectional viewing system based on the periscopic panoramic camera described in the 11/98 SPIE proceedings (AM13). The three panoramic cameras are combined equilaterally, so that each leg of the triangle approximates the human inter-ocular spacing, allowing each panoramic camera to view 240° of the panoramic scene: the most counter-clockwise 120° is the left-eye field and the other 120° segment is the right-eye field. Field definition may be by green/red filtration or by time discrimination of the video signal. In the first instance, two-color spectacles are used to view the display; in the second, LCD goggles are used to differentiate the right/left fields. Radially scanned vidicons or re-mapped CCDs may be used. The display consists of three vertically stacked 120° segments of the panoramic field of view, with two fields per frame: Field A is the left-eye display and Field B the right-eye display.
Vasu, Subith S.; Pryor, Owen; Barak, Samuel; ...
2017-03-12
Common definitions for ignition delay time are often hard to apply owing to bifurcation and other non-idealities that result from high levels of CO2 addition. Using high-speed camera imagery in comparison with more standard methods (e.g., pressure, emission, and laser absorption spectroscopy) to measure the ignition delay time, the effect of bifurcation has been examined in this study. Experiments were performed at pressures between 0.6 and 1.2 atm for temperatures between 1650 and 2040 K. The equivalence ratio for all experiments was kept at a constant value of 1, with methane as the fuel. The CO2 mole fraction was varied between XCO2 = 0.00 and 0.895. The ignition delay time was determined from three different measurements at the sidewall: broadband chemiluminescent emission captured via a photodetector, CH4 concentrations determined using a distributed feedback interband cascade laser centered at 3403.4 nm, and pressure recorded via a dynamic Kistler-type transducer. All methods for the ignition delay time were compared to high-speed camera images taken of the axial cross-section during combustion. Methane time-histories and the methane decay times were also measured using the laser. It was determined that the flame could be correlated to the ignition delay time measured at the side wall, but that the flame as captured by the camera was not homogeneous, as assumed in typical shock-tube experiments. The bifurcation of the shock wave resulted in smaller flames with large boundary layers, and the flame could be as small as 30% of the cross-sectional area of the shock tube at the highest levels of CO2 dilution. Comparisons between the camera images and the different ignition delay time methods show that care must be taken in interpreting traditional ignition delay data for experiments with large bifurcation effects, as different methods of measuring the ignition delay time could result in different interpretations of kinetic mechanisms and impede the development of future mechanisms.
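One traditional sidewall definition the study compares against is the steepest-rise tangent method applied to an emission or pressure trace. A generic sketch of that definition follows; the helper name and baseline heuristic are illustrative, not the authors' analysis code.

```python
import numpy as np

def ignition_delay(t, signal):
    """Ignition delay from a sidewall emission or pressure trace:
    intersect the steepest-rise tangent with the pre-ignition baseline
    (one of several common definitions the paper compares)."""
    ds = np.gradient(signal, t)
    k = int(np.argmax(ds))                        # steepest rise
    baseline = float(np.median(signal[: max(k // 4, 1)]))
    # Tangent at the steepest point: y = signal[k] + ds[k] * (x - t[k]);
    # solve for where it crosses the baseline level.
    return t[k] + (baseline - signal[k]) / ds[k]

# With time zero taken at reflected-shock arrival, the returned value is
# the ignition delay time directly.
```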
NASA Astrophysics Data System (ADS)
Ramirez, C. J.; Mora-Amador, R. A., Sr.; Alpizar Segura, Y.; González, G.
2015-12-01
Volcano monitoring has been an expanding field over the past decades. One of the emerging techniques that involve new technology is digital video surveillance, together with the automated software that comes with it. Given the budget and some on-site facilities, it is now possible to set up a real-time network of high-definition video cameras, some even with special features such as infrared, thermal, or ultraviolet. These can ease (or complicate) the analysis of volcanic phenomena such as lava eruptions, phreatic eruptions, plume speed, lava flows, and closed/open vents, to mention just a few of the many applications of these cameras. We present the methodology of the installation at Poás volcano of a real-time system for processing and storing HD and thermal images and video, as well as the process of acquiring and installing the HD and IR cameras, towers, solar panels and radios needed to transmit the data from a volcano located in the tropics, plus which volcanic areas are our targets and why. We also describe the hardware and software we consider necessary to carry out the project. Finally, we show some early data examples: upwelling areas in the Poás volcano hyperacidic lake and their relation to lake phreatic eruptions; increasing temperature on an old dome wall preceding sudden wall explosions; and the use of IR video for measuring plume speed and contour for use in combination with DOAS or FTIR measurements.
Real-time full-motion color Flash lidar for target detection and identification
NASA Astrophysics Data System (ADS)
Nelson, Roy; Coppock, Eric; Craig, Rex; Craner, Jeremy; Nicks, Dennis; von Niederhausern, Kurt
2015-05-01
Greatly improved understanding of areas and objects of interest can be gained when real-time, full-motion Flash LiDAR is fused with inertial navigation data and multi-spectral context imagery. On its own, full-motion Flash LiDAR provides the opportunity to exploit the z dimension for improved intelligence vs. 2-D full-motion video (FMV). The intelligence value of this data is enhanced when it is combined with inertial navigation data to produce an extended, georegistered data set suitable for a variety of analyses. Further, when fused with multispectral context imagery the typical point cloud now becomes a rich 3-D scene which is intuitively obvious to the user and allows rapid cognitive analysis with little or no training. Ball Aerospace has developed and demonstrated a real-time, full-motion LiDAR system that fuses context imagery (VIS to MWIR demonstrated) and inertial navigation data in real time, and can stream these information-rich geolocated/fused 3-D scenes from an airborne platform. In addition, since the higher-resolution context camera is boresighted and frame-synchronized to the LiDAR camera and the LiDAR camera is an array sensor, techniques have been developed to rapidly interpolate the LiDAR pixel values, creating a point cloud that has the same resolution as the context camera, effectively creating a high definition (HD) LiDAR image. This paper presents a design overview of the Ball TotalSight™ LiDAR system along with typical results over urban and rural areas collected from both rotary and fixed-wing aircraft. We conclude with a discussion of future work.
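Because the context camera is boresighted and frame-synchronized to the LiDAR array, producing an "HD LiDAR" frame amounts to resampling the sparse depth onto the context camera's pixel grid. The sketch below does this with simple linear interpolation as a stand-in for the system's real-time method; the function name and the use of SciPy are assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

def upsample_depth(depth_lidar, hd_shape):
    """Interpolate a low-resolution Flash-LiDAR depth frame onto the
    boresighted context camera's pixel grid (simple linear stand-in for
    the system's real-time interpolation)."""
    h, w = depth_lidar.shape
    H, W = hd_shape
    # Place LiDAR samples at their corresponding HD pixel coordinates.
    src_y, src_x = np.mgrid[0:h, 0:w]
    pts = np.column_stack([src_y.ravel() * (H - 1) / (h - 1),
                           src_x.ravel() * (W - 1) / (w - 1)])
    dst_y, dst_x = np.mgrid[0:H, 0:W]
    return griddata(pts, depth_lidar.ravel(), (dst_y, dst_x),
                    method="linear")
```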
NASA Astrophysics Data System (ADS)
Schultz, C. J.; Lang, T. J.; Leake, S.; Runco, M.; Blakeslee, R. J.
2017-12-01
Video and still-frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration. Camera images from the crew cameras and high-definition video from the Chiba University Meteor Camera were combined with lightning data from the National Lightning Detection Network (NLDN), the ISS Lightning Imaging Sensor (ISS-LIS), the Geostationary Lightning Mapper (GLM) and lightning mapping arrays. These cameras provide significant spatial-resolution advantages (~10 times or better) over ISS-LIS and GLM, but with lower temporal resolution; therefore, they can serve as a complementary analysis tool for studying lightning and thunderstorm processes from space. Lightning sensor data, Visible Infrared Imaging Radiometer Suite (VIIRS) derived city-light maps, and other geographic databases were combined with the ISS attitude and position data to reverse-geolocate each image or frame. An open-source Python toolkit has been developed to assist with this effort. Next, the locations and sizes of all flashes in each frame or image were computed and compared with flash characteristics from all available lightning datasets. This allowed for characterization of cloud features that are below the 4-km and 8-km resolution of ISS-LIS and GLM, which may reduce the light that reaches the ISS-LIS or GLM sensor. In the case of video, consecutive frames were overlaid to determine the rate of change of the light escaping cloud top. Characterization of the rate of change in geometry, more generally the radius, of light escaping cloud top was integrated with the NLDN, ISS-LIS and GLM to understand how the peak rate of change and the peak area of each flash aligned with each lightning system in time. Flash features like leaders could be inferred from the video frames as well. Testing is being done to see whether leader speeds can be accurately calculated under certain circumstances.
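The frame-overlay analysis described above can be reduced to a per-frame thresholded-area computation and its time derivative. The sketch below is a simplified stand-in (the real analysis works on georeferenced frames, so pixel counts map to physical areas); the function name and threshold are assumed inputs.

```python
import numpy as np

def flash_growth_rate(frames, times, threshold):
    """Per-frame area of light escaping cloud top (pixels above a
    brightness threshold) and its rate of change over time; a simplified
    stand-in for the overlay analysis of consecutive video frames."""
    areas = np.array([(f > threshold).sum() for f in frames], dtype=float)
    return areas, np.gradient(areas, times)
```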
The Large Ultraviolet/Optical/Infrared Surveyor (LUVOIR)
NASA Astrophysics Data System (ADS)
Peterson, Bradley M.; Fischer, Debra; LUVOIR Science and Technology Definition Team
2017-01-01
LUVOIR is one of four potential large mission concepts for which the NASA Astrophysics Division has commissioned studies by Science and Technology Definition Teams (STDTs) drawn from the astronomical community. LUVOIR will have an 8- to 16-m segmented primary mirror and operate at the Sun-Earth L2 point. It will be designed to support a broad range of astrophysics and exoplanet studies. The notional initial complement of instruments will include 1) a high-performance optical/NIR coronagraph with imaging and spectroscopic capability, 2) a UV imager and spectrograph with high spectral resolution and multi-object capability, 3) a high-definition wide-field optical/NIR camera, and 4) a multi-resolution optical/NIR spectrograph. LUVOIR will be designed for extreme stability to support unprecedented spatial resolution and coronagraphy. It is intended to be a long-lifetime facility that is both serviceable and upgradable. This is the first report by the LUVOIR STDT to the community on the top-level architectures we are studying, including preliminary capabilities of a mission with those parameters. The STDT seeks feedback from the astronomical community on key science investigations that can be undertaken with the notional instrument suite and on desirable capabilities that would enable additional key science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Evan; Goodale, Wing; Burns, Steve
There is a critical need to develop monitoring tools to track aerofauna (birds and bats) in three dimensions around wind turbines. New monitoring systems will reduce permitting uncertainty by increasing the understanding of how birds and bats are interacting with wind turbines, which will improve the accuracy of impact predictions. Biodiversity Research Institute (BRI), The University of Maine Orono School of Computing and Information Science (UMaine SCIS), HiDef Aerial Surveying Limited (HiDef), and SunEdison, Inc. (formerly First Wind) responded to this need by using stereo-optic cameras with near-infrared (nIR) technology to investigate new methods for documenting aerofauna behavior around wind turbines. The stereo-optic camera system used two synchronized high-definition video cameras with fisheye lenses and processing software that detected moving objects, which could be identified in post-processing. The stereo-optic imaging system offered the ability to extract 3-D position information from pairs of images captured from different viewpoints. Fisheye lenses allowed for a greater field of view, but required more complex image rectification to contend with fisheye distortion. The ability to obtain 3-D positions provided crucial data on the trajectory (speed and direction) of a target, which, when the technology is fully developed, will provide data on how animals are responding to and interacting with wind turbines. This project was focused on testing the performance of the camera system, improving video-review processing time, advancing the 3-D tracking technology, and moving the system from Technology Readiness Level 4 to 5. To achieve these objectives, we determined the size and distance at which aerofauna (particularly eagles) could be detected and identified, created efficient data management systems, improved the video post-processing viewer, and attempted refinement of 3-D modeling with respect to fisheye lenses. The 29-megapixel camera system successfully captured 16,173 five-minute video segments in the field. During nighttime field trials using nIR, we found that bat-sized objects could not be detected more than 60 m from the camera system. This led to a decision to focus research efforts exclusively on daytime monitoring and to redirect resources towards improving the video post-processing viewer. We redesigned the bird-event post-processing viewer, which substantially decreased the review time necessary to detect and identify flying objects. During daytime field trials, we determined that eagles could be detected up to 500 m away using the fisheye wide-angle lenses, and eagle-sized targets could be identified to species within 350 m of the camera system. We used distance-sampling survey methods to describe the probability of detecting and identifying eagles and other aerofauna as a function of distance from the system. The previously developed 3-D algorithm for object isolation and tracking was tested, but the image rectification (flattening) required to obtain accurate distance measurements with fisheye lenses was determined to be insufficient for distant eagles. We used MATLAB and OpenCV to improve fisheye lens rectification towards the center of the image, but accurate measurements towards the image corners could not be achieved. We believe that changing the fisheye lens to a rectilinear lens would greatly improve position estimation, but doing so would result in a decrease in viewing angle and depth of field.
Finally, we generated simplified shape profiles of birds to look for similarities between unknown animals and known species. With further development, this method could provide a mechanism for filtering large numbers of shapes to reduce data storage and processing. These advancements further refined the camera system and brought this new technology closer to market. Once commercialized, the stereo-optic camera system technology could be used to: a) research how different species interact with wind turbines in order to refine collision risk models and inform mitigation solutions; and b) monitor aerofauna interactions with terrestrial and offshore wind farms, replacing costly human observers and allowing for long-term monitoring in the offshore environment. The camera system will provide developers and regulators with data on the risk that wind turbines present to aerofauna, which will reduce uncertainty in the environmental permitting process.
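The core 3-D step — disparity to position — is standard pinhole-stereo triangulation once the fisheye imagery has been rectified. A minimal sketch under that assumption follows; the function name and parameters are illustrative, not the project's MATLAB/OpenCV pipeline.

```python
def stereo_point(xl, yl, xr, focal_px, baseline_m):
    """Pinhole-stereo triangulation for a rectified image pair: disparity
    to a 3-D point in the left-camera frame. The project's fisheye system
    requires rectification first; this assumes already-rectified images."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at/behind infinity")
    z = focal_px * baseline_m / disparity        # depth, metres
    return (xl * z / focal_px, yl * z / focal_px, z)

# Tracking the same bird across synchronized frame pairs then yields speed
# and direction from successive 3-D positions.
```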
Operator vision aids for space teleoperation assembly and servicing
NASA Technical Reports Server (NTRS)
Brooks, Thurston L.; Ince, Ilhan; Lee, Greg
1992-01-01
This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to definitively specify appropriate requirements. This paper presents these visual aids in three general categories: camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions, concepts are discussed for: (1) automatic end-effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer-assisted camera/lighting placement; and (5) voice control. In the technology area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation and location are discussed.
NASA Astrophysics Data System (ADS)
Javh, Jaka; Slavič, Janko; Boltežar, Miha
2018-02-01
Instantaneous full-field displacement fields can be measured using cameras; in fact, using high-speed cameras, full-field spectral information up to a couple of kHz can be measured. The trouble is that high-speed cameras capable of capturing high-resolution fields of view at high frame rates are very expensive (from tens to hundreds of thousands of euros per camera). This paper introduces a measurement set-up capable of measuring high-frequency vibrations using slow cameras such as DSLRs, mirrorless cameras and others. The high-frequency displacements are measured by harmonically blinking the lights at specified frequencies. This harmonic blinking of the lights modulates the intensity changes of the filmed scene, and the camera's image acquisition performs the integration over time, thereby producing full-field Fourier coefficients of the filmed structure's displacements.
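The measurement principle — a slow exposure integrating the scene against a harmonically blinking light acts as a lock-in detector at the blink frequency — can be verified numerically. The simulation below is a hedged one-pixel sketch of that idea, not the authors' set-up; the amplitudes, frequencies and blink waveforms are assumptions.

```python
import numpy as np

f_vib = 230.0                          # vibration frequency of interest, Hz
T_exp = 1.0                            # camera exposure (integer cycles), s
t = np.linspace(0.0, T_exp, 200_000)   # fine time grid for the simulation

# One pixel's intensity, modulated by the vibration (amplitude 0.05,
# phase 0.7 rad), and two blinking-light waveforms in quadrature.
s = 1.0 + 0.05 * np.cos(2 * np.pi * f_vib * t + 0.7)
lamp_c = 0.5 * (1 + np.cos(2 * np.pi * f_vib * t))
lamp_s = 0.5 * (1 + np.sin(2 * np.pi * f_vib * t))

# Each camera frame integrates scene x light over the exposure.
frame_c = np.trapz(s * lamp_c, t)
frame_s = np.trapz(s * lamp_s, t)
frame_0 = np.trapz(s * 0.5, t)         # reference frame, constant light

# Subtract the DC frame and recover amplitude/phase at f_vib.
re, im = frame_c - frame_0, frame_s - frame_0
amp = 4 * np.hypot(re, im) / T_exp     # ~0.05
phase = np.arctan2(-im, re)            # ~0.7 rad
print(amp, phase)
```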
The NACA High-Speed Motion-Picture Camera Optical Compensation at 40,000 Photographs Per Second
NASA Technical Reports Server (NTRS)
Miller, Cearcy D
1946-01-01
The principle of operation of the NACA high-speed camera is completely explained. This camera, operating at the rate of 40,000 photographs per second, took the photographs presented in numerous NACA reports concerning combustion, preignition, and knock in the spark-ignition engine. Many design details are presented and discussed; details of an entirely conventional nature are omitted. The inherent aberrations of the camera are discussed and partly evaluated. The focal-plane-shutter effect of the camera is explained. Photographs of the camera are presented. Some high-speed motion pictures of familiar objects -- a photoflash bulb, firecrackers, a camera shutter -- are reproduced as an illustration of the quality of the photographs taken by the camera.
Automated assembly of camera modules using active alignment with up to six degrees of freedom
NASA Astrophysics Data System (ADS)
Bräuniger, K.; Stickler, D.; Winters, D.; Volmer, C.; Jahn, M.; Krey, S.
2014-03-01
With the upcoming Ultra High Definition (UHD) cameras, the accurate alignment of optical systems with respect to the UHD image sensor becomes increasingly important. Even with a perfect objective lens, the image quality will deteriorate when the lens is poorly aligned to the sensor. The Modulation Transfer Function (MTF), the most widely accepted test of imaging quality, is used for the evaluation. The first part describes how the alignment errors that lead to low imaging quality can be measured. Collimators with crosshairs at defined field positions, or a test chart, are used as object generators for infinite-finite or finite-finite conjugation, respectively. The process of accurately aligning the image sensor to the optical system is then described. The focus position, shift, tilt, and rotation of the image sensor are automatically corrected to obtain an optimized MTF for all field positions, including the center. The software algorithm to grab images, calculate the MTF, and adjust the image sensor in six degrees of freedom within less than 30 seconds per UHD camera module is described. The resulting accuracy of the image sensor rotation is better than 2 arcmin, and the position alignment accuracy in x, y, z is better than 2 μm. Finally, the process of gluing and UV-curing, and how it is managed in the integrated process, is described.
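For context, the MTF figure that such an alignment loop optimizes can be estimated from an edge image: differentiate the edge-spread function to obtain the line-spread function, then take its Fourier magnitude. The sketch below is a generic version of that measurement, not the paper's algorithm; the pixel pitch and blur width are made-up values.

```python
import numpy as np
from scipy.special import erf

pixel_pitch_um = 1.4                       # assumed sensor pixel pitch [um]
x = np.arange(256)

# Synthetic edge-spread function: an ideal edge blurred by the lens.
sigma_px = 1.2                             # assumed blur at this field position
esf = 0.5 * (1.0 + erf((x - 128) / (sigma_px * np.sqrt(2))))

lsf = np.gradient(esf)                     # line-spread function
lsf /= lsf.sum()                           # normalize so MTF(0) = 1
mtf = np.abs(np.fft.rfft(lsf))             # modulation transfer function
freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_um)   # cycles per micron

mtf50 = freqs[np.argmax(mtf < 0.5)]        # frequency where contrast falls to 50%
print(f"MTF50 ~ {mtf50:.3f} cycles/um")
```

An alignment loop would evaluate such MTF values at several field positions and drive the six-axis stage until all of them are simultaneously maximized.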
Bell, James F.; Godber, A.; McNair, S.; Caplinger, M.A.; Maki, J.N.; Lemmon, M.T.; Van Beek, J.; Malin, M.C.; Wellington, D.; Kinch, K.M.; Madsen, M.B.; Hardgrove, C.; Ravine, M.A.; Jensen, E.; Harker, D.; Anderson, Ryan; Herkenhoff, Kenneth E.; Morris, R.V.; Cisneros, E.; Deen, R.G.
2017-01-01
The NASA Curiosity rover Mast Camera (Mastcam) system is a pair of fixed-focal length, multispectral, color CCD imagers mounted ~2 m above the surface on the rover's remote sensing mast, along with associated electronics and an onboard calibration target. The left Mastcam (M-34) has a 34 mm focal length, an instantaneous field of view (IFOV) of 0.22 mrad, and a FOV of 20° × 15° over the full 1648 × 1200 pixel span of its Kodak KAI-2020 CCD. The right Mastcam (M-100) has a 100 mm focal length, an IFOV of 0.074 mrad, and a FOV of 6.8° × 5.1° using the same detector. The cameras are separated by 24.2 cm on the mast, allowing stereo images to be obtained at the resolution of the M-34 camera. Each camera has an eight-position filter wheel, enabling it to take Bayer pattern red, green, and blue (RGB) “true color” images, multispectral images in nine additional bands spanning ~400–1100 nm, and images of the Sun in two colors through neutral density-coated filters. An associated Digital Electronics Assembly provides command and data interfaces to the rover, 8 Gb of image storage per camera, 11-bit to 8-bit companding, JPEG compression, and acquisition of high-definition video. Here we describe the preflight and in-flight calibration of Mastcam images, the ways that they are being archived in the NASA Planetary Data System, and the ways that calibration refinements are being developed as the investigation progresses on Mars. We also provide some examples of data sets and analyses that help to validate the accuracy and precision of the calibration.
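On the 11-to-8 bit companding step: for a photon-noise-limited sensor, a square-root-shaped lookup table is a natural choice, because shot noise grows as the square root of the signal, so fewer output codes are needed at high DN. The following is a hedged sketch of that generic scheme only; the actual flight tables are specified in the Mastcam calibration documentation.

```python
import numpy as np

# Illustrative square-root companding LUT (not the actual Mastcam tables).
dn11 = np.arange(2048)                                   # 11-bit input codes
lut = np.round(255.0 * np.sqrt(dn11 / 2047.0)).astype(np.uint8)

def compand(img11):
    """Map an 11-bit image to 8 bits through the LUT (onboard step)."""
    return lut[np.clip(img11, 0, 2047)]

def expand(img8):
    """Approximate inverse applied on the ground."""
    return ((img8.astype(np.float64) / 255.0) ** 2 * 2047.0).round().astype(np.uint16)

img11 = np.random.randint(0, 2048, size=(4, 4))
print(expand(compand(img11)) - img11)                    # quantization residue
```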
The HRSC on Mars Express: Mert Davies' Involvement in a Novel Planetary Cartography Experiment
NASA Astrophysics Data System (ADS)
Oberst, J.; Waehlisch, M.; Giese, B.; Scholten, F.; Hoffmann, H.; Jaumann, R.; Neukum, G.
2002-12-01
Mert Davies was a team member of the HRSC (High Resolution Stereo Camera) imaging experiment (PI: Gerhard Neukum) on ESA's Mars Express mission. This pushbroom camera is equipped with 9 forward- and backward-looking CCD lines, 5184 samples each, mounted in parallel, perpendicular to the spacecraft velocity vector. Flight image data with resolutions of up to 10m/pix (from an altitude of 250 km) will be acquired line by line as the spacecraft moves. This acquisition strategy will result in 9 separate almost completely overlapping image strips, each of them having more than 27,000 image lines, typically. [HRSC is also equipped with a superresolution channel for imaging of selected targets at up to 2.3 m/pixel]. The combined operation of the nadir and off-nadir CCD lines (+18.9°, 0°, -18.9°) gives HRSC a triple-stereo capability for precision mapping of surface topography and for modelling of spacecraft orbit- and camera pointing errors. The goals of the camera are to obtain accurate control point networks, Digital Elevation Models (DEMs) in Mars-fixed coordinates, and color orthoimages at global (100% of the surface will be covered with resolutions better than 30m/pixel) and local scales. With his long experience in all aspects of planetary geodesy and cartography, Mert Davies was involved in the preparations of this novel Mars imaging experiment which included: (a) development of a ground data system for the analysis of triple-stereo images, (b) camera testing during airborne imaging campaigns, (c) re-analysis of the Mars control point network, and generation of global topographic orthoimage maps on the basis of MOC images and MOLA data, (d) definition of the quadrangle scheme for a new topographic image map series 1:200K, (e) simulation of synthetic HRSC imaging sequences and their photogrammetric analysis. Mars Express is scheduled for launch in May of 2003. We miss Mert very much!
Herring, Garth; Ackerman, Joshua T.; Takekawa, John Y.; Eagles-Smith, Collin A.; Eadie, John M.
2011-01-01
We evaluated predation on nests and methods to detect predators using a combination of infrared cameras and plasticine eggs at nests of American avocets (Recurvirostra americana) and black-necked stilts (Himantopus mexicanus) in Don Edwards San Francisco Bay National Wildlife Refuge, San Mateo and Santa Clara counties, California. Each technique indicated that predation was prevalent; 59% of monitored nests were depredated. Most identifiable predation (n = 49) was caused by mammals (71%) and rates of predation were similar on avocets and stilts. Raccoons (Procyon lotor) and striped skunks (Mephitis mephitis) each accounted for 16% of predations, whereas gray foxes (Urocyon cinereoargenteus) and avian predators each accounted for 14%. Mammalian predation was mainly nocturnal (mean time, 0051 h +/- 5 h 36 min), whereas most avian predation was in late afternoon (mean time, 1800 h +/- 1 h 26 min). Nests with cameras and plasticine eggs were 1.6 times more likely to be predated than nests where only cameras were used in monitoring. Cameras were associated with lower abandonment of nests and provided definitive identification of predators.
Høye, Gudrun; Fridman, Andrei
2013-05-06
Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while at the same time the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component -- an array of light-mixing chambers -- with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited -- even in bright light -- with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.
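The restoring step can be pictured as a linear inverse problem: because the mixing chambers make each pixel's footprint weights known, every sensor pixel records a known mixture of scene "mixels", and the keystone-free data follow from a least-squares solve. Below is a one-dimensional toy sketch of that idea; the keystone value and array sizes are arbitrary, and the authors' actual method operates on real spectrograph geometry.

```python
import numpy as np

n = 64                       # scene "mixels" (unknowns)
m = 64                       # sensor pixels (measurements)
keystone = 0.35              # assumed keystone shift across the field [pixels]

# Mixing matrix: pixel i integrates the scene over a keystone-shifted footprint
# whose overlap weights are known thanks to the light-mixing chambers.
A = np.zeros((m, n))
for i in range(m):
    start = i + keystone * i / m
    end = start + 1.0
    for j in range(int(start), min(int(end) + 1, n)):
        A[i, j] = max(0.0, min(end, j + 1) - max(start, j))

rng = np.random.default_rng(0)
scene = rng.random(n)                 # ground-truth keystone-free data
recorded = A @ scene                  # what the sensor records, keystone and all
restored, *_ = np.linalg.lstsq(A, recorded, rcond=None)
print("max restoration error:", np.abs(restored - scene).max())
```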
Rad-hard Dual-threshold High-count-rate Silicon Pixel-array Detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Adam
In this program, a Voxtel-led team demonstrates a full-format (192 x 192, 100-µm pitch, VX-810) high-dynamic-range x-ray photon-counting sensor—the Dual Photon Resolved Energy Acquisition (DUPREA) sensor. Within the Phase II program the following tasks were completed: 1) system analysis and definition of the DUPREA sensor requirements; 2) design, simulation, and fabrication of the full-format VX-810 ROIC design; 3) design, optimization, and fabrication of thick, fully depleted silicon photodiodes optimized for x-ray photon collection; 4) hybridization of the VX-810 ROIC to the photodiode array in the creation of the optically sensitive focal-plane array; 5) development of an evaluation camera; and 6) electrical and optical characterization of the sensor.
High Speed Digital Camera Technology Review
NASA Technical Reports Server (NTRS)
Clements, Sandra D.
2009-01-01
A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.
High-resolution CCD imaging alternatives
NASA Astrophysics Data System (ADS)
Brown, D. L.; Acker, D. E.
1992-08-01
High resolution CCD color cameras have recently stimulated the interest of a large number of potential end-users for a wide range of practical applications. Real-time High Definition Television (HDTV) systems are now being used or considered for use in applications ranging from entertainment program origination through digital image storage to medical and scientific research. HDTV generation of electronic images offers significant cost and time-saving advantages over the use of film in such applications. Further, in still-image systems, electronic image capture is faster and more efficient than conventional image scanners. The CCD still camera can capture 3-dimensional objects into the computing environment directly, without having to shoot a picture on film, develop it, and then scan the image into a computer. Most standard production CCD sensor chips are made for broadcast-compatible systems. One popular CCD, the basis for this discussion, offers arrays of roughly 750 x 580 picture elements (pixels), or a total array of approximately 435,000 pixels (see Fig. 1). FOR-A has developed a technique to increase the number of available pixels for a given image compared to that produced by the standard CCD itself. Using an interlined CCD with an overall spatial structure several times larger than the photosensitive sensor areas, the CCD sensor is shifted in two dimensions in order to fill in spatial gaps between adjacent sensors.
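The sensor-shifting scheme amounts to interleaving several offset exposures onto a denser sampling grid. Here is a toy sketch of the idea with four half-pixel-offset exposures doubling the sampling density in each dimension; it is purely illustrative, and FOR-A's actual mechanism and processing differ.

```python
import numpy as np

def pixel_shift_capture(scene, dy, dx):
    # The shifted sensor samples every other scene position, offset by (dy, dx).
    return scene[dy::2, dx::2]

scene = np.random.rand(8, 8)            # "continuous" scene, finely sampled
shots = {(dy, dx): pixel_shift_capture(scene, dy, dx)
         for dy in (0, 1) for dx in (0, 1)}

hires = np.empty_like(scene)
for (dy, dx), shot in shots.items():
    hires[dy::2, dx::2] = shot          # interleave the four shifted exposures

assert np.allclose(hires, scene)        # doubled sampling density recovered
```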
High Resolution Airborne Digital Imagery for Precision Agriculture
NASA Technical Reports Server (NTRS)
Herwitz, Stanley R.
1998-01-01
The Environmental Research Aircraft and Sensor Technology (ERAST) program is a NASA initiative that seeks to demonstrate the application of cost-effective aircraft and sensor technology to private commercial ventures. In 1997-98, a series of flight demonstrations and image acquisition efforts were conducted over the Hawaiian Islands using a remotely piloted solar-powered platform (Pathfinder) and a fixed-wing piloted aircraft (Navajo) equipped with a Kodak DCS450 CIR (color infrared) digital camera. As an ERAST Science Team Member, I defined a set of flight lines over the largest coffee plantation in Hawaii: the Kauai Coffee Company's 4,000-acre Koloa Estate. Past studies have demonstrated the applications of airborne digital imaging to agricultural management; few have examined the usefulness of high-resolution airborne multispectral imagery with 10 cm pixel sizes. The Kodak digital camera was integrated with ERAST's Airborne Real Time Imaging System (ARTIS), which generated multiband CCD images consisting of 6 x 10^6 pixel elements. At the designated flight altitude of 1,000 feet over the coffee plantation, pixel size was 10 cm. The study involved the analysis of imagery acquired on 5 March 1998 for the detection of anomalous reflectance values and for the definition of spectral signatures as indicators of tree vigor and treatment effectiveness (e.g., drip irrigation; fertilizer application).
A digital gigapixel large-format tile-scan camera.
Ben-Ezra, M
2011-01-01
Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications for cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size, and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.
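Focal-stack processing of this kind generally picks, at each pixel, the slice in which that pixel is sharpest. The sketch below shows one generic approach (local absolute-Laplacian sharpness), not the paper's algorithm, which additionally handles large magnification variations across slices.

```python
import numpy as np
from scipy import ndimage

def merge_focal_stack(stack, window=9):
    """Keep, per pixel, the value from the locally sharpest slice."""
    stack = np.asarray(stack, dtype=np.float64)   # shape: (slices, H, W)
    sharpness = np.stack([
        ndimage.uniform_filter(np.abs(ndimage.laplace(s)), size=window)
        for s in stack
    ])
    best = np.argmax(sharpness, axis=0)           # sharpest slice per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# Toy usage: three differently blurred versions of one image.
rng = np.random.default_rng(0)
base = rng.random((64, 64))
stack = [ndimage.gaussian_filter(base, s) for s in (0.0, 1.5, 3.0)]
print(merge_focal_stack(stack).shape)
```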
Development of high-speed video cameras
NASA Astrophysics Data System (ADS)
Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk
2001-04-01
Presented in this paper is an outline of the R&D activities on high-speed video cameras that have been carried out at Kinki University for more than ten years and currently proceed as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS (In-situ Storage Image Sensor) was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is under way and, hopefully, it will be fabricated in the near future. Epoch-making cameras developed by others in the history of high-speed video cameras are also briefly reviewed.
Applying image quality in cell phone cameras: lens distortion
NASA Astrophysics Data System (ADS)
Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje
2009-01-01
This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism that predicts overall image quality from individual image quality attributes, and it was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. Since the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberrations (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes and proceeding to their quantification in JNDs of quality, a requirement of the multivariate formalism; therefore, both objective and subjective evaluations were used. A major distinction of the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior, so a radial mapping/modeling cannot be used in this case.
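Once each attribute is expressed in JNDs of quality, a multivariate formalism of this kind pools them into a single quality loss, commonly with a Minkowski metric. The snippet below is a hedged sketch of that pooling step; the exponent is an assumption for illustration, and the calibrated form is given in the CPIQ documents.

```python
# Pool perceptually orthogonal attribute losses (each in JNDs) into one figure.
def overall_quality_loss(attribute_jnds, p=3.0):
    """Minkowski combination; p is an assumed, not CPIQ-calibrated, exponent."""
    return sum(j ** p for j in attribute_jnds) ** (1.0 / p)

# e.g. lens geometric distortion = 2.1 JNDs, lateral chromatic aberration = 1.4 JNDs
print(overall_quality_loss([2.1, 1.4]))   # -> a single overall JND loss
```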
A comparison of select image-compression algorithms for an electronic still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.
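The "transform method" family the study favors works roughly as follows: transform small blocks, quantize the coefficients (where fidelity is traded for size), and invert on reconstruction. The block-DCT sketch below illustrates that generic idea only; the study's exact transform and quantizer are not specified in the abstract, and the ratio computed here is a crude proxy.

```python
import numpy as np
from scipy.fft import dctn, idctn

def codec(img, q=24.0):
    """Block-DCT compress/reconstruct; returns image and a rough size proxy."""
    h, w = img.shape[0] - img.shape[0] % 8, img.shape[1] - img.shape[1] % 8
    img = img[:h, :w].astype(np.float64)
    out = np.empty_like(img)
    kept = 0
    for r in range(0, h, 8):
        for c in range(0, w, 8):
            coef = np.round(dctn(img[r:r+8, c:c+8], norm='ortho') / q)
            kept += np.count_nonzero(coef)           # coefficients that survive
            out[r:r+8, c:c+8] = idctn(coef * q, norm='ortho')
    return out, img.size / max(kept, 1)              # rough compression ratio

rng = np.random.default_rng(1)
smooth = np.cumsum(np.cumsum(rng.random((64, 64)), axis=0), axis=1) / 32.0
rec, ratio = codec(smooth)
print(f"rough ratio {ratio:.1f}:1, max error {np.abs(rec - smooth).max():.2f}")
```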
Experiences with Acquiring Highly Redundant Spatial Data to Support Driverless Vehicle Technologies
NASA Astrophysics Data System (ADS)
Koppanyi, Z.; Toth, C. K.
2018-05-01
As vehicle technology moves towards higher autonomy, the demand for highly accurate geospatial data is rapidly increasing, since accurate maps have huge potential to increase safety. In particular, high-definition 3D maps, including road topography and infrastructure, as well as city models along transportation corridors, represent the necessary support for driverless vehicles. In this effort, a vehicle equipped with high-, medium-, and low-resolution active and passive cameras acquired data in a typical traffic environment, represented here by the OSU campus, where GPS/GNSS data are available along with other navigation sensor data streams. The data streams can be used for two purposes. First, high-definition 3D maps can be created by integrating all the sensory data, and Data Analytics/Big Data methods can be tested for automatic object space reconstruction. Second, the data streams can support algorithmic research for driverless vehicle technologies, including object avoidance, navigation/positioning, detecting pedestrians and bicyclists, etc. Crucial cross-performance analyses of map database resolution and accuracy with respect to sensor performance metrics can be derived to achieve an economical solution for accurate driverless vehicle positioning. These, in turn, could provide essential information for optimizing the choice of geospatial map databases and sensor quality to support driverless vehicle technologies. The paper reviews the data acquisition and primary data processing challenges and performance results.
NASA Astrophysics Data System (ADS)
Hayakawa, Yuichi S.; Obanawa, Hiroyuki; Yoshida, Hidetsugu; Naruhashi, Ryutaro; Okumura, Koji; Zaiki, Masumi
2016-04-01
A debris avalanche caused by the sector collapse of a volcanic mountain often forms depositional landforms with characteristic surface morphology comprising hummocks. Geomorphological and sedimentological analyses of debris avalanche deposits (DAD) on the northeastern face of Mt. Erciyes in central Turkey have been performed to investigate the mechanisms and processes of the debris avalanche. The morphometry of hummocks provides an opportunity to examine the volumetric and kinematic characteristics of the DAD. Although the exact age is unknown, the sector collapse that produced this DAD is supposed to have occurred in the late Pleistocene (sometime during 90-20 ka), and subsequent sediment supply from the DAD could have affected ancient human activities in the downstream basin areas. In order to measure the detailed surface morphology and depositional structures of the DAD, we apply structure-from-motion multi-view stereo (SfM-MVS) photogrammetry using an unmanned aerial system (UAS) and a handheld camera. The UAS, comprising a small unmanned aerial vehicle (sUAV) and a digital camera, provides low-altitude aerial photographs that capture the surface morphology over an area of several square kilometers. High-resolution topographic data (a digital elevation model, DEM), as well as an orthorectified image, of the hummocks were then obtained, and the geometric features of the hummocks were examined. A handheld camera was also used to photograph an outcrop face of the DAD along a road to support the sedimentological investigation. Three-dimensional topographic models of the outcrop, with a panoramic orthorectified image projected on a vertical plane, were obtained. These data make it possible to effectively describe the sedimentological structure of the hummocks in the DAD. The detailed map of the DAD is further examined together with a regional geomorphological map for comparison with other geomorphological features, including fluvial valleys, terraces, lakes, and active faults.
NASA Technical Reports Server (NTRS)
Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)
1985-01-01
Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2017-08-01
Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrate the effectiveness and accuracy of the proposed technique.
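A crosstalk correction of the kind the abstract mentions can be written as a small linear unmixing: the recorded R and B planes are modeled as a known mixture of the two optical-path views, which is then inverted per pixel. The sketch below follows that general pattern; the crosstalk coefficients are assumed calibration values, not the authors' measured ones.

```python
import numpy as np

C = np.array([[1.00, 0.12],     # measured_R = 1.00*view_R + 0.12*view_B (assumed)
              [0.09, 1.00]])    # measured_B = 0.09*view_R + 1.00*view_B (assumed)
C_inv = np.linalg.inv(C)

def unmix(frame_rgb):
    """Recover the two optical-path views from a color frame's R and B planes."""
    r = frame_rgb[..., 0].astype(np.float64)
    b = frame_rgb[..., 2].astype(np.float64)
    true_rb = np.stack([r, b], axis=-1) @ C_inv.T   # invert the mixing per pixel
    return true_rb[..., 0], true_rb[..., 1]         # a stereo pair for stereo-DIC

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
left, right = unmix(frame)
print(left.shape, right.shape)
```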
Clinical applications with the HIDAC positron camera
NASA Astrophysics Data System (ADS)
Frey, P.; Schaller, G.; Christin, A.; Townsend, D.; Tochon-Danguy, H.; Wensveen, M.; Donath, A.
1988-06-01
A high density avalanche chamber (HIDAC) positron camera has been used for positron emission tomographic (PET) imaging in three different human studies, including patients presenting with: (I) thyroid diseases (124 cases); (II) clinically suspected malignant tumours of the pharynx or larynx (ENT) region (23 cases); and (III) clinically suspected primary malignant and metastatic tumours of the liver (9 cases, 19 PET scans). The positron-emitting radiopharmaceuticals used for the three studies were Na 124I (4.2 d half-life) for the thyroid, 55Co-bleomycin (17.5 h half-life) for the ENT region and 68Ga-colloid (68 min half-life) for the liver. Tomographic imaging was performed: (I) 24 h after oral Na 124I administration to the thyroid patients, (II) 18 h after intravenous administration of 55Co-bleomycin to the ENT patients and (III) 20 min following the intravenous injection of 68Ga-colloid to the liver tumour patients. Three different imaging protocols were used with the HIDAC positron camera to perform appropriate tomographic imaging in each patient study. Promising results were obtained in all three studies, particularly in tomographic thyroid imaging, where a significant clinical contribution to diagnosis and therapy planning is made possible by the PET technique. In the other two PET studies encouraging results were obtained for the detection and precise localisation of malignant tumour disease, including an estimate of the functional liver volume based on the reticulo-endothelial system (RES) of the liver, obtained in vivo, and the three-dimensional display of liver PET data using shaded graphics techniques. The clinical significance of the overall results obtained in both the ENT and the liver PET study, however, is still uncertain, and the respective role of PET as a new imaging modality in these applications is not yet clearly established. Appreciating the clinical impact made by PET in liver and ENT malignant tumour staging requires further investigation, and more detailed data on a larger number of clinical and experimental PET scans will be necessary for definitive evaluation. Nevertheless, the HIDAC positron camera may be used for clinical PET imaging in well-defined patient cases, particularly in situations where high spatial resolution is desired in the reconstructed image of the examined pathological condition and at the same time "static" PET imaging is adequate, as is the case in thyroid, ENT and liver tomographic imaging using the HIDAC positron camera.
Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng
2017-06-20
The linear-array push-broom imaging mode is widely used for high-resolution optical satellites (HROS). Using double cameras attached to a high-rigidity support, along with push-broom imaging, is one method to enlarge the field of view while ensuring high resolution. High-accuracy image mosaicking is the key factor in the geometric quality of complete stitched satellite imagery. This paper proposes a high-accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big-virtual-camera coordinate system using forward projection and backward projection to obtain the corresponding single virtual image. After on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper uses the concept of the big virtual camera to obtain a stitched image and the corresponding high-accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining geometric accuracy.
NASA Astrophysics Data System (ADS)
Genovese, Mariangela; Napoli, Ettore
2013-05-01
The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high-definition video in real time. The paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920 x 1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for segmentation of the background that is, however, computationally intensive and impossible to implement on a general-purpose CPU under the constraint of real-time processing. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performance is also the result of the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that surpass previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard-cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
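For reference, the software baseline the cores optimize is available directly in OpenCV; its current GMM background subtractor is the MOG2 variant. A short usage sketch follows (the input file name is a placeholder; the parameters shown are the library defaults made explicit).

```python
import cv2

# OpenCV's GMM-based background model (MOG2 variant); defaults made explicit.
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=True)

cap = cv2.VideoCapture("input_1080p.mp4")   # hypothetical HD test sequence
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)       # per-pixel GMM update + classification
bg_model = subtractor.getBackgroundImage()  # current background estimate
cap.release()
```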
High-speed railway real-time localization auxiliary method based on deep neural network
NASA Astrophysics Data System (ADS)
Chen, Dongjie; Zhang, Wensheng; Yang, Yang
2017-11-01
The high-speed railway intelligent monitoring and management system is composed of schedule integration, geographic information, location services, and data mining technology for the integration of time and space data. Auxiliary localization is a significant submodule of the intelligent monitoring system. In practical application, the general approach is to capture image sequences of the components using a high-definition camera, digital image processing techniques, and target detection, tracking, and even behavior analysis methods. In this paper, we present an end-to-end character recognition method for high-speed railway pillar plate numbers based on a deep CNN called YOLO-toc. Different from other deep CNNs, YOLO-toc is an end-to-end multi-target detection framework; furthermore, it exhibits state-of-the-art performance in real-time detection, with nearly 50 fps achieved on a GPU (GTX960). Finally, we realize a real-time, high-accuracy pillar plate number recognition system and integrate natural-scene OCR into a dedicated YOLO-toc classification model.
NASA Astrophysics Data System (ADS)
Klaessens, John H.; van der Veen, Albert; Verdaasdonk, Rudolf M.
2017-03-01
Recently, low-cost smartphone-based thermal cameras have been considered for use in a clinical setting for monitoring physiological temperature responses such as body temperature change, local inflammation, perfusion changes, or (burn) wound healing. These thermal cameras contain uncooled micro-bolometers with an internal calibration check and have a temperature resolution of 0.1 degree. For clinical applications, a fast quality measurement before use is required (absolute temperature check), and quality control (stability, repeatability, absolute temperature, absolute temperature differences) should be performed regularly. Therefore, a calibrated temperature phantom has been developed, based on thermistor heating at both ends of a black-coated metal strip, to create a controllable temperature gradient from room temperature (26 °C) up to 100 °C. The absolute temperatures on the strip are determined with five software-controlled PT-1000 sensors using lookup tables. In this study, three FLIR-ONE cameras and one high-end camera were checked with this temperature phantom. The results show relatively good agreement between both the low-cost and high-end cameras and the phantom temperature gradient, with temperature differences of 1 to 6 degrees between the cameras and the phantom. The measurements were repeated with respect to absolute temperature and temperature stability over the sensor area. Both low-cost and high-end thermal cameras measured relative temperature changes with high accuracy and absolute temperatures with constant deviations. Low-cost smartphone-based thermal cameras can be a good alternative to high-end thermal cameras for routine clinical measurements, appropriate to the research question, provided regular calibration checks are performed for quality control.
Multi-color pyrometry imaging system and method of operating the same
Estevadeordal, Jordi; Nirmalan, Nirm Velumylum; Tralshawala, Nilesh; Bailey, Jeremy Clyde
2017-03-21
A multi-color pyrometry imaging system for a high-temperature asset includes at least one viewing port in optical communication with at least one high-temperature component of the high-temperature asset. The system also includes at least one camera device in optical communication with the at least one viewing port. The at least one camera device includes a camera enclosure and at least one camera aperture defined in the camera enclosure, The at least one camera aperture is in optical communication with the at least one viewing port. The at least one camera device also includes a multi-color filtering mechanism coupled to the enclosure. The multi-color filtering mechanism is configured to sequentially transmit photons within a first predetermined wavelength band and transmit photons within a second predetermined wavelength band that is different than the first predetermined wavelength band.
NV-CMOS HD camera for day/night imaging
NASA Astrophysics Data System (ADS)
Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.
2014-06-01
SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through the use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near-infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.
Design and realization of an AEC&AGC system for the CCD aerial camera
NASA Astrophysics Data System (ADS)
Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun
2015-08-01
An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. The usual AEC and AGC algorithms are not suitable for an aerial camera, since the camera always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic Gamma correction is applied before the image is output, so that the image is better suited for viewing and analysis by human eyes. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast movement or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment speed, high adaptability, and high reliability in severe, complex environments.
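A control loop of this kind typically adjusts the shutter first (to stay below a motion-blur limit set by the aircraft speed) and lets gain make up the remainder. The sketch below is a minimal illustration of that structure only; the paper's actual control law, target level, and limits are not given in the abstract and the constants here are assumptions.

```python
# Minimal AEC/AGC control sketch (target level and caps are assumed values).
TARGET = 118               # desired mean gray level of the frame
SHUTTER_MAX_S = 1 / 2000   # exposure cap to avoid motion blur at flight speed
GAIN_MAX = 8.0

def aec_agc_step(mean_gray, shutter_s, gain):
    """One control iteration; prefers shutter over gain to keep noise low."""
    error = TARGET / max(mean_gray, 1.0)                 # >1 means underexposed
    wanted = shutter_s * gain * error                    # exposure-gain product needed
    shutter_s = min(wanted / gain, SHUTTER_MAX_S)        # adjust shutter first
    gain = min(max(wanted / shutter_s, 1.0), GAIN_MAX)   # gain covers the rest
    return shutter_s, gain

s, g = 1 / 4000, 1.0
for measured in (40, 70, 110, 118):     # mean gray levels over successive frames
    s, g = aec_agc_step(measured, s, g)
    print(f"shutter {s * 1e6:7.1f} us, gain {g:4.2f}x")
```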
The research of adaptive-exposure on spot-detecting camera in ATP system
NASA Astrophysics Data System (ADS)
Qian, Feng; Jia, Jian-jun; Zhang, Liang; Wang, Jian-Yu
2013-08-01
A high-precision acquisition, tracking, and pointing (ATP) system is one of the key technologies of laser communication. The spot-detecting camera is used to detect the direction of the beacon in the laser communication link, so that it can obtain the position information of the communication terminal for the ATP system. The positioning accuracy of the camera directly determines the capability of the laser communication system, so the spot-detecting camera in a satellite-to-earth laser communication ATP system needs high precision in target detection: the positioning accuracy should be better than +/-1 μrad. Spot-detecting cameras usually adopt a centroid algorithm to obtain the position of the light spot on the detector. When the intensity of the beacon is moderate, the results of the centroid algorithm are precise, but the intensity of the beacon changes greatly during communication because of distance, atmospheric scintillation, weather, etc. The output signal of the detector will be insufficient when the camera underexposes the beacon because of low light intensity; conversely, the output signal will saturate when the camera overexposes the beacon because of high light intensity. The accuracy of the centroid algorithm degrades if the spot-detecting camera underexposes or overexposes, and the positioning accuracy of the camera is then reduced considerably. In order to maintain accuracy, space-based cameras should regulate exposure time in real time according to the light intensity. The algorithm of the adaptive-exposure technique for a spot-detecting camera based on a complementary metal-oxide-semiconductor (CMOS) detector is analyzed. Based on the analytic results, a CMOS camera in a space-based laser communication system is described, which utilizes the adaptive-exposure algorithm to adapt the exposure time. Test results from the imaging experiment system verify the design: it restrains the reduction of positioning accuracy caused by changes in light intensity, so the camera maintains stable and high positioning accuracy during communication.
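The centroid algorithm the abstract refers to is the intensity-weighted mean of the spot image; the adaptive-exposure decision then amounts to checking the signal level against under- and over-exposure limits. A minimal sketch follows (the full-scale value and thresholds are illustrative assumptions):

```python
import numpy as np

def spot_centroid(img):
    """Intensity-weighted centroid (x, y) of the beacon spot."""
    img = img.astype(np.float64)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total

def exposure_action(img, full_scale=4095, low=0.2, high=0.9):
    """Decide how to regulate exposure time from the peak signal level."""
    peak = img.max() / full_scale
    if peak < low:
        return "increase exposure time"     # underexposed: signal too weak
    if peak > high:
        return "decrease exposure time"     # near saturation: centroid biased
    return "hold"

spot = np.zeros((64, 64))
spot[30:34, 40:44] = 3000                   # synthetic beacon spot
print(spot_centroid(spot), exposure_action(spot))   # -> (41.5, 31.5) hold
```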
Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei
2016-03-04
High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal-oxide semiconductor) cameras cannot effectively capture such rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera.
Chambers, T; Pearson, A L; Kawachi, I; Rzotkiewicz, Z; Stanley, J; Smith, M; Barr, M; Ni Mhurchu, C; Signal, L
2017-11-01
Defining the boundary of children's 'neighborhoods' has important implications for understanding the contextual influences on child health. Additionally, insight into activities that occur outside people's neighborhoods may indicate exposures that place-based studies cannot detect. This study aimed to 1) extend current neighborhood research, using data from wearable cameras and GPS devices that were worn over several days in an urban setting; 2) define the boundary of children's neighborhoods by using leisure-time activity space data; and 3) determine the destinations visited by children in their leisure time outside their neighborhoods. One hundred and fourteen children (mean age 12 y) from Wellington, New Zealand wore wearable cameras and GPS recorders. Residential Euclidean buffers at incremental distances were paired with GPS data (thereby identifying time spent in different places) to explore alternative definitions of neighborhood boundaries. Children's neighborhood boundary was found to lie at 500 m. A newly developed software application was used to identify 'destinations' visited outside the neighborhood by specifying space-time parameters. Image data from the wearable cameras were used to determine the type of destination. Children spent over half of their leisure time within 500 m of their homes. Children left their neighborhood predominantly to visit school (for leisure purposes), other residential locations (e.g. to visit friends), and food retail outlets (e.g. convenience stores, fast food outlets). Children spent more time at food retail outlets than at structured sport and outdoor recreation locations combined. Person-centered neighborhood definitions may better represent children's everyday experiences and neighborhood exposures than previous place-based measures. As schools and other residential locations (friends and family) are important destinations outside the neighborhood, such destinations should be taken into account. The combination of image data and activity space GPS data provides a more robust approach to understanding children's neighborhoods and activity spaces.
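The buffer analysis pairs each GPS fix with incremental Euclidean distances from home and asks what fraction of leisure time falls inside each radius; the boundary is where that fraction levels off. A minimal sketch with synthetic fixes (the study of course used real, camera-verified GPS records):

```python
import numpy as np

def fraction_within(points_xy, home_xy, radius_m):
    """Share of GPS fixes within a Euclidean buffer of the home location."""
    d = np.hypot(*(points_xy - home_xy).T)
    return np.mean(d <= radius_m)

rng = np.random.default_rng(7)
home = np.array([0.0, 0.0])
fixes = rng.normal(0, 400, size=(5000, 2))   # synthetic fixes, metres from home

for r in (100, 200, 500, 1000, 2000):
    print(f"{r:>5} m buffer: {fraction_within(fixes, home, r):.0%} of leisure time")
```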
Developing Short Films of Geoscience Research
NASA Astrophysics Data System (ADS)
Shipman, J. S.; Webley, P. W.; Dehn, J.; Harrild, M.; Kienenberger, D.; Salganek, M.
2015-12-01
In today's prevalence of social media and networking, video products are becoming increasingly useful for communicating research quickly and effectively to a diverse audience, including outreach activities as well as the research community and funding agencies. Due to the observational nature of geoscience, researchers often take photos and video footage to document fieldwork or to record laboratory experiments. Here we present how researchers can become more effective storytellers by collaborating with filmmakers to produce short documentary films of their research. We focus on the use of traditional high-definition (HD) camcorders and HD DSLR cameras to record the scientific story, while our research topic focuses on the use of remote sensing techniques, specifically thermal infrared imaging, which is often used to analyze time-varying natural processes such as volcanic hazards. By capturing the story in the thermal infrared wavelength range, in addition to the traditional red-green-blue (RGB) color space, the audience is able to experience the world differently. We will develop a short film, specifically designed around thermal infrared cameras, that illustrates how visual storytellers can use these new tools to capture unique and important aspects of their research, convey their passion for earth systems science, and engage and captivate the viewer.
NASA Astrophysics Data System (ADS)
Shank, T. M.
2016-02-01
From 2012 to 2015, annual seafloor surveys using the towed camera TowCam were used to characterize benthic ecosystems and habitats to ground-truth recently developed habitat suitability models that predict deep-sea coral locations in northwest Atlantic canyons. Faunal distribution, abundance, and habitat data were obtained from more than 90 towed camera surveys in 21 canyons, specifically Tom's, Hendrickson, Veatch, Gilbert, Ryan, Powell, Munson, Accomac, Leonard, Washington, Wilmington, Lindenkohl, Clipper, Sharpshooter, Welker, Dogbody, Chebacco, Heel Tapper, File Bottom, Carteret, and Spencer Canyons, as well as unnamed minor canyons and inter-canyon areas. We also investigated additional canyons, including Block, Alvin, Atlantis, Welker, Heezen, Phoenix, McMaster, and Nantucket, plus two minor canyons and two inter-canyon areas, through high-definition ROV image surveys from the NOAA CANEX 2013 and 2014 expeditions. Significant differences in species composition and distribution correlated with specific habitat types, depth, and individual canyons. High abundances and diversity of scleractinians, antipatharians, octocorals, and sponges were highly correlated with habitat substrates, including vertical canyon walls, margins, sediments, cobbles, boulders, and coral rubble habitat. Significant differences in species composition among canyons were observed across similar depths, suggesting that many canyons may have their own biological and geological signature. Locating and defining the composition and distribution of vulnerable coral ecosystems in canyons, in concert with validating predictive species distribution modeling, has resulted in regional management and conservation recommendations for these living resources and the largest proposed Marine Protected Area in North American waters.
An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates
Hobbs, Michael T.; Brehme, Cheryl S.
2017-01-01
Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.
NASA Astrophysics Data System (ADS)
Jaanimagi, Paul A.
1992-01-01
This volume presents papers grouped under the topics of advances in streak and framing camera technology, applications of ultrahigh-speed photography, characterization of high-speed instrumentation, high-speed electronic imaging technology and applications, new technology for high-speed photography, high-speed imaging and photonics in detonics, and high-speed velocimetry. The papers presented include those on a subpicosecond X-ray streak camera, photocathodes for the ultrasoft X-ray region, streak tube dynamic range, high-speed TV cameras for streak tube readout, femtosecond light-in-flight holography, and electrooptical systems characterization techniques. Attention is also given to high-speed electronic memory video recording techniques, high-speed IR imaging of repetitive events using a standard RS-170 imager, use of a CCD array as a medium-speed streak camera, the photography of shock waves in explosive crystals, a single-frame camera based on the type LD-S-10 intensifier tube, and jitter diagnosis for pico- and femtosecond sources. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short-pulse X-ray diagnostic development facility.
NASA Technical Reports Server (NTRS)
2004-01-01
KENNEDY SPACE CENTER, FLA. KSC videographer Glenn Benson adjusts a high definition camera being used to photograph the south wall of the Vehicle Assembly Building that sustained damage from Hurricane Frances as it passed over Central Florida during the Labor Day weekend. The maximum wind at the surface from Hurricane Frances was 94 mph from the northeast at 6:40 a.m. on Sunday, September 5. It was recorded at a weather tower located on the east shore of the Mosquito Lagoon near the Cape Canaveral National Seashore. The highest sustained wind at KSC was 68 mph. The VAB lost 820, 4- x 16-foot panels or more than 52,000 square feet of its surface. There was damage to the roof as well.
Endockscope: using mobile technology to create global point of service endoscopy.
Sohn, William; Shreim, Samir; Yoon, Renai; Huynh, Victor B; Dash, Atreya; Clayman, Ralph; Lee, Hak J
2013-09-01
Recent advances and the widespread availability of smartphones have ushered in a new wave of innovations in healthcare. We present our initial experience with Endockscope, a new docking system that optimizes the coupling of the iPhone 4S with modern endoscopes. Using the United States Air Force resolution target, we compared the image resolution (line pairs/mm) of a flexible cystoscope coupled to the Endockscope+iPhone to the Storz high definition (HD) camera (H3-Z Versatile). We then used the Munsell ColorChecker chart to compare the color resolution with a 0° laparoscope. Furthermore, 12 expert endoscopists blindly compared and evaluated images from a porcine model using a cystoscope and ureteroscope for both systems. Finally, we also compared the cost (average of two company-listed prices) and weight (lb) of the two systems. Overall, the image resolution allowed by the Endockscope was identical to that of the traditional HD camera (4.49 vs 4.49 lp/mm). Red (ΔE=9.26 vs 9.69) demonstrated better color resolution for the iPhone, but green (ΔE=7.76 vs 10.95) and blue (ΔE=12.35 vs 14.66) revealed better color resolution with the Storz HD camera. Expert reviews of cystoscopic images acquired with the HD camera were superior in image, color, and overall quality (P=0.002, 0.042, and 0.003). In contrast, the ureteroscopic reviews yielded no statistical difference in image, color, and overall quality (P=1, 0.203, and 0.120). The overall cost of the Endockscope+iPhone was $154, compared with $46,623 for a standard HD system. The weight of the mobile-coupled system was 0.47 lb versus 1.01 lb for the Storz HD camera. Endockscope demonstrated the feasibility of coupling endoscopes to a smartphone. The lighter and less expensive Endockscope acquired images of the same resolution and acceptable color resolution. When evaluated by expert endoscopists, the overall image quality was equivalent for flexible ureteroscopy and somewhat inferior, but still acceptable, for flexible cystoscopy.
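For context on the ΔE figures above: ΔE is the Euclidean distance between two colors in CIELAB space (the original CIE76 definition), so a lower value means a closer color match. A minimal sketch of the computation is shown below; the Lab patch values are hypothetical placeholders, not measurements from this study.

```python
import numpy as np

def delta_e_cie76(lab_ref, lab_meas):
    """Euclidean distance between two CIELAB colors (CIE76 definition)."""
    return float(np.linalg.norm(np.asarray(lab_ref) - np.asarray(lab_meas)))

# Hypothetical Lab readings for a red ColorChecker patch: the chart's
# reference value vs. values recovered from each camera's image.
chart_red = (40.0, 55.0, 30.0)
endockscope_red = (42.0, 61.0, 37.0)
hd_camera_red = (41.0, 62.5, 37.5)

print("Endockscope dE:", delta_e_cie76(chart_red, endockscope_red))
print("HD camera   dE:", delta_e_cie76(chart_red, hd_camera_red))
```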
The application of high-speed photography in z-pinch high-temperature plasma diagnostics
NASA Astrophysics Data System (ADS)
Wang, Kui-lu; Qiu, Meng-tong; Hei, Dong-wei
2007-01-01
This invited paper discusses applications of high-speed photography to z-pinch high-temperature plasma diagnostics developed in recent years at the Northwest Institute of Nuclear Technology. The development and application of a soft x-ray framing camera, a soft x-ray curved crystal spectrometer, an optical framing camera, an ultraviolet four-frame framing camera and an ultraviolet-visible spectrometer are introduced.
DOT National Transportation Integrated Search
2015-08-01
Cameras are used prolifically to monitor transportation incidents, infrastructure, and congestion. Traditional camera systems often require human monitoring and only offer low-resolution video. Researchers for the Exploratory Advanced Research (EAR) ...
Camera calibration method of binocular stereo vision based on OpenCV
NASA Astrophysics Data System (ADS)
Zhong, Wanzhen; Dong, Xiaona
2015-10-01
Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to obtain higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering lens distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners are also used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
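For reference, a rough sketch of the standard OpenCV calibration flow such a method builds on is given below. A 48-corner board is assumed here to be an 8×6 inner-corner pattern; the file names and square size are placeholders, not values from the paper.

```python
import glob
import cv2
import numpy as np

pattern = (8, 6)       # inner corners per row/column (8*6 = 48, as in the paper)
square = 25.0          # checkerboard square size in mm (assumed)

# 3D corner coordinates of the board in its own plane (Z = 0).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for fname in glob.glob("calib_*.png"):        # hypothetical calibration images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# Estimates intrinsics, radial/tangential distortion, and per-view pose;
# the RMS reprojection error is the usual measure of calibration quality.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```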
A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications
Fu, Bo; Pitter, Mark C.; Russell, Noah A.
2011-01-01
Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue, many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled; however, this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly-addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852
An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates
2017-01-01
Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing. PMID:28981533
SPARTAN Near-IR Camera System Overview: The Spartan Infrared Camera is a high spatial resolution near-IR imager. Spartan has a focal plane consisting of four ...
Cheetah: A high frame rate, high resolution SWIR image camera
NASA Astrophysics Data System (ADS)
Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob
2008-10-01
A high resolution, high frame rate InGaAs-based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640 x 512 pixel frames per second. The FPA utilizes a low lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfer the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link™ interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
Standard design for National Ignition Facility x-ray streak and framing cameras.
Kimbrough, J R; Bell, P M; Bradley, D K; Holder, J P; Kalantar, D K; MacPhee, A G; Telford, S
2010-10-01
The x-ray streak camera and x-ray framing camera for the National Ignition Facility were redesigned to improve electromagnetic pulse hardening, protect high voltage circuits from pressure transients, and maximize the use of common parts and operational software. Both instruments use the same PC104 based controller, interface, power supply, charge coupled device camera, protective hermetically sealed housing, and mechanical interfaces. Communication is over fiber optics with identical facility hardware for both instruments. Each has three triggers that can be either fiber optic or coax. High voltage protection consists of a vacuum sensor to enable the high voltage and pulsed microchannel plate phosphor voltage. In the streak camera, the high voltage is removed after the sweep. Both rely on the hardened aluminum box and a custom power supply to reduce electromagnetic pulse/electromagnetic interference (EMP/EMI) getting into the electronics. In addition, the streak camera has an EMP/EMI shield enclosing the front of the streak tube.
Meteor Film Recording with Digital Film Cameras with large CMOS Sensors
NASA Astrophysics Data System (ADS)
Slansky, P. C.
2016-12-01
In this article the author combines his professional know-how about cameras for film and television production with his amateur astronomy activities. Professional digital film cameras with high sensitivity are still quite rare in astronomy. One reason for this may be their cost of 20 000 EUR and more (camera body only). In the interim, however, consumer photo cameras with film mode and very high sensitivity have come to the market for about 2 000 EUR. In addition, ultra-high-sensitivity professional film cameras that are very interesting for meteor observation have been introduced to the market. The particular benefits of digital film cameras with large CMOS sensors, including photo cameras with film recording function, for meteor recording are presented by three examples: a 2014 Camelopardalid, shot with a Canon EOS C 300, an exploding 2014 Aurigid, shot with a Sony alpha7S, and the 2016 Perseids, shot with a Canon ME20F-SH. All three cameras use large CMOS sensors; "large" meaning Super-35 mm, the classic 35 mm film format (24x13.5 mm, similar to APS-C size), or full format (36x24 mm), the classic 135 photo camera format. Comparisons are made to the widely used cameras with small CCD sensors, such as Mintron or Watec; "small" meaning 1/2" (6.4x4.8 mm) or less. Additionally, special photographic image processing of meteor film recordings is discussed.
Electronic cameras for low-light microscopy.
Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith
2013-01-01
This chapter introduces electronic cameras, discusses the various parameters considered when evaluating their performance, and describes some of the key features of different camera formats. The chapter also explains the basic functioning of electronic cameras and how their properties can be exploited to optimize image quality under low-light conditions. Although there are many types of cameras available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of the signal-to-noise ratio and their spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video rate imaging is required, one need not evaluate slow-scan CCD cameras. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high intensity illumination. When video rate imaging is required for very dim specimens, the electron multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. Variable integration time video cameras are very attractive options if one needs to acquire images at video rate as well as with longer integration times for less bright samples. This flexibility can facilitate many diverse applications with highly varied light levels.
NASA Astrophysics Data System (ADS)
Yoo, C. M.; Joo, J.; Hyeong, K.; Chi, S. B.
2016-12-01
Manganese nodules, also known as polymetallic nodules, contain valuable elements in high concentrations and are regarded as one of the most important future mineral resources. They occur throughout the world's oceans, but economically feasible deposits are limited to several deep-sea basins, including the Clarion-Clipperton Fracture Zone (CCFZ) in the northeast equatorial Pacific. Estimation of resource potential is one of the key prerequisites for an economic feasibility study. Nodule abundance is commonly estimated from direct nodule sampling; however, it is difficult to obtain statistically robust data because of the highly variable spatial distribution and the high cost of direct sampling. Variogram analysis indicates that a 3.5×3.5 km sampling resolution is required to obtain indicated-category resource data, which means over 1,000 sampling operations to cover a potential exploitation area with a mining life of 20-30 years. High-resolution acoustic survey (bathymetry and back-scattered intensity) can provide high-resolution resource data along with the definition of obstacles, such as faults and scarps, for the operation of nodule collecting robots. We operated a 120 kHz deep-tow side scan sonar (DTSSS) with a spatial resolution of 1×1 m in a representative area. Sea floor images were also taken continuously by a deep-tow camera along selected tracks, converted to nodule abundance using an image analysis program and a conversion equation, and compared with the acoustic data. Back-scattering intensity values could be divided into several groups and translated into nodule abundance with a high confidence level. Our results indicate that high resolution acoustic survey is an appropriate tool for reliable assessment of manganese nodule abundance and definition of minable area.
Flow visualization by mobile phone cameras
NASA Astrophysics Data System (ADS)
Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.
2016-06-01
Mobile smart phones have completely changed people's communication within the last ten years. These devices not only offer communication through different channels but also applications for fun and recreation. In this respect, mobile phones now include relatively fast (up to 240 Hz) cameras to capture high-speed videos of sport events or other fast processes. This article therefore explores the possibility of making use of this development, and of the widespread availability of these cameras, for velocity measurements in industrial or technical applications and for fluid dynamics education in high schools and at universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment of a free water jet was used to prove the concept, shed some light on the achievable quality and identify bottlenecks by comparing the results obtained with a mobile phone camera with data taken by a high-speed camera suited for scientific experiments.
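The velocity evaluation underlying such a simplistic PIV system reduces to cross-correlating interrogation windows between successive frames and taking the correlation peak as the local displacement. A minimal single-window sketch (NumPy/SciPy; the window size and synthetic test pattern are assumptions):

```python
import numpy as np
from scipy.signal import fftconvolve

def window_displacement(win_a, win_b):
    """Integer displacement (dy, dx) of win_b relative to win_a via cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = fftconvolve(b, a[::-1, ::-1], mode="full")      # full cross-correlation map
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return np.array(peak) - (np.array(win_a.shape) - 1)    # zero lag sits at (N-1, M-1)

# Synthetic check: a random particle pattern shifted by (3, -2) pixels.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, (3, -2), axis=(0, 1))
print(window_displacement(frame_a, frame_b))   # -> [ 3 -2]
```

A real evaluation would tile each frame pair into many such windows and convert pixel displacements to velocities using the frame interval and image scale.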
A feasibility study of damage detection in beams using high-speed camera (Conference Presentation)
NASA Astrophysics Data System (ADS)
Wan, Chao; Yuan, Fuh-Gwo
2017-04-01
In this paper a method for damage detection in beam structures using a high-speed camera is presented. Traditional methods of damage detection in structures typically involve contact sensors (e.g., piezoelectric sensors or accelerometers) or non-contact sensors (e.g., laser vibrometers), which can be costly and time consuming when inspecting an entire structure. With the popularity of digital cameras and the development of computer vision technology, video cameras offer a viable measurement capability, including high spatial resolution, remote sensing and low cost. In this study, a damage detection method based on a high-speed camera is proposed. The system setup comprises a high-speed camera and a line laser which can capture the out-of-plane displacement of a cantilever beam. A cantilever beam with an artificial crack was excited and the vibration process was recorded by the camera. A methodology called motion magnification, which can amplify subtle motions in a video, is used for modal identification of the beam. A finite element model was used for validation of the proposed method. Suggestions for applications of this methodology and challenges in future work are discussed.
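Once the beam's out-of-plane displacement has been extracted frame by frame (e.g., from the laser-line position, optionally after motion magnification), modal frequencies follow from the spectrum of the displacement time series. A schematic sketch with a synthetic signal standing in for camera-derived data; the frame rate and mode frequencies are assumed example values, not the paper's:

```python
import numpy as np
from scipy.signal import find_peaks

fps = 2000.0                      # assumed high-speed camera frame rate (Hz)
t = np.arange(4096) / fps

# Synthetic stand-in for a measured beam-tip displacement: two modes + noise.
disp = (1.0 * np.sin(2 * np.pi * 18.0 * t)
        + 0.3 * np.sin(2 * np.pi * 112.0 * t)
        + 0.05 * np.random.default_rng(1).standard_normal(t.size))

spec = np.abs(np.fft.rfft(disp * np.hanning(disp.size)))
freqs = np.fft.rfftfreq(disp.size, d=1.0 / fps)

# Spectral peaks estimate the natural frequencies; a crack shifts these
# frequencies relative to an undamaged reference beam.
idx, _ = find_peaks(spec, prominence=0.1 * spec.max())
print("estimated modal frequencies (Hz):", freqs[idx])
```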
NASA Astrophysics Data System (ADS)
Rumbaugh, Roy N.; Grealish, Kevin; Kacir, Tom; Arsenault, Barry; Murphy, Robert H.; Miller, Scott
2003-09-01
A new 4th generation MicroIR architecture is introduced as the latest in the highly successful Standard Camera Core (SCC) series by BAE SYSTEMS to offer an infrared imaging engine with greatly reduced size, weight, power, and cost. The advanced SCC500 architecture provides great flexibility in configuration to include multiple resolutions, an industry standard Real Time Operating System (RTOS) for customer specific software application plug-ins, and a highly modular construction for unique physical and interface options. These microbolometer based camera cores offer outstanding and reliable performance over an extended operating temperature range to meet the demanding requirements of real-world environments. A highly integrated lens and shutter is included in the new SCC500 product enabling easy, drop-in camera designs for quick time-to-market product introductions.
Lane Level Localization; Using Images and HD Maps to Mitigate the Lateral Error
NASA Astrophysics Data System (ADS)
Hosseinyalamdary, S.; Peter, M.
2017-05-01
In urban canyons where the GNSS signals are blocked by buildings, the accuracy of the measured position deteriorates significantly. GIS databases have frequently been utilized to improve the accuracy of the measured position using map matching approaches, in which the measured position is projected onto the road links (centerlines) and the lateral error of the measured position is reduced. With advances in data acquisition, high definition maps are now generated that contain extra information, such as road lanes, which can be utilized to mitigate the positional error and improve positional accuracy. In this paper, the image content of a camera mounted on the platform is utilized to detect the road boundaries in the image. We apply color masks to detect the road marks, apply the Hough transform to fit lines to the left and right road boundaries, find the corresponding road segment in the GIS database, estimate the homography transformation between the global and image coordinates of the road boundaries, and estimate the camera pose with respect to the global coordinate system. The proposed approach is evaluated on a benchmark. The position is measured by a smartphone's GPS receiver, images are taken from the smartphone's camera, and the ground truth is provided by using the Real-Time Kinematic (RTK) technique. Results show the proposed approach significantly improves the accuracy of the measured GPS position. The error in the measured GPS position, with average and standard deviation of 11.323 and 11.418 meters, is reduced to an error in the estimated position with average and standard deviation of 6.725 and 5.899 meters.
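The detection-plus-homography chain described here maps directly onto standard OpenCV calls. A condensed sketch follows; the color thresholds, input frame, and the four road/image correspondences are assumed placeholders, not the paper's calibration:

```python
import cv2
import numpy as np

img = cv2.imread("road_frame.png")                 # hypothetical dashcam frame
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Color mask for white road marks (thresholds are assumptions to tune).
mask = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))
edges = cv2.Canny(mask, 50, 150)

# Fit candidate boundary lines with the probabilistic Hough transform;
# the segments would then be grouped into left and right boundaries.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                        minLineLength=80, maxLineGap=20)
print("line segments found:", 0 if lines is None else len(lines))

# Homography between image points on the two boundaries and their global
# (road-plane) coordinates taken from the HD map; all values are assumed.
img_pts = np.float32([[310, 700], [540, 420], [930, 700], [700, 420]])
map_pts = np.float32([[-1.8, 0], [-1.8, 30], [1.8, 0], [1.8, 30]])
H, _ = cv2.findHomography(img_pts, map_pts)

# Project the bottom-center pixel onto the road plane to read off the
# camera's lateral position within the lane.
px = np.float32([[[img.shape[1] / 2, img.shape[0] - 1]]])
print("lateral offset in lane (m):", cv2.perspectiveTransform(px, H)[0, 0, 0])
```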
Cone photoreceptor definition on adaptive optics retinal imaging
Muthiah, Manickam Nick; Gias, Carlos; Chen, Fred Kuanfu; Zhong, Joe; McClelland, Zoe; Sallo, Ferenc B; Peto, Tunde; Coffey, Peter J; da Cruz, Lyndon
2014-01-01
Aims To quantitatively analyse cone photoreceptor matrices on images captured on an adaptive optics (AO) camera and assess their correlation to well-established parameters in the retinal histology literature. Methods High resolution retinal images were acquired from 10 healthy subjects, aged 20–35 years old, using an AO camera (rtx1, Imagine Eyes, France). Left eye images were captured at 5° of retinal eccentricity, temporal to the fovea for consistency. In three subjects, images were also acquired at 0, 2, 3, 5 and 7° retinal eccentricities. Cone photoreceptor density was calculated following manual and automated counting. Inter-photoreceptor distance was also calculated. Voronoi domain and power spectrum analyses were performed for all images. Results At 5° eccentricity, the cone density (cones/mm2 mean±SD) was 15.3±1.4×103 (automated) and 13.9±1.0×103 (manual) and the mean inter-photoreceptor distance was 8.6±0.4 μm. Cone density decreased and inter-photoreceptor distance increased with increasing retinal eccentricity from 2 to 7°. A regular hexagonal cone photoreceptor mosaic pattern was seen at 2, 3 and 5° of retinal eccentricity. Conclusions Imaging data acquired from the AO camera match cone density, intercone distance and show the known features of cone photoreceptor distribution in the pericentral retina as reported by histology, namely, decreasing density values from 2 to 7° of eccentricity and the hexagonal packing arrangement. This confirms that AO flood imaging provides reliable estimates of pericentral cone photoreceptor distribution in normal subjects. PMID:24729030
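Automated counting of the kind reported above usually reduces to detecting bright local maxima no closer than the expected cone spacing, then converting counts to density with the pixel scale. A rough sketch with scikit-image; the image here is synthetic, and the spacing and pixel-scale numbers are assumptions, not the study's values:

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.filters import gaussian

# Synthetic stand-in for an AO image patch; in practice, load the
# registered AO frame at the desired retinal eccentricity.
rng = np.random.default_rng(2)
img = gaussian(rng.random((300, 300)), sigma=2)   # smooth to cone-like blobs

# Cones appear as bright local maxima; enforce a minimum inter-cone
# spacing in pixels (derived from the ~8.6 um spacing and pixel scale).
peaks = peak_local_max(img, min_distance=5, threshold_rel=0.6)

um_per_px = 1.7                                   # assumed sampling (um/pixel)
area_mm2 = (img.shape[0] * um_per_px * 1e-3) * (img.shape[1] * um_per_px * 1e-3)
print("cones detected:", len(peaks))
print("density (cones/mm^2):", len(peaks) / area_mm2)
```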
Assessment of skin wound healing with a multi-aperture camera
NASA Astrophysics Data System (ADS)
Nabili, Marjan; Libin, Alex; Kim, Loan; Groah, Susan; Ramella-Roman, Jessica C.
2009-02-01
A clinical trial was conducted at the National Rehabilitation Hospital on 15 individuals to assess whether Rheparan Skin, a bio-engineered component of the extracellular matrix of the skin, is effective at promoting healing of a variety of wounds. Along with standard clinical outcome measures, a spectroscopic camera was used to assess the efficacy of Rheparan Skin. Gauzes soaked with Rheparan Skin were placed on volunteers' wounds for 5 minutes twice weekly for four weeks. Images of the wounds were taken using a multispectral camera and a digital camera at baseline and weekly thereafter. Spectral images collected at different wavelengths were combined with optical skin models to quantify parameters of interest such as oxygen saturation (SO2), water content, and melanin concentration. A digital wound measurement system (VERG) was also used to measure the size of the wound. 9 of the 15 measured subjects showed a definitive improvement post treatment in the form of a decrease in wound area. 7 of these 9 individuals also showed an increase in oxygen saturation in the ulcerated area during the trial. A similar trend was seen in other metrics. Spectral imaging of skin wounds can be a valuable tool to establish wound-healing trends and to clarify healing mechanisms.
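Oxygen saturation maps like these are typically obtained by inverting a Beer-Lambert-type absorption model at wavelengths where oxy- and deoxyhemoglobin extinction differ. The sketch below shows the two-wavelength version of that inversion; the extinction coefficients and absorbance values are illustrative placeholders, not the calibration used in this study:

```python
import numpy as np

# Extinction coefficients of HbO2 and Hb at two wavelengths (illustrative
# placeholder values; real analyses use tabulated spectra).
#               HbO2     Hb
E = np.array([[ 290.0, 1060.0],    # wavelength 1: Hb absorbs more
              [1200.0,  800.0]])   # wavelength 2: HbO2 absorbs more

# Measured absorbances at the two wavelengths for one pixel (illustrative).
A = np.array([0.85, 1.10])

# Solve A = E @ [c_HbO2, c_Hb]; the unknown path length folds into both
# concentrations and cancels in the saturation ratio.
c_hbo2, c_hb = np.linalg.solve(E, A)
so2 = c_hbo2 / (c_hbo2 + c_hb)
print(f"estimated SO2: {so2:.1%}")
```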
Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System
NASA Astrophysics Data System (ADS)
Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki
In this paper, we present an automatic vision-based traffic sign recognition system, which can detect and classify traffic signs at long distance under different lighting conditions. To realize this purpose, the traffic sign recognition is developed in an originally proposed dual-focal active camera system, in which a telephoto camera is equipped as an assistant to a wide angle camera. The telephoto camera can capture a high accuracy image of an object of interest in the view field of the wide angle camera, providing enough information for recognition when the resolution of the traffic sign in the wide angle camera image is too low. In the proposed system, traffic sign detection and classification are processed separately on the images from the wide angle camera and the telephoto camera. Besides, in order to detect traffic signs against complex backgrounds under different lighting conditions, we propose a color transformation which is invariant to lighting changes. This color transformation highlights the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide angle camera. After detection, the system actively captures a high accuracy image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on the information from the wide angle camera. Moreover, in classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high accuracy image from the telephoto camera. Finally, based on the proposed system, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution under different lighting conditions.
Collection and Analysis of Crowd Data with Aerial, Rooftop, and Ground Views
2014-11-10
Researchers collected these datasets using different aircraft. The Erista 8 HL OctaCopter is a heavy-lift aerial platform capable of carrying high-resolution cinema-grade cameras. The Blackmagic Production Camera is another high-resolution, cinema-grade camera, capable of capturing video at 4K resolution at 30 frames per second, used for crowd counting with 4K cameras.
NASA Astrophysics Data System (ADS)
Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute
1998-04-01
Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bits per channel, with a variable exposure time of 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 by 2048 pixels (12 MByte), 8 seconds for an image of 4096 by 4096 pixels (48 MByte) and 40 seconds for an image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, most commercial lenses can be connected to the camera via existing lens adaptors. On the other hand, the eyelike can be used as a digital back on most commercial 4 by 5 inch view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.
NASA Astrophysics Data System (ADS)
Sato, M.; Takahashi, Y.; Kudo, T.; Yanagi, Y.; Kobayashi, N.; Yamada, T.; Project, N.; Stenbaek-Nielsen, H. C.; McHarg, M. G.; Haaland, R. K.; Kammae, T.; Cummer, S. A.; Yair, Y.; Lyons, W. A.; Ahrns, J.; Yukman, P.; Warner, T. A.; Sonnenfeld, R. G.; Li, J.; Lu, G.
2011-12-01
The time evolution and spatial distribution of transient luminous events (TLEs) are the key parameters for identifying the relationship between TLEs and their parent lightning discharges, the roles of electromagnetic pulses (EMPs) emitted by horizontal and vertical lightning currents in the formation of TLEs, and the occurrence conditions and mechanisms of TLEs. Since the time scales of TLEs are typically less than a few milliseconds, a new imaging technique that enables capturing images with a high time resolution of < 1 ms is needed. By courtesy of the "Cosmic Shore" project conducted by the Japan Broadcasting Corporation (NHK), we carried out optical observations using a high-speed image-intensified (II) CMOS camera and a high-vision three-CCD camera from a jet aircraft on November 28 and December 3, 2010 in winter Japan. The high-speed II-CMOS camera captures images at 8,300 frames per second (fps), which corresponds to a time resolution of 120 μs. The high-vision three-CCD camera captures high quality, true color images of TLEs with a 1920x1080 pixel size at a frame rate of 30 fps. During the two observation flights, we detected 28 sprite events and 3 elves events in total. Following this success, we conducted a combined aircraft and ground-based campaign of TLE observations in the High Plains in the summer US, installing the same NHK high-speed and high-vision cameras in a jet aircraft. In the period from June 27 to July 10, 2011, we operated aircraft observations on 8 nights, captured TLE images of over a hundred events with the high-vision camera, and acquired over 40 high-speed image sequences simultaneously. In the presentation, we will introduce the outlines of the two aircraft campaigns, describe the characteristics of the time evolution and spatial distributions of TLEs observed in winter Japan, and show initial results of the high-speed image data analysis of TLEs in the summer US.
Single-snapshot 2D color measurement by plenoptic imaging system
NASA Astrophysics Data System (ADS)
Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana
2014-03-01
Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor high color fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing and color estimation processing. Optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading evaluation of displays and show that it achieves a color accuracy of ΔE<0.01.
High-Resolution Mars Camera Test Image of Moon (Infrared)
NASA Technical Reports Server (NTRS)
2005-01-01
This crescent view of Earth's Moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The mission's High Resolution Imaging Science Experiment camera took the image on Sept. 8, 2005, while at a distance of about 10 million kilometers (6 million miles) from the Moon. The dark feature on the right is Mare Crisium. From that distance, the Moon would appear as a star-like point of light to the unaided eye. The test verified the camera's focusing capability and provided an opportunity for calibration. The spacecraft's Context Camera and Optical Navigation Camera also performed as expected during the test. The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.
Cinematic camera emulation using two-dimensional color transforms
NASA Astrophysics Data System (ADS)
McElvain, Jon S.; Gish, Walter
2015-02-01
For cinematic and episodic productions, on-set look management is an important component of the creative process, and involves iterative adjustments of the set, actors, lighting and camera configuration. Instead of using the professional motion capture device to establish a particular look, the use of a smaller form factor DSLR is considered for this purpose due to its increased agility. Because the spectral response characteristics will be different between the two camera systems, a camera emulation transform is needed to approximate the behavior of the destination camera. Recently, two-dimensional transforms have been shown to provide high-accuracy conversion of raw camera signals to a defined colorimetric state. In this study, the same formalism is used for camera emulation, whereby a Canon 5D Mark III DSLR is used to approximate the behavior of a Red Epic cinematic camera. The spectral response characteristics for both cameras were measured and used to build 2D as well as 3x3 matrix emulation transforms. When tested on multispectral image databases, the 2D emulation transforms outperform their matrix counterparts, particularly for images containing highly chromatic content.
a Spatio-Spectral Camera for High Resolution Hyperspectral Imaging
NASA Astrophysics Data System (ADS)
Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.
2017-08-01
Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.
NASA Astrophysics Data System (ADS)
Sun, Jiwen; Wei, Ling; Fu, Danying
2002-01-01
The camera is designed for high resolution and wide swath. In order to assure that its high optical precision survives the rigorous dynamic loads of launch, it must have high structural rigidity; therefore, a careful study of the dynamic behavior of the camera structure was performed. A precise CAD model of the camera was built in Pro/E, and an interference examination was performed on it to refine the structural design. The structural dynamic analysis of the camera was accomplished, for the first time in China, by applying the structural analysis codes PATRAN and NASTRAN. The main research items include: 1) comparative modal analysis of the critical structure of the camera using 4-node and 10-node tetrahedral elements, respectively, to confirm the most reasonable general model; 2) modal analysis of the camera for several cases, yielding the inherent frequencies and mode shapes and confirming the rationality of the structural design; 3) static analysis of the camera under self-gravity and overloads, yielding the corresponding deformation and stress distributions; 4) response calculation for sinusoidal vibration, yielding the response curves and the maximum acceleration responses with their corresponding frequencies. The software-based analysis technique is accurate and efficient. Based on the sensitivity results, the dynamic design and engineering optimization of the critical structure of the camera are discussed, providing fundamental technology for the design of forthcoming space optical instruments.
Performance benefits and limitations of a camera network
NASA Astrophysics Data System (ADS)
Carr, Peter; Thomas, Paul J.; Hornsey, Richard
2005-06-01
Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10^7-10^8 'pixels' per image. At the other extreme are insect eyes where the field of view is segmented into 10^3-10^5 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10^5 imagers, each with about 10^4-10^5 pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10^9 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.
Application of infrared camera to bituminous concrete pavements: measuring vehicle
NASA Astrophysics Data System (ADS)
Janků, Michal; Stryk, Josef
2017-09-01
Infrared thermography (IR) has been used for decades in certain fields. However, the technological level of measuring devices has not been sufficient for some applications. Over recent years, good quality thermal cameras with high resolution and very high thermal sensitivity have appeared on the market. Developments in measuring technology have allowed the use of infrared thermography in new fields and by a larger number of users. This article describes research in progress at the Transport Research Centre focused on the use of infrared thermography for diagnostics of bituminous road pavements. A measuring vehicle, equipped with a thermal camera, digital camera and GPS sensor, was designed for the diagnostics of pavements. New, highly sensitive thermal cameras allow measurement of very small temperature differences from the moving vehicle. This study shows the potential of high-speed inspection without lane closures using IR thermography.
The kinematics of the California sea lion foreflipper during forward swimming.
Friedman, C; Leftwich, M C
2014-11-07
To determine the two-dimensional kinematics of the California sea lion foreflipper during thrust generation, digital, high-definition video was obtained using a non-research female sea lion at the Smithsonian National Zoological Park in Washington, DC. The observational videos are used to extract maneuvers of interest--forward acceleration from rest using the foreflippers, and banked turns. Single-camera videos are analyzed to digitize the flipper during the motions using 10 points spanning root to tip in each frame. Digitized shapes were then fitted with an empirical function that allows both for quantitative comparison between different claps and for extraction of kinematic data. The resulting function shows a high degree of curvature (with a camber of up to 32%). Analysis of sea lion acceleration from rest shows thrust production in the range of 150-680 N and maximum flipper angular velocity (for rotation about the shoulder joint) as high as 20 rad s⁻¹. Analysis of turning maneuvers indicates extreme agility and precision of movement driven by the foreflipper surfaces.
Toward an image compression algorithm for the high-resolution electronic still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.
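The noise argument can be made concrete: the low-order bits of a noisy sensor are effectively random and incompressible, so the achievable lossless ratio collapses as noise grows. A small self-contained illustration with zlib on a synthetic image (the scene and noise levels are arbitrary assumptions):

```python
import zlib
import numpy as np

rng = np.random.default_rng(3)
y, x = np.mgrid[0:512, 0:512]
clean = 128 + 100 * np.sin(x / 40.0) * np.cos(y / 40.0)   # smooth synthetic scene

for noise_sigma in (0.0, 1.0, 4.0, 16.0):
    img = clean + rng.normal(0.0, noise_sigma, clean.shape)
    raw = np.clip(img, 0, 255).astype(np.uint8).tobytes()
    ratio = len(raw) / len(zlib.compress(raw, 9))
    print(f"sensor noise sigma={noise_sigma:5.1f}  lossless ratio={ratio:5.1f}:1")
```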
NASA Astrophysics Data System (ADS)
Ueno, Yuichiro; Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi; Fujishima, Yasutake; Yoshida, Akira; Umegaki, Kikuo
2018-06-01
We developed a pinhole type gamma camera, using a compact detector module with a pixelated CdTe semiconductor, which has suitable sensitivity and quantitative accuracy for low dose rate fields. In order to improve the sensitivity of the pinhole type semiconductor gamma camera, we adopted three methods: a signal processing method that allows the discrimination level to be set lower, a high sensitivity pinhole collimator, and a smoothing image filter that improves the efficiency of source identification. We tested the basic performance of the developed gamma camera and carefully examined the effects of the three methods. From the sensitivity test, we found that the effective sensitivity was about 21 times higher than that of the gamma camera for high dose rate fields which we had previously developed. We confirmed that the gamma camera had sufficient sensitivity and high quantitative accuracy; for example, a weak hot spot (0.9 μSv/h) around a tree root could be detected within 45 min in a low dose rate field test, and errors of measured dose rates with point sources were less than 7% in a dose rate accuracy test.
NASA Technical Reports Server (NTRS)
Kim, Won S.; Bejczy, Antal K.
1993-01-01
A highly effective predictive/preview display technique for telerobotic servicing in space under several seconds communication time delay has been demonstrated on a large laboratory scale in May 1993, involving the Jet Propulsion Laboratory as the simulated ground control station and, 2500 miles away, the Goddard Space Flight Center as the simulated satellite servicing set-up. The technique is based on a high-fidelity calibration procedure that enables a high-fidelity overlay of 3-D graphics robot arm and object models over given 2-D TV camera images of robot arm and objects. To generate robot arm motions, the operator can confidently interact in real time with the graphics models of the robot arm and objects overlaid on an actual camera view of the remote work site. The technique also enables the operator to generate high-fidelity synthetic TV camera views showing motion events that are hidden in a given TV camera view or for which no TV camera views are available. The positioning accuracy achieved by this technique for a zoomed-in camera setting was about +/-5 mm, well within the allowable +/-12 mm error margin at the insertion of a 45 cm long tool in the servicing task.
The effect of microchannel plate gain depression on PAPA photon counting cameras
NASA Astrophysics Data System (ADS)
Sams, Bruce J., III
1991-03-01
PAPA (precision analog photon address) cameras are photon counting imagers which employ microchannel plates (MCPs) for image intensification. They have been used extensively in astronomical speckle imaging. The PAPA camera can produce artifacts when light incident on its MCP is highly concentrated. The effect is exacerbated by adjusting the strobe detection level too low, so that the camera accepts very small MCP pulses. The artifacts can occur even at low total count rates if the image has a highly concentrated bright spot. This paper describes how to optimize PAPA camera electronics, and describes six techniques which can avoid or minimize addressing errors.
Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Kil-Byoung; Bellan, Paul M.
2013-12-15
An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10^6 frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.
Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras
2017-10-01
US Army Research Laboratory report ARL-TR-8185, October 2017: Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras, by Caitlin P Conn and Geoffrey H Goldman; reporting period June 2016 - October 2017.
High-immersion three-dimensional display of the numerical computer model
NASA Astrophysics Data System (ADS)
Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu
2013-08-01
High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as designing and constructing buildings, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, and military areas. However, most technologies provide 3D display in front of screens that are parallel with the walls, and the sense of immersion is decreased. To get a correct multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the common focus plane, and the cameras' optical axes should be offset toward the center of the common focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, which are ideal pinhole cameras, to display 3D models in a computer system, and virtual cameras can simulate the shooting method for multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the position of the viewer's eyes in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras. The near clip plane setting is the main point in the first method, while the rotation angle of the virtual cameras is the main point in the second method. In order to validate the results, we use D3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for viewing horizontally, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with the real objects in the real world.
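The "offset perspective projection" referred to above is an asymmetric viewing frustum: the near-plane window is shifted so the optical axis no longer passes through the image center, which lets the camera stay aimed at the common focus plane. A sketch of the corresponding OpenGL-style matrix in NumPy; the frustum bounds and offset are assumed example values, not parameters from the paper:

```python
import numpy as np

def offset_perspective(left, right, bottom, top, near, far):
    """OpenGL-style asymmetric (off-axis) frustum matrix; equivalent to glFrustum."""
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0]])

# A symmetric frustum has right = -left; shifting both horizontal bounds
# sideways (here by 0.3 at the near plane) offsets the projection toward
# the center of the common focus plane, as the ground display requires.
P = offset_perspective(-1.0 + 0.3, 1.0 + 0.3, -0.75, 0.75, near=1.0, far=100.0)
print(P.round(3))
```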
NASA Technical Reports Server (NTRS)
Norgard, John D.
2012-01-01
For future NASA Manned Space Exploration of the Moon and Mars, a blunt body capsule, called the Orion Crew Exploration Vehicle (CEV), composed of a Crew Module (CM) and a Service Module (SM), with a parachute descent assembly is planned for reentry back to Earth. A Capsule Parachute Assembly System (CPAS) is being developed for preliminary parachute drop tests at the Yuma Proving Ground (YPG) to simulate high-speed reentry to Earth from beyond Low-Earth-Orbit (LEO) and to provide measurements of landing parameters and parachute loads. The avionics systems on CPAS also provide mission critical firing events to deploy, reef, and release the parachutes in three stages (extraction, drogues, mains) using mortars and pressure cartridge assemblies. In addition, a Mid-Air Delivery System (MDS) is used to separate the capsule from the sled that is used to eject the capsule from the back of the drop plane. Also, high-speed and high-definition cameras in a Video Camera System (VCS) are used to film the drop plane extraction and parachute landing events. To verify Electromagnetic Compatibility (EMC) of the CPAS system from unintentional radiation, Electromagnetic Interference (EMI) measurements are being made inside a semi-anechoic chamber at NASA/JSC at 1m from the electronic components of the CPAS system. In addition, EMI measurements of the integrated CPAS system are being made inside a hangar at YPG. These near-field B-Dot probe measurements on the surface of a parachute simulator (DART) are being extrapolated outward to the 1m standard distance for comparison to the MIL-STD radiated emissions limit.
Development of two-framing camera with large format and ultrahigh speed
NASA Astrophysics Data System (ADS)
Jiang, Xiaoguo; Wang, Yuan; Wang, Yi
2012-10-01
High-speed imaging facilities are important and necessary for building time-resolved measurement systems with multi-framing capability. A framing camera that satisfies the demands of both high speed and large format needs to be specially developed for ultrahigh speed research. A two-framing camera system with high sensitivity and time resolution has been developed and used for the diagnosis of electron beam parameters of the Dragon-I linear induction accelerator (LIA). The camera system, which adopts the principle of beam splitting in the image space behind a long-focal-length lens, mainly consists of a lens-coupled gated image intensifier, a CCD camera and a high-speed shutter trigger device based on a programmable integrated circuit. The fastest gating time is about 3 ns, and the interval time between the two frames can be adjusted discretely in steps of 0.5 ns. Both the gating time and the interval time can be tuned independently up to a maximum of about 1 s. Two images, each of 1024×1024 pixels, can be captured simultaneously with our camera. Besides, this camera system possesses good linearity, uniform spatial response and an equivalent background illumination as low as 5 electrons/pix/sec, which fully meets the measurement requirements of the Dragon-I LIA.
Towards continuous monitoring of pulse rate in neonatal intensive care unit with a webcam.
Mestha, Lalit K; Kyal, Survi; Xu, Beilei; Lewis, Leslie Edward; Kumar, Vijay
2014-01-01
We describe a novel method to continuously monitor the pulse rate (PR) of patients in a neonatal intensive care unit (NICU) using videos taken from a high definition (HD) webcam. We describe algorithms that determine PR from videoplethysmographic (VPG) signals extracted from multiple regions of interest (ROI) simultaneously available within the field of view of the camera where the cardiac signal is registered. We detect motion from video images and compensate for motion artifacts in each ROI. Preliminary clinical results are presented for 8 neonates, each with 30 minutes of uninterrupted video. Comparisons to hospital equipment indicate that the proposed technology can meet medical industry standards and offer improved patient comfort and ease of use for practitioners when instrumented with proper hardware.
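The core of a VPG pipeline of this kind is short: average the green channel over a skin ROI frame by frame, band-pass around plausible heart rates, and read the pulse rate off the dominant spectral peak. A stripped-down sketch follows; the ROI location, capture source, and the neonatal band limits are assumptions, not the paper's algorithm:

```python
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

fps, n_frames = 30.0, 900                  # 30 s of video (assumed webcam rate)
cap = cv2.VideoCapture(0)                  # or a recorded HD video file

trace = []
for _ in range(n_frames):
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:200, 200:300]          # assumed skin region of interest
    trace.append(roi[:, :, 1].mean())      # green channel carries most VPG signal
cap.release()
if not trace:
    raise SystemExit("no frames captured")

sig = np.asarray(trace) - np.mean(trace)
# Neonatal pulse rates run roughly 100-200 bpm, so pass 1.5-3.5 Hz (assumed band).
b, a = butter(3, [1.5 / (fps / 2), 3.5 / (fps / 2)], btype="band")
filtered = filtfilt(b, a, sig)

spec = np.abs(np.fft.rfft(filtered * np.hanning(filtered.size)))
freqs = np.fft.rfftfreq(filtered.size, d=1.0 / fps)
print("pulse rate estimate (bpm):", 60.0 * freqs[np.argmax(spec)])
```

A multi-ROI version would run this trace extraction per region and fuse the per-region estimates, discarding regions flagged by the motion-artifact check.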
Space Infrared Telescope Facility (SIRTF) science instruments
NASA Technical Reports Server (NTRS)
Ramos, R.; Hing, S. M.; Leidich, C. A.; Fazio, G.; Houck, J. R.
1989-01-01
Concepts of scientific instruments designed to perform infrared astronomical tasks such as imaging, photometry, and spectroscopy are discussed as part of the Space Infrared Telescope Facility (SIRTF) project under definition study at NASA/Ames Research Center. The instruments are: the multiband imaging photometer, the infrared array camera, and the infrared spectrograph. SIRTF, a cryogenically cooled infrared telescope with an aperture in the 1-meter range, operating at wavelengths as short as 2.5 microns and carrying multiple instruments with high sensitivity and low background performance, provides the capability to carry out basic astronomical investigations such as the deep search for very distant protogalaxies, quasi-stellar objects, and missing mass; infrared emission from galaxies; star formation and the interstellar medium; and the composition and structure of the atmospheres of the outer planets in the solar system.
Spatio-temporal patterns of sediment particle movement on 2D and 3D bedforms
NASA Astrophysics Data System (ADS)
Tsubaki, Ryota; Baranya, Sándor; Muste, Marian; Toda, Yuji
2018-06-01
An experimental study was conducted to explore sediment particle motion in an open channel and its relationship to bedform characteristics. High-definition submerged video cameras were utilized to record images of particle motion over a dune's length scale. Image processing was conducted to account for illumination heterogeneity due to bedform geometric irregularity and light reflection at the water's surface. Moving particles were then identified using a customized algorithm, and the instantaneous velocity distribution of sediment particles was evaluated using particle image velocimetry. The experimental results indicate that the motion of sediment particles atop dunes differs depending on dune geometry (i.e., two-dimensional or three-dimensional). Sediment motion and its relationship to dune shape and dynamics are also discussed.
CZT Detector Development for New Generation Hard-X Astronomical Instruments
NASA Astrophysics Data System (ADS)
Uslenghi, Michela; Conti, Giancarlo; D'Angelo, Sergio; Fiorini, Mauro; Quadrini, Egidio M.; Natalucci, Lorenzo; Ubertini, Pietro
2006-04-01
In the context of the definition of a future European gamma-ray mission, following the now on-orbit INTEGRAL observatory, we are carrying out a feasibility study of a Gamma Ray Wide Field Camera (5-500 keV) for transient event detection. Recent achievements in high energy astronomy have validated the performance of CZT detectors in terms of good spatial resolution, detection efficiency, energy resolution, and low noise at room temperature. We have started a development program aimed at exploring ways to improve and optimize the performance of this kind of detector, acting at the level of both the readout system and crystal quality. Preliminary results of the characterization of pixelated crystals provided by IMARAD (now Orbotech) are presented, along with their analysis and interpretation based on an analytical model of signal formation.
High-performance dual-speed CCD camera system for scientific imaging
NASA Astrophysics Data System (ADS)
Simpson, Raymond W.
1996-03-01
Traditionally, scientific camera systems were partitioned with a 'camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 × 10⁶ or 5 × 10⁶ pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10⁵ pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber optic link.
The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover
NASA Astrophysics Data System (ADS)
Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.
The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with 65° field-of-view (1.1 mrad/pixel) and high resolution (85 µrad/pixel) monoscopic "zoom" images with 5° field-of-view. The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission as well as providing multispectral geological imaging, colour and stereo panoramic images, solar images for water vapour abundance and dust optical depth measurements and to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. Additionally the High Resolution Camera (HRC) can be used for high resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.
Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan
2015-11-01
To support the analysis, interpretation, and evaluation of microscopic images used in medical diagnostics and forensic science, video images for educational purposes were made at a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as high-definition video (1920 × 1080 pixels). The unprecedentedly high resolution makes it possible to see details that remain invisible in any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make them suitable for self-study.
High Scalability Video ISR Exploitation
2012-10-01
Surveillance, ARGUS) on the National Image Interpretability Rating Scale (NIIRS) at level 6. Ultra-high quality cameras like the Digital Cinema 4K (DC-4K), which can resolve objects smaller than people, will be available for purchase and use in the field. However, even if such a UAV sensor with a DC-4K was flown...
HALO: a reconfigurable image enhancement and multisensor fusion system
NASA Astrophysics Data System (ADS)
Wu, F.; Hickman, D. L.; Parker, Steve J.
2014-06-01
Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues, by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.
Microchannel plate streak camera
Wang, Ching L.
1989-01-01
An improved streak camera in which a microchannel plate electron multiplier is used in place of, or in combination with, the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (UV to gamma rays) than the conventional x-ray streak camera, which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1000 keV x-rays.
Image Alignment for Multiple Camera High Dynamic Range Microscopy.
Eastwood, Brian S; Childs, Elisabeth C
2012-01-09
This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
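The alignment approach the abstract favors — matching feature descriptors on radiance images derived from calibrated cameras — can be sketched with OpenCV. This is a hedged illustration, not the authors' code: the paper does not name the descriptor, so ORB stands in here, and `inv_crf` (a 256-entry lookup table from pixel value to relative radiance) and the exposure-time arguments are assumed inputs.

```python
import cv2
import numpy as np

def align_radiance(img_a, img_b, exposure_a, exposure_b, inv_crf):
    """Estimate a homography aligning two differently exposed 8-bit images
    by matching features in radiant-power space. inv_crf maps pixel value
    (0-255) to relative radiance; dividing by exposure time makes the two
    images photometrically comparable despite the exposure difference."""
    rad_a = inv_crf[img_a] / exposure_a
    rad_b = inv_crf[img_b] / exposure_b
    # Convert to 8-bit log-radiance images for the feature detector
    to8 = lambda r: cv2.normalize(np.log1p(r), None, 0, 255,
                                  cv2.NORM_MINMAX).astype(np.uint8)
    orb = cv2.ORB_create(2000)
    ka, da = orb.detectAndCompute(to8(rad_a), None)
    kb, db = orb.detectAndCompute(to8(rad_b), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
    src = np.float32([ka[m.queryIdx].pt for m in matches])
    dst = np.float32([kb[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```

The key design point is that matching happens after the camera response is inverted, so descriptors see comparable intensities even across large exposure gaps.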
External Vision Systems (XVS) Proof-of-Concept Flight Test Evaluation
NASA Technical Reports Server (NTRS)
Shelton, Kevin J.; Williams, Steven P.; Kramer, Lynda J.; Arthur, Jarvis J.; Prinzel, Lawrence, III; Bailey, Randall E.
2014-01-01
NASA's Fundamental Aeronautics Program, High Speed Project is performing research, development, test and evaluation of flight deck and related technologies to support future low-boom, supersonic configurations (without forward-facing windows) by use of an eXternal Vision System (XVS). The challenge of XVS is to determine a combination of sensor and display technologies which can provide an equivalent level of safety and performance to that provided by forward-facing windows in today's aircraft. This flight test was conducted with the goal of obtaining performance data on see-and-avoid and see-to-follow traffic using a proof-of-concept XVS design in actual flight conditions. Six data collection flights were flown in four traffic scenarios against two different-sized participating traffic aircraft. This test utilized a 3x1 array of High Definition (HD) cameras, with a fixed forward field-of-view, mounted on NASA Langley's UC-12 test aircraft. Test scenarios, with participating NASA aircraft serving as traffic, were presented to two evaluation pilots per flight - one using the proof-of-concept (POC) XVS and the other looking out the forward windows. The camera images were presented on the XVS display in the aft cabin with Head-Up Display (HUD)-like flight symbology overlaying the real-time imagery. The test generated XVS performance data, including comparisons to natural vision; post-run subjective acceptability data were also collected. This paper discusses the flight test activities and their operational challenges, and summarizes the findings to date.
Automatic source camera identification using the intrinsic lens radial distortion
NASA Astrophysics Data System (ADS)
Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.
2006-11-01
Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of a source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
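The classification step described here — per-image radial distortion parameters fed to a support vector machine — can be demonstrated end to end. The sketch below uses synthetic stand-ins for the aberration measurements (each camera gets characteristic (k1, k2) coefficients, with per-image estimation noise), since the paper's actual feature extractor is not reproduced; only the SVM pipeline is illustrated.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the per-image aberration measurements: each
# camera's lens has characteristic radial distortion coefficients
# (k1, k2), and individual images yield noisy estimates of them.
true_k = rng.normal(0.0, 0.1, size=(5, 2))            # 5 camera classes
X = np.vstack([k + rng.normal(0, 0.01, size=(200, 2)) for k in true_k])
y = np.repeat(np.arange(5), 200)                      # camera labels

# Train/test an RBF-kernel SVM, as in the paper's classification stage
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
print("identification accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

With well-separated lens signatures the cross-validated accuracy approaches 1.0; in the real task, accuracy depends on how stable the distortion estimates are across scene content and zoom level, which is exactly the sensitivity the abstract reports on.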
Event-Driven Random-Access-Windowing CCD Imaging System
NASA Technical Reports Server (NTRS)
Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William
2004-01-01
A charge-coupled-device (CCD) based high-speed imaging system, called a realtime, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in High-Frame-Rate CCD Camera Having Subwindow Capability (NPO-30564) NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/transistor-logic (TTL)-level signals from a field programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).
A novel simultaneous streak and framing camera without principle errors
NASA Astrophysics Data System (ADS)
Jingzhen, L.; Fengshan, S.; Ningwen, L.; Xiangdong, G.; Bin, H.; Qingyang, W.; Hongyi, C.; Yi, C.; Xiaowei, L.
2018-02-01
A novel simultaneous streak and framing camera with continuous access has been developed; such complete information is far more important for the exact interpretation and precise evaluation of many detonation events and shockwave phenomena. The camera, with a maximum imaging frequency of 2 × 10⁶ fps and a maximum scanning velocity of 16.3 mm/μs, has fine imaging properties: an eigen resolution of over 40 lp/mm in the temporal direction and over 60 lp/mm in the spatial direction with a framing frequency principle error of zero for framing records, and a maximum time resolving power of 8 ns with a scanning velocity nonuniformity of 0.136%-0.277% for streak records. Test data have verified the performance of the camera quantitatively. This camera, which simultaneously obtains frames and a streak record that are parallax-free and share an identical time base, is characterized by a plane optical system at oblique incidence (as distinct from a spatial system), an innovative camera obscura without principle errors, and a high-velocity motor-driven beryllium-like rotating mirror made of high-strength aluminum alloy with a cellular lateral structure. Experiments demonstrate that the camera is very useful and reliable for taking high quality pictures of detonation events.
Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task.
Bott, Nicholas T; Lange, Alex; Rentz, Dorene; Buffalo, Elizabeth; Clopton, Paul; Zola, Stuart
2017-01-01
Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive "window on the brain," and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and 3 FPS built-in web camera at each of the three visits (r = 0.88-0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81-0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88-0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high frame rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as fixation points, built-in web cameras are a standard feature of most smart devices (e.g., laptops, tablets, smart phones) and can be effectively employed to track eye movements on decisional tasks with high accuracy and minimal cost.
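The two statistics this study leans on, Pearson correlation between scoring methods and kappa for inter-rater agreement, are straightforward to compute. A minimal sketch follows; the score values and rater labels are invented for illustration, and note that scikit-learn implements Cohen's kappa, a close relative of the Siegel and Castellan formula the paper cites.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-subject novelty preference scores (fraction of viewing
# time on the novel image) from the two recording methods.
eyetracker_60fps = np.array([0.71, 0.64, 0.58, 0.69, 0.62])
webcam_3fps      = np.array([0.69, 0.66, 0.55, 0.72, 0.60])
r, p = pearsonr(eyetracker_60fps, webcam_3fps)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

# Inter-rater agreement of the manual web-camera scoring: two raters
# classify each gaze sample as left / right / off-screen.
rater1 = ["L", "R", "L", "off", "R", "R", "L"]
rater2 = ["L", "R", "L", "off", "R", "L", "L"]
print("kappa =", cohen_kappa_score(rater1, rater2))
```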
Camera calibration for multidirectional flame chemiluminescence tomography
NASA Astrophysics Data System (ADS)
Wang, Jia; Zhang, Weiguang; Zhang, Yuhong; Yu, Xun
2017-04-01
Flame chemiluminescence tomography (FCT), which combines computerized tomography theory and multidirectional chemiluminescence emission measurements, can realize instantaneous three-dimensional (3-D) diagnostics for flames with high spatial and temporal resolutions. One critical step of FCT is to record the projections by multiple cameras from different view angles. High-accuracy reconstruction requires that the extrinsic parameters (the positions and orientations) and intrinsic parameters (especially the image distances) of the cameras be accurately calibrated first. Taking the focus effect of the camera into account, a modified camera calibration method is presented for FCT, and a 3-D calibration pattern was designed to solve for the parameters. The precision of the method was evaluated by reprojecting feature points onto the cameras using the calibration results. The maximum root mean square error of the feature points' positions is 1.42 pixels, and that of the image distance is 0.0064 mm. An FCT system with 12 cameras was calibrated by the proposed method and the 3-D CH* intensity of a propane flame was measured. The results showed that the FCT system provides reasonable reconstruction accuracy using the cameras' calibration results.
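The evaluation step — reprojecting known 3-D feature points through the calibrated model and reporting RMS pixel error — is generic enough to sketch with OpenCV. This is an illustrative sketch of that acceptance test, not the authors' code; the function name and array shapes are ordinary OpenCV usage.

```python
import cv2
import numpy as np

def reprojection_rmse(obj_pts, img_pts, K, dist, rvec, tvec):
    """RMS reprojection error (pixels) of known 3-D feature points:
    project the calibration target's points through the calibrated
    camera model and compare with their detected image positions.

    obj_pts: (N, 3) target points, img_pts: (N, 2) detections,
    K: 3x3 intrinsic matrix, dist: distortion coefficients,
    rvec/tvec: the camera's extrinsic pose relative to the target.
    """
    proj, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, dist)
    err = proj.reshape(-1, 2) - img_pts.reshape(-1, 2)
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))

# For each of the 12 cameras one would first solve the pose against the
# 3-D pattern, e.g.:  ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
```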
NASA Astrophysics Data System (ADS)
Terzopoulos, Demetri; Qureshi, Faisal Z.
Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.
Optical designs for the Mars '03 rover cameras
NASA Astrophysics Data System (ADS)
Smith, Gregory H.; Hagerott, Edward C.; Scherr, Lawrence M.; Herkenhoff, Kenneth E.; Bell, James F.
2001-12-01
In 2003, NASA is planning to send two robotic rover vehicles to explore the surface of Mars. The spacecraft will land on airbags in different, carefully chosen locations. The search for evidence indicating conditions favorable for past or present life will be a high priority. Each rover will carry a total of ten cameras of five different types. There will be a stereo pair of color panoramic cameras, a stereo pair of wide-field navigation cameras, one close-up camera on a movable arm, two stereo pairs of fisheye cameras for hazard avoidance, and one Sun sensor camera. This paper discusses the lenses for these cameras. Included are the specifications, design approaches, expected optical performances, prescriptions, and tolerances.
Single-pixel camera with one graphene photodetector.
Li, Gongxin; Wang, Wenxue; Wang, Yuechao; Yang, Wenguang; Liu, Lianqing
2016-01-11
Consumer cameras in the megapixel range are ubiquitous, but their improvement is hindered by the poor performance and high cost of traditional photodetectors. Graphene, a two-dimensional micro-/nano-material, has recently exhibited exceptional properties as the sensing element of a photodetector compared with traditional materials. However, it is difficult to fabricate a large-scale array of graphene photodetectors to replace the traditional photodetector array. To take full advantage of the unique characteristics of the graphene photodetector, in this study we integrated a graphene photodetector into a single-pixel camera based on compressive sensing. To begin with, we introduce a method called laser scribing for fabricating the graphene. It produces graphene components in arbitrary patterns more quickly and without the photoresist contamination of traditional methods. Next, we propose a system for calibrating the optoelectrical properties of micro-/nano-photodetectors based on a digital micromirror device (DMD), which changes the light intensity by controlling the number of individual micromirrors positioned at +12°. The calibration sensitivity is driven by the sum of all micromirrors of the DMD and can be as high as 10⁻⁵ A/W. Finally, the single-pixel camera integrated with one graphene photodetector was used to recover a static image, demonstrating the feasibility of a single-pixel imaging system with a graphene photodetector. A high-resolution image can be recovered with the camera at a sampling rate much lower than the Nyquist rate. This study is the first recorded demonstration of a macroscopic camera with a graphene photodetector. The camera has the potential for high-speed and high-resolution imaging at much lower cost than traditional megapixel cameras.
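The single-pixel measurement-and-recovery idea can be shown in a few lines: random binary DMD patterns produce one photodetector reading each, and a sparse image is recovered from far fewer readings than pixels. The toy demonstration below uses ISTA with a DCT sparsity prior; it illustrates the compressive-sensing principle only and is not the reconstruction pipeline used in the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
n = 32 * 32                       # 32x32 scene, one unknown per pixel
m = n // 4                        # only 25% as many measurements as pixels

x = np.zeros((32, 32)); x[8:24, 8:24] = 1.0        # simple test scene
A = rng.choice([0.0, 1.0], size=(m, n))            # binary DMD mirror patterns
y = A @ x.ravel()                                  # photodetector readings

# ISTA: recover a DCT-sparse image from y = A x
alpha = 1.0 / np.linalg.norm(A, 2) ** 2            # step size (1/Lipschitz)
lam = 0.1                                          # sparsity weight
z = np.zeros(n)
for _ in range(300):
    z = z + alpha * A.T @ (y - A @ z)              # gradient step on ||y-Az||²
    c = dctn(z.reshape(32, 32), norm="ortho")      # sparsifying transform
    c = np.sign(c) * np.maximum(np.abs(c) - lam * alpha, 0)   # soft-threshold
    z = idctn(c, norm="ortho").ravel()
print("relative error:", np.linalg.norm(z - x.ravel()) / np.linalg.norm(x))
```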
High-speed line-scan camera with digital time delay integration
NASA Astrophysics Data System (ADS)
Bodenstorfer, Ernst; Fürtler, Johannes; Brodersen, Jörg; Mayer, Konrad J.; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert
2007-02-01
In high-speed image acquisition and processing systems, the speed of operation is often limited by the amount of available light, due to short exposure times. Therefore, high-speed applications often use line-scan cameras based on charge-coupled device (CCD) sensors with time delay integration (TDI). Synchronous shift and accumulation of photoelectric charges on the CCD chip - according to the objects' movement - result in a longer effective exposure time without introducing additional motion blur. This paper presents a high-speed color line-scan camera based on a commercial complementary metal oxide semiconductor (CMOS) area image sensor with a Bayer filter matrix and a field programmable gate array (FPGA). The camera implements a digital equivalent of the TDI effect exploited in CCD cameras. The proposed design benefits from the high frame rates of CMOS sensors and from the possibility of arbitrarily addressing the rows of the sensor's pixel array. For the digital TDI, only a small number of rows are read out from the area sensor; these are then shifted and accumulated according to the movement of the inspected objects. This paper gives a detailed description of the digital TDI algorithm implemented on the FPGA. Relevant aspects of the practical application are discussed and key features of the camera are listed.
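The shift-and-accumulate idea behind digital TDI is easy to model in software. The sketch below is a simplified behavioral model, assuming the object moves by a whole number of rows per frame; the paper's FPGA implementation is, of course, pipelined hardware, not this loop.

```python
import numpy as np

def digital_tdi(frames, n_stages, shift=1):
    """Behavioral model of digital TDI: read only n_stages rows per frame
    and add them into an accumulator shifted in step with the object
    motion, so each output line integrates light n_stages times without
    motion blur. frames: iterable of 2-D arrays; the object is assumed
    to move by `shift` sensor rows per frame."""
    acc, lines = None, []
    for frame in frames:
        rows = frame[:n_stages].astype(np.uint32)  # rows read from sensor
        if acc is None:
            acc = np.zeros_like(rows)
        acc += rows                                # accumulate this exposure
        lines.append(acc[-1].copy())               # line exposed n_stages times
        acc = np.roll(acc, shift, axis=0)          # track the object motion
        acc[:shift] = 0                            # a fresh line enters stage 0
    return np.array(lines[n_stages - 1:])          # drop partial warm-up lines
```

The effective exposure grows by the number of TDI stages while each stage's exposure stays short, which is exactly the light-budget advantage the abstract describes.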
A high-sensitivity EM-CCD camera for the open port telescope cavity of SOFIA
NASA Astrophysics Data System (ADS)
Wiedemann, Manuel; Wolf, Jürgen; McGrotty, Paul; Edwards, Chris; Krabbe, Alfred
2016-08-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) has three target acquisition and tracking cameras. All three imagers originally used the same cameras, which did not meet the sensitivity requirements due to low quantum efficiency and high dark current. The Focal Plane Imager (FPI) suffered the most from high dark current, since it operated in the aircraft cabin at room temperature without active cooling. In early 2013 the FPI was upgraded with an iXon3 888 from Andor Technology. Compared to the original cameras, the iXon3 has a factor of five higher QE, thanks to its back-illuminated sensor, and orders of magnitude lower dark current, due to a thermo-electric cooler and "inverted mode operation." This leads to an increase in sensitivity of about five stellar magnitudes. The Wide Field Imager (WFI) and Fine Field Imager (FFI) shall now be upgraded with equally sensitive cameras. However, they are exposed to stratospheric conditions in flight (typically T ≈ -40 °C, p ≈ 0.1 atm) and there are no off-the-shelf CCD cameras with the performance of an iXon3 suited for these conditions. Therefore, Andor Technology and the Deutsches SOFIA Institut (DSI) are jointly developing and qualifying a camera for these conditions, based on the iXon3 888. The changes include replacement of electrical components with MIL-SPEC or industrial grade components and various system optimizations: a new data interface that allows image data transmission over 30 m of cable from the camera to the controller, a new power converter in the camera to generate all necessary operating voltages locally, and a new housing that fulfills airworthiness requirements. A prototype of this camera has been built and tested in an environmental test chamber at temperatures down to T = -62 °C and a pressure equivalent to 50,000 ft altitude. In this paper, we report on the development of the camera and present results from the environmental testing.
Fischer, Andreas; Kupsch, Christian; Gürtler, Johannes; Czarske, Jürgen
2015-09-21
Non-intrusive fast 3D measurements of volumetric velocity fields are necessary for understanding complex flows. Using high-speed cameras and spectroscopic measurement principles, where the Doppler frequency of scattered light is evaluated within the illuminated plane, each pixel provides one measurement and, thus, planar measurements with high data rates are possible. While scanning is one standard technique for adding the third dimension, the volumetric data are not acquired simultaneously. In order to overcome this drawback, a high-speed light field camera is proposed for obtaining volumetric data with each single frame. The high-speed light field camera approach is applied to a Doppler global velocimeter with sinusoidal laser frequency modulation. As a result, a frequency multiplexing technique is required in addition to the plenoptic refocusing for eliminating the crosstalk between the measurement planes. However, the plenoptic refocusing is still necessary in order to achieve a large refocusing range for a high numerical aperture that minimizes the measurement uncertainty. Finally, two spatially separated measurement planes with 25×25 pixels each are simultaneously acquired at a measurement rate of 0.5 kHz with a single high-speed camera.
NASA Astrophysics Data System (ADS)
Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.
2014-12-01
This work presents a method for evaluating the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This can be done through a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region, SP, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth color camera was mobile (installed in a car), but operated in a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time stamped, allowing comparisons of events between cameras and the LLS. The RAMMER sensor is basically composed of a computer, a Phantom high-speed camera (version 9.1), and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras, their positions were visually triangulated, and the results were compared with the BrasilDAT network, during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result from the visual triangulation method. Lightning return stroke positions, estimated with the visual triangulation method, were compared with LLS locations. Differences between solutions were not greater than 1.8 km.
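For intuition, a much-simplified 2-D version of visual triangulation (azimuth bearings only, ignoring elevation, lens models, and timing) can be solved by least squares. The camera coordinates and bearings below are invented for illustration; the paper's actual procedure is the calibrated multi-camera method it describes.

```python
import numpy as np

def triangulate_bearings(positions, azimuths_deg):
    """Least-squares ground position from azimuth bearings measured by two
    or more cameras: find the point minimizing the squared perpendicular
    distance to every bearing ray.

    positions: (N, 2) camera easting/northing in metres
    azimuths_deg: N bearings toward the flash, clockwise from north
    """
    P = np.asarray(positions, float)
    az = np.radians(azimuths_deg)
    d = np.column_stack([np.sin(az), np.cos(az)])   # unit bearing vectors
    A, b = np.zeros((2, 2)), np.zeros(2)
    for p, u in zip(P, d):
        M = np.eye(2) - np.outer(u, u)              # projector perpendicular to ray
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

# Two cameras 13 km apart, bearings 45° and 315°:
print(triangulate_bearings([[0, 0], [13000, 0]], [45.0, 315.0]))
# -> approximately [6500, 6500]: the rays cross 6.5 km north of the baseline
```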
The Mars Hand Lens Imager (MAHLI) aboard the Mars rover, Curiosity
NASA Astrophysics Data System (ADS)
Edgett, K. S.; Ravine, M. A.; Caplinger, M. A.; Ghaemi, F. T.; Schaffner, J. A.; Malin, M. C.; Baker, J. M.; Dibiase, D. R.; Laramee, J.; Maki, J. N.; Willson, R. G.; Bell, J. F., III; Cameron, J. F.; Dietrich, W. E.; Edwards, L. J.; Hallet, B.; Herkenhoff, K. E.; Heydari, E.; Kah, L. C.; Lemmon, M. T.; Minitti, M. E.; Olson, T. S.; Parker, T. J.; Rowland, S. K.; Schieber, J.; Sullivan, R. J.; Sumner, D. Y.; Thomas, P. C.; Yingst, R. A.
2009-08-01
The Mars Science Laboratory (MSL) rover, Curiosity, is expected to land on Mars in 2012. The Mars Hand Lens Imager (MAHLI) will be used to document martian rocks and regolith with a 2-megapixel RGB color CCD camera with a focusable macro lens mounted on an instrument-bearing turret on the end of Curiosity's robotic arm. The flight MAHLI can focus on targets at working distances of 20.4 mm to infinity. At 20.4 mm, images have a pixel scale of 13.9 μm/pixel. The pixel scale at 66 mm working distance is about the same (31 μm/pixel) as that of the Mars Exploration Rover (MER) Microscopic Imager (MI). MAHLI camera head placement is dependent on the capabilities of the MSL robotic arm, the design for which presently has a placement uncertainty of ~20 mm in 3 dimensions; hence, acquisition of images at the minimum working distance may be challenging. The MAHLI consists of 3 parts: a camera head, a Digital Electronics Assembly (DEA), and a calibration target. The camera head and DEA are connected by a JPL-provided cable which transmits data, commands, and power. JPL is also providing a contact sensor. The camera head will be mounted on the rover's robotic arm turret, the DEA will be inside the rover body, and the calibration target will be mounted on the robotic arm azimuth motor housing. Camera Head. MAHLI uses a Kodak KAI-2020CM interline transfer CCD (1600 x 1200 active 7.4 μm square pixels with RGB filtered microlenses arranged in a Bayer pattern). The optics consist of a group of 6 fixed lens elements, a movable group of 3 elements, and a fixed sapphire window front element. Undesired near-infrared radiation is blocked using a coating deposited on the inside surface of the sapphire window. The lens is protected by a dust cover with a Lexan window through which imaging can be accomplished if necessary, and targets can be illuminated by sunlight or two banks of two white light LEDs. Two 365 nm UV LEDs are included to search for fluorescent materials at night. DEA and Onboard Processing. The DEA incorporates the circuit elements required for data processing, compression, and buffering. It also includes all power conversion and regulation capabilities for both the DEA and the camera head. The DEA has an 8 GB non-volatile flash memory plus 128 MB volatile storage. Images can be commanded as full-frame or sub-frame and the camera has autofocus and autoexposure capabilities. MAHLI can also acquire 720p, ~7 Hz high definition video. Onboard processing includes options for Bayer pattern filter interpolation, JPEG-based compression, and focus stack merging (z-stacking). Malin Space Science Systems (MSSS) built and will operate the MAHLI. Alliance Spacesystems, LLC, designed and built the lens mechanical assembly. MAHLI shares common electronics, detector, and software designs with the MSL Mars Descent Imager (MARDI) and the 2 MSL Mast Cameras (Mastcam). Pre-launch images of geologic materials imaged by MAHLI are online at: http://www.msss.com/msl/mahli/prelaunch_images/.
Development of a 300,000-pixel ultrahigh-speed high-sensitivity CCD
NASA Astrophysics Data System (ADS)
Ohtake, H.; Hayashida, T.; Kitamura, K.; Arai, T.; Yonai, J.; Tanioka, K.; Maruyama, H.; Etoh, T. Goji; Poggemann, D.; Ruckelshausen, A.; van Kuijk, H.; Bosiers, Jan T.
2006-02-01
We are developing an ultrahigh-speed, high-sensitivity broadcast camera that is capable of capturing clear, smooth slow-motion videos even where lighting is limited, such as at professional baseball games played at night. In earlier work, we developed an ultrahigh-speed broadcast color camera [1] using three 80,000-pixel ultrahigh-speed, high-sensitivity CCDs [2]. This camera had about ten times the sensitivity of standard high-speed cameras, and enabled an entirely new style of presentation for sports broadcasts and science programs. Most notably, increasing the pixel count is crucially important for applying ultrahigh-speed, high-sensitivity CCDs to HDTV broadcasting. This paper provides a summary of our experimental development aimed at improving the resolution of the CCD even further: a new ultrahigh-speed high-sensitivity CCD that increases the pixel count four-fold to 300,000 pixels.
Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor
NASA Astrophysics Data System (ADS)
Dragone, A.; Kenney, C.; Lozinskaya, A.; Tolbanov, O.; Tyazhev, A.; Zarubin, A.; Wang, Zhehui
2016-11-01
A multilayer stacked X-ray camera concept is described. This type of technology is called '4H' X-ray cameras, where 4H stands for high-Z (Z>30) sensor, high resolution (less than 300 micron pixel pitch), high speed (above 100 MHz), and high energy (above 30 keV in photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use the integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on modifications to the ePix family of ASICs is discussed. Applications in X-ray free electron lasers (XFELs), synchrotrons, medicine, and non-destructive testing are possible.
Cone photoreceptor definition on adaptive optics retinal imaging.
Muthiah, Manickam Nick; Gias, Carlos; Chen, Fred Kuanfu; Zhong, Joe; McClelland, Zoe; Sallo, Ferenc B; Peto, Tunde; Coffey, Peter J; da Cruz, Lyndon
2014-08-01
To quantitatively analyse cone photoreceptor matrices on images captured with an adaptive optics (AO) camera and assess their correlation with well-established parameters in the retinal histology literature. High resolution retinal images were acquired from 10 healthy subjects, aged 20-35 years, using an AO camera (rtx1, Imagine Eyes, France). Left eye images were captured at 5° of retinal eccentricity, temporal to the fovea, for consistency. In three subjects, images were also acquired at 0, 2, 3, 5 and 7° retinal eccentricities. Cone photoreceptor density was calculated following manual and automated counting. Inter-photoreceptor distance was also calculated. Voronoi domain and power spectrum analyses were performed for all images. At 5° eccentricity, the cone density (cones/mm², mean±SD) was 15.3±1.4×10³ (automated) and 13.9±1.0×10³ (manual), and the mean inter-photoreceptor distance was 8.6±0.4 μm. Cone density decreased and inter-photoreceptor distance increased with increasing retinal eccentricity from 2 to 7°. A regular hexagonal cone photoreceptor mosaic pattern was seen at 2, 3 and 5° of retinal eccentricity. Imaging data acquired from the AO camera match cone density and intercone distance and show the known features of cone photoreceptor distribution in the pericentral retina as reported by histology, namely, decreasing density values from 2 to 7° of eccentricity and the hexagonal packing arrangement. This confirms that AO flood imaging provides reliable estimates of pericentral cone photoreceptor distribution in normal subjects.
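The metrics reported here — cone density, inter-photoreceptor distance, and Voronoi domain regularity — can be computed directly from marked cone centres. A minimal sketch follows, assuming the cone (x, y) coordinates have already been extracted from the AO image; it is illustrative, not the study's analysis code.

```python
import numpy as np
from scipy.spatial import cKDTree, Voronoi

def cone_metrics(centers_um, patch_area_mm2):
    """Cone density, mean inter-photoreceptor distance, and Voronoi side
    counts from the (x, y) centres (in micrometres) of cones marked in
    an AO image patch of known area (mm^2)."""
    pts = np.asarray(centers_um, float)
    density = len(pts) / patch_area_mm2            # cones/mm^2
    # Nearest-neighbour spacing: query k=2, the first hit is the point itself
    dist, _ = cKDTree(pts).query(pts, k=2)
    spacing = dist[:, 1].mean()                    # micrometres
    # Voronoi domains: hexagonal packing yields mostly 6-sided cells
    vor = Voronoi(pts)
    n_sides = [len(r) for r in vor.regions if r and -1 not in r]
    return density, spacing, np.bincount(n_sides)
```

A strongly peaked six-sided bin in the returned histogram corresponds to the regular hexagonal mosaic the study observes at 2-5° eccentricity.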
Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.
Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua
2017-05-01
In this paper, we overcome the limited dynamic range of the conventional digital camera, and propose a method of realizing high dynamic range imaging (HDRI) from a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed new method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, and it enables the camera pixels always to have reasonable exposure intensity by DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the different light intensity to recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement the HDRI on different objects.
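The per-pixel coded exposure idea can be modeled simply: each pixel's effective exposure is scaled by the duty cycle of its DMD mirror, adapted until no pixel saturates, and radiance is recovered by dividing out the duty cycle. The toy model below is a sketch of that feedback principle under idealized assumptions (linear sensor, noiseless readings); it is not the paper's control algorithm.

```python
import numpy as np

def adaptive_dmd_exposure(scene, n_iters=12, full_well=255.0):
    """Toy per-pixel coded exposure: iteratively halve the DMD duty cycle
    of saturated pixels and raise that of dark pixels, then recover an
    HDR radiance estimate by dividing the reading by the duty cycle."""
    duty = np.ones_like(scene)                       # start fully open
    for _ in range(n_iters):
        reading = np.clip(scene * duty, 0, full_well)
        saturated = reading >= full_well
        duty[saturated] *= 0.5                       # attenuate hot pixels
        dark = reading < 0.1 * full_well
        duty[dark] = np.minimum(duty[dark] * 2.0, 1.0)
    reading = np.clip(scene * duty, 0, full_well)
    return reading / duty                            # HDR radiance estimate

scene = np.array([[5.0, 200.0], [1500.0, 90000.0]])  # radiances beyond 8 bits
print(adaptive_dmd_exposure(scene))                   # recovers all four values
```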
Babcock, Hazen P
2018-01-29
This work explores the use of industrial grade CMOS cameras for single molecule localization microscopy (SMLM). We show that industrial grade CMOS cameras approach the performance of scientific grade CMOS cameras at a fraction of the cost. This makes it more economically feasible to construct high-performance imaging systems with multiple cameras that are capable of a diversity of applications. In particular we demonstrate the use of industrial CMOS cameras for biplane, multiplane and spectrally resolved SMLM. We also provide open-source software for simultaneous control of multiple CMOS cameras and for reducing the acquired movies to super-resolution images.
Color (RGB) imaging laser radar
NASA Astrophysics Data System (ADS)
Ferri De Collibus, M.; Bartolini, L.; Fornetti, G.; Francucci, M.; Guarneri, M.; Nuvoli, M.; Paglia, E.; Ricci, R.
2008-03-01
We present a new color (RGB) imaging 3D laser scanner prototype recently developed at ENEA, Italy. The sensor is based on the AM range finding technique and uses three distinct beams (650 nm, 532 nm and 450 nm) in a monostatic configuration. During a scan the laser beams are simultaneously swept over the target, yielding range and three separate channels (R, G and B) of reflectance information for each sampled point. This information, organized in range and reflectance images, is then elaborated to produce very high definition color pictures and faithful, natively colored 3D models. Notable characteristics of the system are the absence of shadows in the acquired reflectance images - due to the system's monostatic setup and intrinsic self-illumination capability - and high noise rejection, achieved by using a narrow field of view and interferential filters. The system is also very accurate in range determination (relative accuracy better than 10⁻⁴) at distances up to several meters. These unprecedented features make the system particularly suited to applications in the domain of cultural heritage preservation, where it could be used by conservators for examining in detail the state of degradation of frescoed walls, monuments and paintings, even at several meters' distance and in hardly accessible locations. After providing some theoretical background, we describe the general architecture and operation modes of the color 3D laser scanner, reporting and discussing first experimental results and comparing high-definition color images produced by the instrument with photographs of the same subjects taken with a Nikon D70 digital camera.
Recent technology and usage of plastic lenses in image taking objectives
NASA Astrophysics Data System (ADS)
Yamaguchi, Susumu; Sato, Hiroshi; Mori, Nobuyoshi; Kiriki, Toshihiko
2005-09-01
Recently, plastic lenses produced by injection molding are widely used in image taking objectives for digital cameras, camcorders, and mobile phone cameras, because of their suitability for volume production and the ease of obtaining the advantages of aspherical surfaces. For digital camera and camcorder objectives, it is desirable that there is no image point variation with temperature change in spite of employing several plastic lenses. At the same time, due to the shrinking pixel size of solid-state image sensors, there is now a requirement to assemble lenses with high accuracy. In order to satisfy these requirements, we have developed a 16× compact zoom objective for camcorders and 3×-class folded zoom objectives for digital cameras, incorporating a cemented plastic doublet consisting of a positive lens and a negative lens. Over the last few years, production volumes of camera-equipped mobile phones have increased substantially. Therefore, for mobile phone cameras, the consideration of productivity is more important than ever. For this application, we have developed a 1.3-megapixel compact camera module with macro function, utilizing an advantage of plastic lenses: the outer flange can be given a mechanically functional shape. Its objective consists of three plastic lenses, and all critical dimensions related to optical performance are determined by high-precision optical elements. Therefore this camera module is manufactured without optical adjustment on an automatic assembly line, and achieves both high productivity and high performance. Reported here are the constructions and technical topics of the image taking objectives described above.
Completely optical orientation determination for an unstabilized aerial three-line camera
NASA Astrophysics Data System (ADS)
Wohlfeil, Jürgen
2010-10-01
Aerial line cameras allow the fast acquisition of high-resolution images at low cost. Unfortunately, measuring the camera's orientation at the necessary rate and precision involves considerable effort, unless extensive camera stabilization is used. But stabilization also entails high cost, weight, and power consumption. This contribution shows that it is possible to derive the absolute exterior orientation of an unstabilized line camera entirely from its images and global position measurements. The presented approach is based on previous work on the determination of the relative orientation of subsequent lines using optical information from the remote sensing system. The relative orientation is used to pre-correct the line images, in which homologous points can then be reliably determined using the SURF operator. Together with the position measurements, these points are used to determine the absolute orientation from the relative orientations via bundle adjustment of a block of overlapping line images. The approach was tested in a flight with the DLR's RGB three-line camera MFC. To evaluate the precision of the resulting orientation, the measurements of a high-end navigation system and ground control points are used.
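The tie-point step — finding homologous points between pre-corrected, overlapping line-image blocks for the bundle adjustment — can be sketched with OpenCV. The paper uses the SURF operator; SURF is patented and often absent from OpenCV builds, so this sketch substitutes ORB purely for illustration, and the outlier rejection shown is one generic choice, not necessarily the author's.

```python
import cv2
import numpy as np

def homologous_points(img1, img2, max_matches=500):
    """Find homologous points between two pre-corrected, overlapping
    grayscale line-image blocks; these serve as tie points for the
    subsequent bundle adjustment."""
    orb = cv2.ORB_create(5000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = sorted(
        cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2),
        key=lambda m: m.distance)[:max_matches]
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # RANSAC against the fundamental matrix rejects false matches
    _, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0)
    keep = inliers.ravel() == 1
    return pts1[keep], pts2[keep]
```

The pre-correction with the relative orientation matters here: without it, the line imagery is too distorted for descriptor matching to be reliable, which is the point the abstract makes.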
VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras
NASA Astrophysics Data System (ADS)
Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.
2015-08-01
The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut d'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20 °C, the CLASP cameras exceeded the low-noise performance requirements (≤25 e⁻ read noise and ≤10 e⁻/sec/pixel dark current), in addition to maintaining a stable gain of ≈2.0 e⁻/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultraviolet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Three flight cameras and one engineering camera were tested in a high-vacuum chamber, which was configured to run several tests intended to verify the QE, gain, read noise and dark current of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV, EUV and X-ray science cameras at MSFC.
Phase and amplitude wave front sensing and reconstruction with a modified plenoptic camera
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Nelson, William; Davis, Christopher C.
2014-10-01
A plenoptic camera is a camera that can retrieve the direction and intensity distribution of light rays collected by the camera, allowing multiple reconstruction functions such as refocusing at different depths and 3D microscopy. Its principle is to add a micro-lens array to a traditional high-resolution camera to form a semi-camera array that preserves redundant intensity distributions of the light field and facilitates back-tracing of rays through geometric knowledge of its optical components. Though designed to process incoherent images, we found that the plenoptic camera shows high potential for coherent illumination cases, such as sensing both the amplitude and phase information of a distorted laser beam. Building on our earlier introduction of a prototype modified plenoptic camera, we have developed the complete algorithm to reconstruct the wavefront of the incident light field. In this paper the algorithm and experimental results are demonstrated, and an improved version of this modified plenoptic camera is discussed. As a result, our modified plenoptic camera can serve as an advanced wavefront sensor compared with traditional Shack-Hartmann sensors in handling complicated cases such as coherent illumination in strong turbulence, where interference and discontinuity of wavefronts are common. Especially for wave propagation through atmospheric turbulence, this camera should provide a much more precise description of the light field, which would allow adaptive optics systems to make intelligent analyses and corrections.
Calibration of asynchronous smart phone cameras from moving objects
NASA Astrophysics Data System (ADS)
Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel
2015-04-01
Calibrating multiple cameras is a fundamental prerequisite for many Computer Vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application using a pair of low-cost portable cameras with different parameters, as found in smart phones. It addresses the issues of acquisition, detection of moving objects, dynamic camera registration and tracking of an arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of the cameras onto the same world reference frame is performed using a recently developed method for camera calibration using a disparity space parameterisation and the single-cluster PHD filter.
NASA Astrophysics Data System (ADS)
Steinmetz, Klaus
1995-05-01
Within the automotive industry, especially in the development and improvement of safety systems, we find many highly accelerated motions that cannot be followed, and consequently cannot be analyzed, by the human eye. For the vehicle safety tests at AUDI, which are performed as 'Crash Tests', 'Sled Tests' and 'Static Component Tests', 'Stalex', 'Hycam', and 'Locam' cameras are in use. Nowadays automobile production is inconceivable without the use of high-speed cameras.
Development of a Compact & Easy-to-Use 3-D Camera for High Speed Turbulent Flow Fields
2013-12-05
...resolved. Also, in the case of a single camera system, the use of an aperture greatly reduces the amount of collected light. The combination of these... a study on wall-bounded turbulence [Sheng_2006]. Nevertheless, these techniques are limited to small measurement volumes, while maintaining a high... It has also been adapted to kHz rates using high-speed cameras for aeroacoustic studies (see Violato et al. [17, 18]). Tomo-PIV, however, has some...
SpectraCAM SPM: a camera system with high dynamic range for scientific and medical applications
NASA Astrophysics Data System (ADS)
Bhaskaran, S.; Baiko, D.; Lungu, G.; Pilon, M.; VanGorden, S.
2005-08-01
A scientific camera system having high dynamic range, designed and manufactured by Thermo Electron for scientific and medical applications, is presented. The newly developed CID820 image sensor with preamplifier-per-pixel technology is employed in this camera system. The 4-megapixel imaging sensor has a raw dynamic range of 82 dB. Each high-transparency pixel is based on a preamplifier-per-pixel architecture and contains two photogates for non-destructive readout of the photon-generated charge (NDRO). Readout is achieved via parallel row processing with on-chip correlated double sampling (CDS). The imager is capable of true random pixel access with a maximum operating speed of 4 MHz. The camera controller consists of a custom camera signal processor (CSP) with an integrated 16-bit A/D converter and a PowerPC-based CPU running an embedded Linux operating system. The imager is cooled to -40 °C via a three-stage cooler to minimize dark current. The camera housing is sealed and is designed to maintain the CID820 imager in the evacuated chamber for at least 5 years. Thermo Electron has also developed custom software and firmware to drive the SpectraCAM SPM camera. Included in this firmware package is the new Extreme DR™ algorithm, which is designed to extend the effective dynamic range of the camera by several orders of magnitude, up to a 32-bit dynamic range. The RACID Exposure graphical user interface image analysis software runs on a standard PC that is connected to the camera via Gigabit Ethernet.
Performance Characteristics For The Orbiter Camera Payload System's Large Format Camera (LFC)
NASA Astrophysics Data System (ADS)
Mollberg, Bernard H.
1981-11-01
The Orbiter Camera Payload System, the OCPS, is an integrated photographic system which is carried into Earth orbit as a payload in the Shuttle Orbiter vehicle's cargo bay. The major component of the OCPS is a Large Format Camera (LFC) which is a precision wide-angle cartographic instrument that is capable of producing high resolution stereophotography of great geometric fidelity in multiple base to height ratios. The primary design objective for the LFC was to maximize all system performance characteristics while maintaining a high level of reliability compatible with rocket launch conditions and the on-orbit environment.
Gao, Taotao; Xiang, Jie; Jin, Runsen; Zhang, Yajie; Wu, Han; Li, Hecheng
2016-10-01
The camera assistant plays a very important role in uniportal video-assisted thoracoscopic surgery (VATS), acting as the eyes of the surgeon and providing the VATS team with a stable and clear operating view. A good assistant should therefore cooperate with the surgeon and manipulate the camera expertly to ensure eye-hand coordination. We have performed more than 100 uniportal VATS procedures in the Department of Thoracic Surgery at Ruijin Hospital. Based on our experience, we summarize the method of holding the camera, known as "ipsilateral, high, single-hand, sideways", which largely improves the comfort and fluency of surgery.
STS-47 MS Apt with LINHOF camera on JSC's Bldg 1 rooftop during training
NASA Technical Reports Server (NTRS)
1992-01-01
STS-47 Endeavour, Orbiter Vehicle (OV) 105, Mission Specialist (MS) Jerome Apt sets the LINHOF camera lens during a photography training session conducted on the rooftop of JSC's Project Management Building (Bldg 1). Using such a high vantage point as this nine-floor facility, Apt was able to become familiar with Earth observations camera hardware such as the LINHOF camera.
High dynamic range image acquisition based on multiplex cameras
NASA Astrophysics Data System (ADS)
Zeng, Hairui; Sun, Huayan; Zhang, Tinghua
2018-03-01
High dynamic range imaging is an important technology for photoelectric information acquisition, providing higher dynamic range and more image detail, and it can better reflect the real environment, light and color information. Currently, methods of high dynamic range image synthesis based on different-exposure image sequences cannot adapt to dynamic scenes: they fail to overcome the effects of moving targets, resulting in ghosting artifacts. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system is proposed. Firstly, image sequences with different exposures are captured with the camera array, the deviation between images is obtained using a derivative optical flow method based on color gradients, and the images are aligned. Then, a high dynamic range fusion weighting function is established by combining the inverse camera response function with the deviation between images, and is applied to generate a high dynamic range image. Experiments show that the proposed method can effectively obtain high dynamic range images of dynamic scenes and achieves good results.
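As an illustration of this kind of deviation-weighted merge, the following NumPy sketch combines an aligned exposure stack into one radiance map. It assumes the inverse camera response has already been applied (linear values) and uses a hypothetical Gaussian attenuation of the weights by the optical-flow deviation; it is a simplification of the paper's weighting function, not a reproduction of it.

```python
import numpy as np

def fuse_hdr(images, exposures, deviation):
    """Weighted HDR merge over an aligned exposure stack.

    images:    (n, H, W) stack, values in [0, 1], radiometrically linear
    exposures: (n,) exposure times in seconds
    deviation: (n, H, W) per-pixel registration deviation from optical flow;
               large deviation down-weights pixels on moving objects (anti-ghosting)
    """
    # hat weight favouring well-exposed mid-tones, as in classic exposure merging
    w = 1.0 - np.abs(2.0 * images - 1.0)
    # attenuate weights where the aligned images disagree
    w *= np.exp(-deviation ** 2)
    radiance = images / exposures[:, None, None]          # per-image radiance estimate
    return (w * radiance).sum(axis=0) / (w.sum(axis=0) + 1e-8)
```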
Intimate partner violence, technology, and stalking.
Southworth, Cynthia; Finn, Jerry; Dawson, Shawndell; Fraser, Cynthia; Tucker, Sarah
2007-08-01
This research note describes the use of a broad range of technologies in intimate partner stalking, including cordless and cellular telephones, fax machines, e-mail, Internet-based harassment, global positioning systems, spy ware, video cameras, and online databases. The concept of "stalking with technology" is reviewed, and the need for an expanded definition of cyberstalking is presented. Legal issues and advocacy-centered responses, including training, legal remedies, public policy issues, and technology industry practices, are discussed.
Vann, C.
1998-03-24
The Laser Pulse Sampler (LPS) measures temporal pulse shape without the problems of a streak camera. Unlike the streak camera, the laser pulse directly illuminates a camera in the LPS, i.e., no additional equipment or energy conversions are required. The LPS has several advantages over streak cameras. The dynamic range of the LPS is limited only by the range of its camera, which for a cooled camera can be as high as 16 bits, i.e., 65,536. The LPS costs less because there are fewer components, and those components can be mass produced. The LPS is easier to calibrate and maintain because there is only one energy conversion, i.e., photons to electrons, in the camera. 5 figs.
Development of real-time extensometer based on image processing
NASA Astrophysics Data System (ADS)
Adinanta, H.; Puranto, P.; Suryadi
2017-04-01
An extensometer system was developed using a high definition web camera as the main sensor to track object position. The developed system applied digital image processing techniques: image processing was used to measure the change of object position, and the measurement was done in real time so that the system could directly show the actual position along both the x and y axes. In this research, the relation between pixel and object position changes was characterized. The system was tested by moving the target over a range of 20 cm in intervals of 1 mm. To verify the long-run performance, stability and linearity, continuous measurements on both axes were conducted for 83 hours. The results show that this image-processing-based extensometer has both good stability and good linearity.
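A minimal sketch of such a camera-based extensometer is given below, assuming OpenCV template matching and a hypothetical pixel-to-millimetre calibration factor obtained from the characterization step; the paper's own tracking algorithm is not specified in this abstract.

```python
import cv2

# Track a target patch with normalized cross-correlation and convert
# the pixel displacement to millimetres. MM_PER_PIXEL is illustrative.
MM_PER_PIXEL = 0.05

def track_target(reference_patch, frame, origin_xy):
    """Return (dx, dy) of the target in mm relative to its original position.

    reference_patch and frame are grayscale images of the same dtype.
    """
    result = cv2.matchTemplate(frame, reference_patch, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)    # top-left corner of best match
    dx = (max_loc[0] - origin_xy[0]) * MM_PER_PIXEL
    dy = (max_loc[1] - origin_xy[1]) * MM_PER_PIXEL
    return dx, dy
```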
Transillumination and reflectance probes for in vivo near-IR imaging of dental caries
NASA Astrophysics Data System (ADS)
Simon, Jacob C.; Lucas, Seth A.; Staninec, Michal; Tom, Henry; Chan, Kenneth H.; Darling, Cynthia L.; Fried, Daniel
2014-02-01
Previous studies have demonstrated the utility of near infrared (NIR) imaging for caries detection employing transillumination and reflectance imaging geometries. Three intra-oral NIR imaging probes were fabricated for the acquisition of in vivo, real time videos using a high definition InGaAs SWIR camera and near-IR broadband light sources. Two transillumination probes provide occlusal and interproximal images using 1300-nm light where water absorption is low and enamel manifests the highest transparency. A third reflectance probe utilizes cross polarization and operates at >1500-nm, where water absorption is higher which reduces the reflectivity of sound tissues, significantly increasing lesion contrast. These probes are being used in an ongoing clinical study to assess the diagnostic performance of NIR imaging for the detection of caries lesions in teeth scheduled for extraction for orthodontic reasons.
Driver crash risk factors and prevalence evaluation using naturalistic driving data.
Dingus, Thomas A; Guo, Feng; Lee, Suzie; Antin, Jonathan F; Perez, Miguel; Buchanan-King, Mindy; Hankey, Jonathan
2016-03-08
The accurate evaluation of crash causal factors can provide fundamental information for effective transportation policy, vehicle design, and driver education. Naturalistic driving (ND) data collected with multiple onboard video cameras and sensors provide a unique opportunity to evaluate risk factors during the seconds leading up to a crash. This paper uses a National Academy of Sciences-sponsored ND dataset comprising 905 injurious and property damage crash events, the magnitude of which allows the first direct analysis (to our knowledge) of causal factors using crashes only. The results show that crash causation has shifted dramatically in recent years, with driver-related factors (i.e., error, impairment, fatigue, and distraction) present in almost 90% of crashes. The results also definitively show that distraction is detrimental to driver safety, with handheld electronic devices having high use rates and risk.
The Orbiter camera payload system's large-format camera and attitude reference system
NASA Technical Reports Server (NTRS)
Schardt, B. B.; Mollberg, B. H.
1985-01-01
The Orbiter camera payload system (OCPS) is an integrated photographic system carried into earth orbit as a payload in the Space Transportation System (STS) Orbiter vehicle's cargo bay. The major component of the OCPS is a large-format camera (LFC), a precision wide-angle cartographic instrument capable of producing high-resolution stereophotography of great geometric fidelity in multiple base-to-height ratios. A secondary and supporting system to the LFC is the attitude reference system (ARS), a dual-lens stellar camera array (SCA) and camera support structure. The SCA is a 70 mm film system that is rigidly mounted to the LFC lens support structure and, through the simultaneous acquisition of two star fields with each earth viewing LFC frame, makes it possible to precisely determine the pointing of the LFC optical axis with reference to the earth nadir point. Other components complete the current OCPS configuration as a high-precision cartographic data acquisition system. The primary design objective for the OCPS was to maximize system performance characteristics while maintaining a high level of reliability compatible with rocket launch conditions and the on-orbit environment. The full OCPS configuration was launched on a highly successful maiden voyage aboard the STS Orbiter vehicle Challenger on Oct. 5, 1984, as a major payload aboard the STS-41G mission.
Multispectral image dissector camera flight test
NASA Technical Reports Server (NTRS)
Johnson, B. L.
1973-01-01
It was demonstrated that the multispectral image dissector camera is able to provide composite pictures of the earth surface from high altitude overflights. An electronic deflection feature was used to inject the gyro error signal into the camera for correction of aircraft motion.
Preliminary analysis on faint luminous lightning events recorded by multiple high speed cameras
NASA Astrophysics Data System (ADS)
Alves, J.; Saraiva, A. V.; Pinto, O.; Campos, L. Z.; Antunes, L.; Luz, E. S.; Medeiros, C.; Buzato, T. S.
2013-12-01
The objective of this work is the study of faint luminous events produced by lightning flashes that were recorded simultaneously by multiple high-speed cameras during previous RAMMER (Automated Multi-camera Network for Monitoring and Study of Lightning) campaigns. The RAMMER network is composed of three fixed cameras and one mobile color camera separated by distances of 13 kilometers on average. They were located in the Paraiba Valley (in the cities of São José dos Campos and Caçapava), SP, Brazil, arranged in a quadrilateral shape centered on the São José dos Campos region. This configuration allowed RAMMER to see a thunderstorm from different angles, registering the same lightning flashes simultaneously with multiple cameras. Each RAMMER sensor is composed of a triggering system and a Phantom high-speed camera, version 9.1, set to operate at a frame rate of 2,500 frames per second with a Nikkor lens (model AF-S DX 18-55 mm 1:3.5-5.6 G in the stationary sensors, and model AF-S ED 24 mm 1:1.4 in the mobile sensor). All videos were GPS (Global Positioning System) time stamped. For this work we used a data set collected on four RAMMER manual operation days in the 2012 and 2013 campaigns. On Feb. 18th the data set comprised 15 flashes recorded by two cameras and 4 flashes recorded by three cameras. On Feb. 19th a total of 5 flashes was registered by two cameras and 1 flash by three cameras. On Feb. 22nd we obtained 4 flashes registered by two cameras. Finally, on March 6th two cameras recorded 2 flashes. The analysis in this study proposes an evaluation methodology for faint luminous lightning events, such as continuing current. Problems in the temporal measurement of the continuing current can generate imprecision during the optical analysis; therefore this work aims to evaluate the effects of distance on this parameter with this preliminary data set. For the cases that include the color camera we analyzed the RGB (red, green, blue) channels and compared them with the data provided by the black-and-white cameras for the same event, as well as the influence of these parameters on the luminosity intensity of the flashes. Two peculiar cases presented, in the data obtained at one site, a stroke, some continuing current during the interval between the strokes and then a subsequent stroke; the other site, however, showed that the subsequent stroke was in fact an M-component, since the continuing current had not vanished after its parent stroke. These events would have received a dubious classification if based only on a visual analysis with high-speed cameras at a single site, and they were analyzed in this work.
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2016-12-01
A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system converts a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system needs only a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.
Modeling of digital information optical encryption system with spatially incoherent illumination
NASA Astrophysics Data System (ADS)
Bondareva, Alyona P.; Cheremkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.; Starikov, Sergey N.
2015-10-01
State-of-the-art micromirror DMD spatial light modulators (SLMs) offer unprecedented frame rates of up to 30,000 frames per second. This, in conjunction with a high-speed digital camera, should allow the construction of a high-speed optical encryption system. Results of modeling a digital information optical encryption system with spatially incoherent illumination are presented. The input information is displayed on the first SLM, and the encryption element on the second SLM. Factors taken into account are the resolution of the SLMs and camera, hologram reconstruction noise, camera noise and signal sampling. Results of numerical simulation demonstrate high speed (several gigabytes per second), low bit error rate and high cryptographic strength.
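In a spatially incoherent system, encryption can be modeled as the convolution of the input intensity with the point spread function of the encryption element, and decryption as inverse filtering. The toy model below illustrates the principle in NumPy; it is an idealization that ignores the camera-noise and sampling factors the study analyzes.

```python
import numpy as np

rng = np.random.default_rng(0)

def encrypt(plain_img, key_psf):
    """Incoherent encryption modeled as circular convolution with a key PSF."""
    P, K = np.fft.fft2(plain_img), np.fft.fft2(key_psf)
    return np.real(np.fft.ifft2(P * K))

def decrypt(cipher_img, key_psf, eps=1e-3):
    """Regularized inverse filter; eps suppresses division by tiny spectra."""
    C, K = np.fft.fft2(cipher_img), np.fft.fft2(key_psf)
    return np.real(np.fft.ifft2(C * np.conj(K) / (np.abs(K) ** 2 + eps)))

img = rng.random((256, 256))      # stand-in for the input SLM pattern
key = rng.random((256, 256))      # random encryption kernel (second SLM)
restored = decrypt(encrypt(img, key), key)
```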
Infrared Imaging Camera Final Report CRADA No. TC02061.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, E. V.; Nebeker, S.
This was a collaborative effort between the University of California, Lawrence Livermore National Laboratory (LLNL) and Cordin Company (Cordin) to enhance the U.S. ability to develop a commercial infrared camera capable of capturing high-resolution images in a 100 nanosecond (ns) time frame. The Department of Energy (DOE), under an Initiative for Proliferation Prevention (IPP) project, funded the Russian Federation Nuclear Center All-Russian Scientific Institute of Experimental Physics (RFNC-VNIIEF) in Sarov. VNIIEF was funded to develop a prototype commercial infrared (IR) framing camera and to deliver a prototype IR camera to LLNL. LLNL and Cordin were partners with VNIIEF on this project. A prototype IR camera was delivered by VNIIEF to LLNL in December 2006. In June of 2007, LLNL and Cordin evaluated the camera, and the test results revealed that the camera exceeded presently available commercial IR cameras. Cordin believes that the camera can be sold on the international market. The camera is currently being used as a scientific tool within Russian nuclear centers. This project was originally designated as a two-year project. The project was not started on time due to changes in the IPP project funding conditions; the project funding was re-directed through the International Science and Technology Center (ISTC), which delayed the project start by over one year. The project was not completed on schedule due to changes within the Russian government export regulations. These changes were directed by export control regulations on the export of high-technology items that can be used to develop military weapons, and the IR camera was on the controlled list. The ISTC and the Russian government, after negotiations, allowed the delivery of the camera to LLNL. There were no significant technical or business changes to the original project.
High-speed autoverifying technology for printed wiring boards
NASA Astrophysics Data System (ADS)
Ando, Moritoshi; Oka, Hiroshi; Okada, Hideo; Sakashita, Yorihiro; Shibutani, Nobumi
1996-10-01
We have developed an automated pattern verification technique. The output of an automated optical inspection system contains many false alarms, so verification is needed to distinguish between minor irregularities and serious defects. In the past, this verification was usually done manually, which led to unsatisfactory product quality. The goal of our new automated verification system is to detect pattern features on surface mount technology boards. In our system, we employ a new illumination method that uses multiple colors and multiple illumination directions. Images are captured with a CCD camera. We have developed a new algorithm that uses CAD data for both pattern matching and pattern structure determination; this helps to search for patterns around a defect and to examine defect definition rules. These are processed with a high-speed workstation and hard-wired circuits. The system can verify a defect within 1.5 seconds. The verification system was tested in a factory, where it verified 1,500 defective samples and detected all significant defects with only a 0.1 percent rate of false alarms.
Live HDR video streaming on commodity hardware
NASA Astrophysics Data System (ADS)
McNamee, Joshua; Hatchett, Jonathan; Debattista, Kurt; Chalmers, Alan
2015-09-01
High Dynamic Range (HDR) video provides a step change in viewing experience, for example the ability to clearly see the soccer ball when it is kicked from the shadow of the stadium into sunshine. To achieve the full potential of HDR video, so-called true HDR, it is crucial that all the dynamic range that was captured is delivered to the display device and tone mapping is confined only to the display. Furthermore, to ensure widespread uptake of HDR imaging, it should be low cost and available on commodity hardware. This paper describes an end-to-end HDR pipeline for capturing, encoding and streaming high-definition HDR video in real-time using off-the-shelf components. All the lighting that is captured by HDR-enabled consumer cameras is delivered via the pipeline to any display, including HDR displays and even mobile devices with minimum latency. The system thus provides an integrated HDR video pipeline that includes everything from capture to post-production, archival and storage, compression, transmission, and display.
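Since the pipeline confines tone mapping to the display, a display-side operator is all a receiver needs to present the full captured range. The sketch below implements the standard global Reinhard operator as one plausible example of such an operator; it is not the paper's own algorithm.

```python
import numpy as np

def reinhard_tonemap(hdr, a=0.18, eps=1e-6):
    """Global Reinhard tone mapping: linear radiance -> display range [0, 1].

    hdr: (H, W) or (H, W, 3) linear radiance; 'a' is the key value.
    """
    lum = hdr.mean(axis=-1) if hdr.ndim == 3 else hdr
    log_avg = np.exp(np.mean(np.log(lum + eps)))   # log-average scene luminance
    scaled = a * lum / log_avg                     # normalize to the key value
    mapped = scaled / (1.0 + scaled)               # compress highlights smoothly
    gain = mapped / (lum + eps)
    if hdr.ndim == 3:
        gain = gain[..., None]                     # broadcast over color channels
    return np.clip(hdr * gain, 0.0, 1.0)
```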
Strategic options towards an affordable high-performance infrared camera
NASA Astrophysics Data System (ADS)
Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.
2016-05-01
The promise of infrared (IR) imaging attaining the low cost associated with the success of CMOS sensors has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm, despite well-documented advantages. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640x512-pixel InGaAs uncooled system with high sensitivity and low noise (<50 e-), high dynamic range (100 dB), high frame rates (>500 frames per second (FPS)) at full resolution, and low power consumption (<1 W) in a compact system. This camera paves the way towards mass-market adoption by not only demonstrating the high-performance IR imaging capability demanded by military and industrial applications, but also illuminating a path towards the justifiable price points essential for consumer-facing industries such as automotive, medical, and security imaging. Among the strategic options presented are new sensor manufacturing technologies that scale favorably towards automation, multi-focal-plane-array-compatible readout electronics, and dense or ultra-small pixel pitch devices.
NASA Astrophysics Data System (ADS)
Chi, Yuxi; Yu, Liping; Pan, Bing
2018-05-01
A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital Single Lens Reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue light illumination and coupled bandpass filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single SLR camera-based stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.
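The essence of the four-mirror arrangement is that one sensor carries two virtual views side by side, so each frame can be split down the middle and processed like an ordinary stereo pair. A minimal sketch follows, using OpenCV block matching with rectification omitted; the actual stereo-DIC correlation is considerably more elaborate than this.

```python
import cv2

def split_and_match(frame_gray):
    """Split a single-camera pseudo-stereo frame and compute a disparity map.

    frame_gray: 8-bit single-channel image whose left/right halves hold the
    two virtual views. Rectification (from stereo calibration) is omitted.
    """
    h, w = frame_gray.shape
    left, right = frame_gray[:, : w // 2], frame_gray[:, w // 2 :]
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    return matcher.compute(left, right)   # fixed-point disparity map
```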
VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras
NASA Technical Reports Server (NTRS)
Champey, Patrick; Kobayashi, Ken; Winebarger, Amy; Cirtain, Jonathan; Hyde, David; Robertson, Bryan; Beabout, Brent; Beabout, Dyana; Stewart, Mike
2015-01-01
The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual-channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and less than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-alpha wavelength. A vacuum ultraviolet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to perform several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garfield, B.R.; Rendell, J.T.
1991-01-01
The present conference discusses the application of schlieren photography in industry, laser fiber-optic high speed photography, holographic visualization of hypervelocity explosions, sub-100-picosec X-ray grating cameras, flash soft X-radiography, a novel approach to synchroballistic photography, a programmable image converter framing camera, high speed readout CCDs, an ultrafast optomechanical camera, a femtosec streak tube, a modular streak camera for laser ranging, and human-movement analysis with real-time imaging. Also discussed are high-speed photography of high-resolution moire patterns, a 2D electron-bombarded CCD readout for picosec electrooptical data, laser-generated plasma X-ray diagnostics, 3D shape restoration with virtual grating phase detection, Cu vapor lasers for high speed photography, a two-frequency picosec laser with electrooptical feedback, the conversion of schlieren systems to high speed interferometers, laser-induced cavitation bubbles, stereo holographic cinematography, a gatable photonic detector, and laser generation of Stoneley waves at liquid-solid boundaries.
Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dragone, Angelo; Kenney, Chris; Lozinskaya, Anastassiya
2016-11-29
Here, we describe a multilayer stacked X-ray camera concept. This type of technology is called a '4H' X-ray camera, where 4H stands for high-Z (Z>30) sensor, high resolution (less than 300 micron pixel pitch), high speed (above 100 MHz), and high energy (above 30 keV in photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use the integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on modifications to the ePix family of ASICs is discussed. Applications in X-ray free electron lasers (XFELs), synchrotrons, medicine and non-destructive testing are possible.
Pre-impact fall detection system using dynamic threshold and 3D bounding box
NASA Astrophysics Data System (ADS)
Otanasap, Nuth; Boonbrahm, Poonpong
2017-02-01
Fall prevention and detection systems have to overcome many challenges in order to be efficient. Some of the difficult problems in vision-based systems are obtrusion, occlusion and overlay; other associated issues are privacy, cost, noise, computational complexity and the definition of threshold values. Estimating human motion with vision-based methods usually involves partial overlay, caused either by the viewing direction between objects or body parts and the camera, and these issues have to be taken into consideration. This paper proposes a dynamic-threshold-based, bounding-box posture analysis method with a multiple-Kinect-camera setup for human posture analysis and fall detection. The proposed work uses only two Kinect cameras to acquire distributed values and differentiate between normal activities and falls. If the peak value of head velocity is greater than the dynamic threshold value, bounding box posture analysis is used to confirm fall occurrence. Furthermore, information captured by multiple Kinects placed at a right angle addresses the skeleton overlay problem of a single Kinect. This work contributes the fusion of multiple Kinect-based skeletons, based on a dynamic threshold and bounding box posture analysis, an approach which has not been reported so far.
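A compressed sketch of the two-stage decision described above might look like the following; the threshold constant and the bounding-box rule are invented illustrations of the idea, not the paper's tuned values.

```python
import numpy as np

def dynamic_threshold(recent_peaks, k=2.0):
    """Threshold that adapts to the subject's recent peak-speed statistics."""
    return np.mean(recent_peaks) + k * np.std(recent_peaks)

def is_fall(head_positions, dt, recent_peaks, bbox_dims):
    """Stage 1: peak head speed vs. dynamic threshold.
    Stage 2: 3D bounding box confirms a lying posture.

    head_positions: (T, 3) head trajectory from the fused Kinect skeletons
    dt:             frame interval in seconds
    bbox_dims:      (width, depth, height) of the subject's 3D bounding box
    """
    speeds = np.linalg.norm(np.diff(head_positions, axis=0), axis=1) / dt
    if speeds.max() <= dynamic_threshold(recent_peaks):
        return False
    width, depth, height = bbox_dims
    return max(width, depth) > height     # wider than tall -> fall confirmed
```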
Image quality assessment for selfies with and without super resolution
NASA Astrophysics Data System (ADS)
Kubota, Aya; Gohshi, Seiichi
2018-04-01
With the advent of cellphone cameras, particularly on smartphones, many people now take photos of themselves, alone or with others in the frame; such photos are popularly known as "selfies". Most smartphones are equipped with two cameras: the camera on the back of the smartphone is referred to as the "out-camera," whereas the one on the front is called the "in-camera." In-cameras are mainly used for selfies. Some smartphones feature high-resolution cameras; however, the full image quality cannot be obtained because smartphone cameras often have low-performance lenses. Super resolution (SR) is a recent technological advancement that increases image resolution. We developed a new SR technology that can be processed on smartphones, and smartphones with this new SR technology are already on the market. However, the effective use of the new SR technology has not yet been verified; comparing image quality with and without SR on the smartphone display is necessary to confirm its usefulness. Methods based on objective and subjective assessment are required to quantitatively measure image quality. It is known that typical objective assessment values, such as the Peak Signal to Noise Ratio (PSNR), do not agree well with how we feel when assessing images or video. When digital broadcasting started, the standard was determined using subjective assessment. Although subjective assessment usually comes at high cost because of the personnel expenses for observers, the results are highly reproducible when conducted under the right conditions and with statistical analysis. In this study, subjective assessment results for selfie images are reported.
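For reference, the objective metric mentioned above, PSNR, has a simple closed form, computed as below; the subjective assessment the study relies on has no such formula.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized 8-bit images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```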
Electronic camera-management system for 35-mm and 70-mm film cameras
NASA Astrophysics Data System (ADS)
Nielsen, Allan
1993-01-01
Military and commercial test facilities have been tasked with the need for increasingly sophisticated data collection and data reduction. A state-of-the-art electronic control system for high-speed 35 mm and 70 mm film cameras designed to meet these tasks is described. Data collection in today's test range environment is difficult at best, and the need for a completely integrated image and data collection system is mandated by the increasingly complex test environment. Instrumentation film cameras have been used on test ranges to capture images for decades; their high frame rates coupled with exceptionally high resolution make them an essential part of any test system. In addition to documenting test events, today's camera system is required to perform many additional tasks. Data reduction to establish TSPI (time-space-position information) may be performed after a mission and is subject to all of the variables present in documenting the mission. A typical scenario consists of multiple cameras located on tracking mounts capturing the event along with azimuth and elevation position data. Corrected data can then be reduced using each camera's time and position deltas, calculating the TSPI of the object by triangulation. An electronic camera control system designed to meet these requirements has been developed by Photo-Sonics, Inc. The feedback received from test technicians at range facilities throughout the world led Photo-Sonics to design the features of this control system: a comprehensive safety management system, full local or remote operation, frame rate accuracy of better than 0.005 percent, and phase-locking capability to IRIG-B. In fact, IRIG-B phase-locked operation of multiple cameras can reduce the time-distance delta of a test object traveling at Mach 1 to less than one inch during data reduction.
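The triangulation step can be sketched as follows: each tracking station contributes a pointing ray built from its azimuth/elevation data, and the object position is estimated as the midpoint of the shortest segment between the two rays. The coordinate convention and function names below are placeholders, not Photo-Sonics' data-reduction software.

```python
import numpy as np

def ray(az_deg, el_deg):
    """Unit pointing vector in a local East/North/Up frame."""
    az, el = np.radians([az_deg, el_deg])
    return np.array([np.cos(el) * np.sin(az),   # East
                     np.cos(el) * np.cos(az),   # North
                     np.sin(el)])               # Up

def triangulate(p1, az1, el1, p2, az2, el2):
    """Midpoint of the common perpendicular between two station rays."""
    d1, d2 = ray(az1, el1), ray(az2, el2)
    # solve for t1, t2 minimizing |(p1 + t1*d1) - (p2 + t2*d2)|
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```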
Calibration Procedures in Mid Format Camera Setups
NASA Astrophysics Data System (ADS)
Pivnicka, F.; Kemper, G.; Geissler, S.
2012-07-01
A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform and the specific values of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Using aerial images over a well-designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in the Bingo software; it will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Besides the mechanical work, especially in mounting the camera beside the IMU, two lever arms have to be measured with millimetre accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need to use rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used; for that, a gyro-based stabilized platform is recommended. As a consequence, the IMU must be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problem is that the IMU-to-GPS-antenna lever arm is floating. We therefore have to deal with an additional data stream, the values of the stabilizer's movement, to correct the floating lever arm distances. If the post-processing of the GPS/IMU data, taking the floating lever arms into account, delivers the expected result, the lever arms between IMU and camera can be applied. However, there is a misalignment (boresight angle) that must be evaluated by a photogrammetric process using advanced tools, e.g. in Bingo. Once all these parameters have been determined, the system is capable of supporting projects without, or with only a few, ground control points. But what effect does the photogrammetric process have when the achieved direct orientation values are applied directly, compared with an aerial triangulation (AT) based on proper tiepoint matching? The paper aims to show the steps to be taken by potential users and gives a quality estimation of the importance and influence of the various calibration and adjustment steps.
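The floating lever-arm correction amounts to rotating the once-measured antenna-to-IMU vector by the stabilizer's instantaneous attitude before applying it. A one-axis sketch with illustrative numbers (the vector and angle are hypothetical, and a full implementation would use the complete roll/pitch/yaw rotation):

```python
import numpy as np

def rot_z(yaw):
    """Rotation about the vertical axis; yaw in radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def antenna_to_imu(lever_body, stabilizer_yaw):
    """Instantaneous GPS-antenna-to-IMU lever arm in the aircraft frame."""
    return rot_z(stabilizer_yaw) @ lever_body

lever = np.array([0.215, -0.034, 0.812])        # metres, measured by total station
print(antenna_to_imu(lever, np.radians(3.5)))   # corrected for stabilizer motion
```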
NASA Astrophysics Data System (ADS)
Cucci, Costanza; Casini, Andrea; Stefani, Lorenzo; Picollo, Marcello; Jussila, Jouni
2017-07-01
For more than a decade, a number of studies and research projects have been devoted to customizing hyperspectral imaging techniques to the specific needs of conservation and applications in a museum context. A growing scientific literature has demonstrated the effectiveness of reflectance hyperspectral imaging for non-invasive diagnostics and high-quality documentation of 2D artworks. Additional published studies tackle the problems of data processing, with a focus on the development of algorithms and software platforms optimized for the visualization and exploitation of hyperspectral big-data sets acquired on paintings. This scenario proves that, also in the field of Cultural Heritage (CH), reflectance hyperspectral imaging has nowadays reached the stage of a mature technology and is ready for the transition from the R&D phase to large-scale applications. In view of that, a novel concept of hyperspectral camera, featuring compactness, lightness and good usability, has been developed by SPECIM, Spectral Imaging Ltd. (Oulu, Finland), a company manufacturing products for hyperspectral imaging. The camera is proposed as a new tool for novel applications in the field of Cultural Heritage. The novelty of this device lies in its reduced dimensions and weight and in its user-friendly interface, which make this camera much more manageable and affordable than conventional hyperspectral instrumentation. The camera operates in the 400-1000 nm spectral range and can be mounted on a tripod. It can operate from short distances (tens of cm) to long distances (tens of meters) with different spatial resolutions. The first release of the prototype underwent a preliminary in-depth experimentation at the IFAC-CNR laboratories. This paper illustrates the feasibility study carried out on the new SPECIM hyperspectral camera, tested under different conditions on laboratory targets and artworks, with the specific aim of defining its potentialities and weaknesses for use in the Cultural Heritage field.
Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio
2014-11-01
We developed a new ultrahigh-sensitivity CMOS camera using a specific sensor that has a wide range of spectral sensitivity characteristics. The objective of this study is to present our updated endoscopic technology, which has successfully integrated two innovative functions: ultrasensitive imaging and advanced fluorescence viewing. Two different experiments were conducted: one to evaluate the function of the ultrahigh-sensitivity camera, the other to test the availability of the newly developed sensor and its performance as a fluorescence endoscope. In both studies, the distance from the endoscope tip to the target was varied, and the endoscopic images in each setting were taken for comparison. In the first experiment, the 3-CCD camera failed to display clear images under low illumination, and the target was hardly visible. In contrast, the CMOS camera was able to display the targets regardless of the camera-target distance under low illumination; under high illumination, the imaging quality of the two cameras was quite similar. In the second experiment, as a fluorescence endoscope, the CMOS camera was capable of clearly showing the fluorescence-activated organs. The ultrahigh-sensitivity CMOS HD endoscopic camera is expected to provide clear images under low illumination, in addition to fluorescence images under high illumination, in the field of laparoscopic surgery.
Control system for several rotating mirror camera synchronization operation
NASA Astrophysics Data System (ADS)
Liu, Ningwen; Wu, Yunfeng; Tan, Xianxiang; Lai, Guoji
1997-05-01
This paper introduces a single-chip microcomputer control system for the synchronized operation of several rotating-mirror high-speed cameras. The system consists of four parts: the microcomputer control unit (including the synchronization part, the precise measurement part and the time delay part), the shutter control unit, the motor driving unit and the high-voltage pulse generator unit. The control system has been used to control the synchronized operation of GSI cameras (driven by a motor) and FJZ-250 rotating-mirror cameras (driven by a gas-driven turbine). We have obtained films of the same object from different directions at different or identical speeds.
High-frame rate multiport CCD imager and camera
NASA Astrophysics Data System (ADS)
Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.
1993-01-01
A high-frame-rate visible CCD camera capable of operation at up to 200 frames per second is described. The camera produces a 256 x 256 pixel image by using one quadrant of a 512 x 512, 16-port, back-illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct 256 x 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.
Vemmer, T; Steinbüchel, C; Bertram, J; Eschner, W; Kögler, A; Luig, H
1997-03-01
The purpose of this study was to determine whether data acquisition in list mode and iterative tomographic reconstruction would render feasible cardiac phase-synchronized thallium-201 single-photon emission tomography (SPET) of the myocardium under routine conditions, without modifications in tracer dose, acquisition time, or the number of steps of the gamma camera. Seventy non-selected patients underwent 201Tl SPET imaging according to a routine protocol (74 MBq/2 mCi 201Tl, 180 degrees rotation of the gamma camera, 32 steps, 30 min). Gamma camera data, ECG, and a time signal were recorded in list mode. The cardiac cycle was divided into eight phases, the end-diastolic phase encompassing the QRS complex and the end-systolic phase the T wave. Both phase- and non-phase-synchronized tomograms based on the same list mode data were reconstructed iteratively, and the phase-synchronized and non-synchronized images were compared. Patients were divided into two groups depending on whether or not coronary artery disease had been definitely diagnosed prior to SPET imaging. The numbers of patients in both groups demonstrating defects visible on the phase-synchronized but not on the non-synchronized images were compared. It was found that both post-exercise and redistribution phase tomograms were suited for interpretation. The changes from end-diastolic to end-systolic images allowed a comparative assessment of regional wall motility and tracer uptake. End-diastolic tomograms provided the best definition of defects. Additional defects not apparent on non-synchronized images were visible in 40 patients, six of whom did not show any defect on the non-synchronized images. Of 42 patients in whom coronary artery disease had been definitely diagnosed, 19 had additional defects not visible on the non-synchronized images, in comparison with 21 of 28 in whom coronary artery disease was suspected (P < 0.02; chi 2). It is concluded that cardiac phase-synchronized 201Tl SPET of the myocardium was made feasible by list mode data acquisition and iterative reconstruction. The additional findings on the phase-synchronized tomograms, not visible on the non-synchronized ones, represented genuine defects. Cardiac phase-synchronized 201Tl SPET is advantageous in allowing simultaneous assessment of regional wall motion and tracer uptake, and in visualizing smaller defects.
High-speed imaging using 3CCD camera and multi-color LED flashes
NASA Astrophysics Data System (ADS)
Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis
2017-11-01
This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light-emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral shuttering, where a high-speed image sequence is captured using short-duration light pulses of different colors sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system from low-cost, readily available, off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz, or three-frame sequences at even higher frame rates. Both the color crosstalk and the spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the different channels is found, and a simple calibration procedure for correcting the images is introduced. The images captured using the approach described here are of sufficient quality to be used for obtaining full-field quantitative information with techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.
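Demultiplexing the recorded color frames back into a monochrome time sequence is straightforward, since the R, G and B channels of each 3CCD exposure hold three consecutive flash instants. The sketch below assumes a particular flash order; crosstalk and magnification corrections are omitted.

```python
import numpy as np

def demux(color_frames, flash_order=("r", "g", "b")):
    """Spectral shuttering demux: (n, H, W, 3) RGB stack -> (3n, H, W) sequence.

    flash_order gives the chronological order of the LED flashes within one
    exposure (an assumption here), so channels are emitted in that order.
    """
    channel = {"r": 0, "g": 1, "b": 2}
    frames = []
    for f in color_frames:
        for c in flash_order:
            frames.append(f[..., channel[c]])
    return np.stack(frames)   # frame rate is tripled relative to the camera
```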
Coincidence ion imaging with a fast frame camera
NASA Astrophysics Data System (ADS)
Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen
2014-12-01
A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots in each camera frame with the peak heights in the corresponding time-of-flight spectrum from the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
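The real-time centroiding step can be sketched with standard connected-component tools, as below; the spot intensities are what get correlated with the PMT time-of-flight peaks. The authors' production code is optimized well beyond this sketch, and the threshold is illustrative.

```python
import numpy as np
from scipy import ndimage

def centroid_spots(frame, threshold=50):
    """Threshold a camera frame, label the ion spots, and return
    their centroids (row, col) and integrated intensities."""
    mask = frame > threshold
    labels, n = ndimage.label(mask)
    idx = list(range(1, n + 1))
    centroids = ndimage.center_of_mass(frame, labels, idx)
    intensities = ndimage.sum(frame, labels, idx)
    return np.array(centroids), np.array(intensities)
```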
3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events
NASA Technical Reports Server (NTRS)
Brown, Richard; Navard, Andrew; Spruce, Joseph
2010-01-01
An analytical, advanced imaging method has been developed for the initial monitoring and identification of foam debris and similar anomalies that occur post-launch on the space shuttle's external tank (ET). Remote sensing technologies have been used to perform image enhancement and analysis on high-resolution, true-color images collected with the DCS 760 Kodak digital camera located in the right umbilical well of the space shuttle. Improvements to the camera, using filters, have added sharpness/definition to the image sets; however, image review/analysis of the ET has been limited by the fact that the images acquired by umbilical cameras during launch are two-dimensional, and are usually non-referenceable between frames due to the rotation and translation of the ET as it falls away from the space shuttle. Stereo pairs of these images can provide strong visual indicators that immediately portray the depth of damaged areas or the movement of fragments between frames that is not perceivable in two-dimensional images. A stereoscopic image visualization system has been developed to allow 3D depth perception of stereo-aligned image pairs taken from in-flight umbilical and handheld digital shuttle cameras. This new system has been developed to augment and optimize existing 2D monitoring capabilities. Using this system, candidate sequential image pairs are identified for transformation into stereo viewing pairs. Image orientation is corrected using control points (similar points) between frames to place the two images in the proper X-Y viewing perspective. The collected control points are used to generate a transformation equation that re-projects one image and effectively co-registers it to the other. The co-registered, oriented image pairs are imported into the WallView stereo viewing software package as an image set and are used as a 3D stereo analysis slide show. Multiple sequential image pairs can be used to allow forensic review of temporal phenomena between pairs. The observer, wearing linear polarized glasses, is able to review image pairs in passive 3D stereo.
Software defined multi-spectral imaging for Arctic sensor networks
NASA Astrophysics Data System (ADS)
Siewert, Sam; Angoth, Vivek; Krishnamurthy, Ramnarayan; Mani, Karthikeyan; Mock, Kenrick; Singh, Surjith B.; Srivistava, Saurav; Wagner, Chris; Claus, Ryan; Vis, Matthew Demi
2016-05-01
Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry Riddle Aeronautical University working with University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop-in-place installations in the Arctic. The prototype selected will be field tested in Alaska in the summer of 2016.
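A representative fusion workload of the kind benchmarked here can be as simple as a per-pixel weighted blend of co-registered visible and LWIR frames. The sketch below uses an illustrative 60/40 weighting; it is a stand-in for the class of kernels measured, not the SDMSI's actual fusion code.

```python
import numpy as np

def fuse(vis_rgb, lwir, alpha=0.6):
    """Blend visible luminance with a registered LWIR frame.

    vis_rgb: (H, W, 3) uint8 visible frame
    lwir:    (H, W) uint8 long-wave infrared frame, co-registered
    alpha:   weight given to the visible channel (illustrative)
    """
    luma = vis_rgb.astype(np.float32).mean(axis=2)        # simple luminance
    fused = alpha * luma + (1.0 - alpha) * lwir.astype(np.float32)
    return fused.astype(np.uint8)
```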
NASA Astrophysics Data System (ADS)
den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.
2015-10-01
Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
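The intra-camera part rests on the pinhole relation between a pedestrian's pixel height, an assumed average body height, and range: Z = f * H / h. A toy version ignoring camera tilt is shown below; the assumed height, the function names, and the averaging scheme are all hypothetical simplifications of the paper's estimation.

```python
import numpy as np

ASSUMED_HEIGHT_M = 1.72   # assumed average pedestrian height

def estimate_focal(pixel_heights, known_distances_m):
    """Averaged focal length estimate (pixels) from detections at known ranges,
    using f = Z * h / H for each detection."""
    h = np.asarray(pixel_heights, dtype=float)
    z = np.asarray(known_distances_m, dtype=float)
    return float(np.mean(h * z / ASSUMED_HEIGHT_M))

def distance_from_height(pixel_height, focal_px):
    """Range to a pedestrian from its apparent pixel height."""
    return focal_px * ASSUMED_HEIGHT_M / pixel_height
```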
Gyrocopter-Based Remote Sensing Platform
NASA Astrophysics Data System (ADS)
Weber, I.; Jenal, A.; Kneer, C.; Bongartz, J.
2015-04-01
In this paper the development of a lightweight and highly modularized airborne sensor platform for remote sensing applications, utilizing a gyrocopter as the carrier platform, is described. The current sensor configuration consists of a high-resolution DSLR camera for VIS-RGB recordings. As a second sensor modality, a snapshot hyperspectral camera was integrated into the aircraft. Moreover, a custom-developed thermal imaging system composed of a VIS-PAN camera and an LWIR camera is used for aerial recordings in the thermal infrared range. Furthermore, another custom-developed, highly flexible imaging system for high-resolution multispectral image acquisition with up to six spectral bands in the VIS-NIR range is presented. The performance of the overall system was tested during several flights with all sensor modalities, and the precalculated demands with respect to spatial resolution and reliability were validated. The collected data sets were georeferenced, georectified, orthorectified and then stitched into mosaics.
A detailed comparison of single-camera light-field PIV and tomographic PIV
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.
2018-03-01
This paper conducts a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, which extensively examined the difference between the two techniques by varying key parameters such as the pixel-to-microlens ratio (PMR), the light-field-to-Tomo-camera pixel ratio (LTPR), the particle seeding density and the number of tomographic cameras. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires the use of a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with the single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.
Virtual interactive presence and augmented reality (VIPAR) for remote surgical assistance.
Shenai, Mahesh B; Dillavou, Marcus; Shum, Corey; Ross, Douglas; Tubbs, Richard S; Shih, Alan; Guthrie, Barton L
2011-03-01
Surgery is a highly technical field that combines continuous decision-making with the coordination of spatiovisual tasks. We designed a virtual interactive presence and augmented reality (VIPAR) platform that allows a remote surgeon to deliver real-time virtual assistance to a local surgeon, over a standard Internet connection. The VIPAR system consisted of a "local" and a "remote" station, each situated over a surgical field and a blue screen, respectively. Each station was equipped with a digital viewpiece, composed of 2 cameras for stereoscopic capture, and a high-definition viewer displaying a virtual field. The virtual field was created by digitally compositing selected elements within the remote field into the local field. The viewpieces were controlled by workstations mutually connected by the Internet, allowing virtual remote interaction in real time. Digital renderings derived from volumetric MRI were added to the virtual field to augment the surgeon's reality. For demonstration, a fixed-formalin cadaver head and neck were obtained, and a carotid endarterectomy (CEA) and pterional craniotomy were performed under the VIPAR system. The VIPAR system allowed for real-time, virtual interaction between a local (resident) and remote (attending) surgeon. In both carotid and pterional dissections, major anatomic structures were visualized and identified. Virtual interaction permitted remote instruction for the local surgeon, and MRI augmentation provided spatial guidance to both surgeons. Camera resolution, color contrast, time lag, and depth perception were identified as technical issues requiring further optimization. Virtual interactive presence and augmented reality provide a novel platform for remote surgical assistance, with multiple applications in surgical training and remote expert assistance.
NASA Astrophysics Data System (ADS)
Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling
2018-06-01
Multi-camera vision systems are often needed to achieve large-scale, high-precision measurement because they have larger fields of view (FOV) than a single camera. In many applications the cameras have no, or only narrowly, overlapping FOVs, which poses a huge challenge to global calibration. This paper presents a global calibration method for multiple cameras without overlapping FOVs based on photogrammetry and a reconfigurable target. Firstly, two planar targets are fixed together to form a long target sized according to the distance between the two cameras to be calibrated. The relative position of the two planar targets can be obtained by photogrammetric methods and used as an invariant constraint in global calibration. Then, the reprojection errors of the target feature points in the two cameras' coordinate systems are computed jointly and minimized with the Levenberg–Marquardt algorithm to find the optimal transformation matrix between the two cameras. Finally, all camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost, and is especially suitable for on-site calibration.
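The core numerical step, Levenberg–Marquardt minimization of reprojection error, can be sketched generically as follows. The single-camera, single-pose setup, the intrinsic matrix, and the synthetic target are simplifications standing in for the paper's two-camera formulation.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def project(K, rvec, tvec, pts3d):
        """Pinhole projection of Nx3 world points into pixel coordinates."""
        R = Rotation.from_rotvec(rvec).as_matrix()
        cam = pts3d @ R.T + tvec            # world -> camera frame
        pix = cam @ K.T
        return pix[:, :2] / pix[:, 2:3]

    def residuals(x, K, pts3d, observed_uv):
        """Reprojection residuals for a 6-DOF pose packed as x = [rvec, tvec]."""
        return (project(K, x[:3], x[3:], pts3d) - observed_uv).ravel()

    # Synthetic check: recover a known pose from noisy image observations.
    rng = np.random.default_rng(0)
    K = np.array([[1200.0, 0, 640], [0, 1200.0, 480], [0, 0, 1]])
    pts3d = rng.uniform(-0.5, 0.5, (40, 3)) + [0.0, 0.0, 3.0]   # target ~3 m away
    true_x = np.array([0.05, -0.02, 0.01, 0.1, -0.05, 0.2])
    observed = project(K, true_x[:3], true_x[3:], pts3d) + rng.normal(0, 0.3, (40, 2))

    fit = least_squares(residuals, x0=np.zeros(6), args=(K, pts3d, observed), method="lm")
    print("recovered pose:", fit.x.round(3))    # approximately true_x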
Optimization of dynamic envelope measurement system for high speed train based on monocular vision
NASA Astrophysics Data System (ADS)
Wu, Bin; Liu, Changjie; Fu, Luhua; Wang, Zhong
2018-01-01
The dynamic envelope curve is defined as the maximum limit outline swept by the train, caused by various adverse effects during running. It is an important basis for setting railway clearance boundaries. At present, measurement of the dynamic envelope curve of high-speed vehicles is mainly achieved by binocular vision, and existing measuring systems suffer from poor portability, complicated procedures, and high cost. In this paper, a new measurement system based on monocular vision measurement theory and an analysis of the test environment is designed, and the measurement system parameters, the calibration of the wide-field-of-view camera, and the calibration of the laser plane are designed and optimized. The accuracy has been verified to be within 2 mm by repeated tests and analysis of the experimental data, validating the feasibility and adaptability of the measurement system. The system offers lower cost, a simpler measurement and data processing procedure, and more reliable data, and it requires no matching algorithm.
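The core of such a monocular laser-plane system is triangulation by ray-plane intersection: once the laser plane is calibrated in the camera frame, each pixel of the imaged laser stripe maps to a unique 3D point. A minimal sketch, with invented intrinsics and plane parameters:

    import numpy as np

    def pixel_to_point(K_inv, plane_n, plane_d, uv):
        """Intersect the camera ray through pixel uv with the laser plane
        n . X + d = 0 (camera frame); returns the 3D point on the vehicle."""
        ray = K_inv @ np.array([uv[0], uv[1], 1.0])   # ray direction, camera at origin
        t = -plane_d / (plane_n @ ray)                # solve n . (t * ray) + d = 0
        return t * ray

    K = np.array([[1000.0, 0, 960], [0, 1000.0, 600], [0, 0, 1]])
    K_inv = np.linalg.inv(K)
    plane_n = np.array([0.0, -0.94, 0.342])   # invented calibrated plane normal
    plane_d = -1.2                            # and offset, in metres

    print(pixel_to_point(K_inv, plane_n, plane_d, (980.0, 640.0)))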
NASA Technical Reports Server (NTRS)
2007-01-01
Topics include: Wearable Environmental and Physiological Sensing Unit; Broadband Phase Retrieval for Image-Based Wavefront Sensing; Filter Function for Wavefront Sensing Over a Field of View; Iterative-Transform Phase Retrieval Using Adaptive Diversity; Wavefront Sensing With Switched Lenses for Defocus Diversity; Smooth Phase Interpolated Keying; Maintaining Stability During a Conducted-Ripple EMC Test; Photodiode Preamplifier for Laser Ranging With Weak Signals; Advanced High-Definition Video Cameras; Circuit for Full Charging of Series Lithium-Ion Cells; Analog Nonvolatile Computer Memory Circuits; JavaGenes Molecular Evolution; World Wind 3D Earth Viewing; Lithium Dinitramide as an Additive in Lithium Power Cells; Accounting for Uncertainties in Strengths of SiC MEMS Parts; Ion-Conducting Organic/Inorganic Polymers; MoO3 Cathodes for High-Temperature Lithium Thin-Film Cells; Counterrotating-Shoulder Mechanism for Friction Stir Welding; Strain Gauges Indicate Differential-CTE-Induced Failures; Antibodies Against Three Forms of Urokinase; Understanding and Counteracting Fatigue in Flight Crews; Active Correction of Aberrations of Low-Quality Telescope Optics; Dual-Beam Atom Laser Driven by Spinor Dynamics; Rugged, Tunable Extended-Cavity Diode Laser; Balloon for Long-Duration, High-Altitude Flight at Venus; and Wide-Temperature-Range Integrated Operational Amplifier.
A high-speed digital camera system for the observation of rapid H-alpha fluctuations in solar flares
NASA Technical Reports Server (NTRS)
Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.
1989-01-01
Researchers developed a prototype digital camera system for obtaining H-alpha images of solar flares with 0.1 s time resolution. They intend to operate this system in conjunction with SMM's Hard X-Ray Burst Spectrometer, with the X-ray instruments that will be available on the Gamma Ray Observatory, and eventually with the Gamma Ray Imaging Device (GRID) and the High Resolution Gamma-Ray and Hard X-Ray Spectrometer (HIREGS), which are being developed for the Max '91 program. The digital camera has recently proven successful as a one-camera system operating in the blue wing of H-alpha during the first Max '91 campaign. Construction and procurement of a second, and possibly a third, camera for simultaneous observations at other wavelengths are underway, as are analyses of the campaign data.
Generation of animation sequences of three dimensional models
NASA Technical Reports Server (NTRS)
Poi, Sharon (Inventor); Bell, Brad N. (Inventor)
1990-01-01
The invention is directed toward a method and apparatus for generating an animated sequence through the movement of three-dimensional graphical models. A plurality of pre-defined graphical models are stored and manipulated in response to interactive commands or by means of a pre-defined command file. The models may be combined as part of a hierarchical structure to represent physical systems without need to create a separate model which represents the combined system. System motion is simulated through the introduction of translation, rotation and scaling parameters upon a model within the system. The motion is then transmitted down through the system hierarchy of models in accordance with hierarchical definitions and joint movement limitations. The present invention also calls for a method of editing hierarchical structure in response to interactive commands or a command file such that a model may be included, deleted, copied or moved within multiple system model hierarchies. The present invention also calls for the definition of multiple viewpoints or cameras which may exist as part of a system hierarchy or as an independent camera. The simulated movement of the models and systems is graphically displayed on a monitor and a frame is recorded by means of a video controller. Multiple movement and hierarchy manipulations are then recorded as a sequence of frames which may be played back as an animation sequence on a video cassette recorder.
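The motion-propagation scheme described, in which translation, rotation, and scaling applied to one model are transmitted down the hierarchy to its children, behaves like a standard scene graph. The sketch below is an illustrative reconstruction, not the patented implementation; the Node class and the two-link example are invented.

    import numpy as np

    class Node:
        def __init__(self, name, local=np.eye(4)):
            self.name, self.local, self.children = name, local, []

        def add(self, child):
            self.children.append(child)
            return child

        def world_matrices(self, parent=np.eye(4)):
            """Propagate transforms down the hierarchy to all children."""
            world = parent @ self.local
            yield self.name, world
            for c in self.children:
                yield from c.world_matrices(world)

    def translate(x, y, z):
        m = np.eye(4)
        m[:3, 3] = (x, y, z)
        return m

    # A two-link "arm": moving the base carries the elbow and hand with it.
    base = Node("base", translate(0, 1, 0))
    elbow = base.add(Node("elbow", translate(2, 0, 0)))
    hand = elbow.add(Node("hand", translate(1, 0, 0)))

    for name, m in base.world_matrices():
        print(name, m[:3, 3])      # base (0,1,0), elbow (2,1,0), hand (3,1,0)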
Small format digital photogrammetry for applications in the earth sciences
NASA Astrophysics Data System (ADS)
Rieke-Zapp, Dirk
2010-05-01
Photogrammetry is often considered one of the most precise and versatile surveying techniques: the same camera and analysis software can be used for measurements from the sub-millimetre to the kilometre scale. Such a measurement device is well suited for use by earth scientists working in the field. In this case, a small toolset and a straightforward setup best fit the needs of the operator. While a digital camera is typically already part of the field equipment of an earth scientist, the main focus of the field work is often not surveying; at the same time, a lack of photogrammetric training requires an easy-to-learn, straightforward surveying technique. A photogrammetric method aimed primarily at earth scientists was therefore developed for taking accurate measurements in the field while minimizing the extra bulk and weight of the required equipment. The work included several challenges. A) Definition of an upright coordinate system without heavy and bulky tools like a total station or GNSS sensor. B) Optimization of image acquisition and the geometric stability of the image block. C) Identification of a small camera suitable for precise measurements in the field. D) Optimization of the workflow from image acquisition to preparation of images for stereo measurements. E) Introduction of students and non-photogrammetrists to the workflow. Wooden spheres were used as target points in the field; they were more rugged than the ping-pong balls used in a previous setup and were available in different sizes. Distances between three spheres were introduced as scale information in a photogrammetric adjustment. The distances were measured with a laser distance meter accurate to 1 mm (1 sigma). The vertical angle between the spheres was measured with the same laser distance meter; the precision of this measurement was 0.3° (1 sigma), which is sufficient, i.e., better than inclination measurements with a geological compass. The upright coordinate system is important for measuring the dip angle of geologic features in outcrop. The planimetric coordinate system is arbitrary but may easily be oriented to compass north by introducing a compass direction measurement. The wooden spheres and a Leica Disto D3 laser distance meter added less than 0.150 kg to the field equipment, considering that a suitable digital camera was already part of it. Identification of a small digital camera suitable for precise measurements was a major part of this work. A group of cameras was calibrated several times over different periods of time on a testfield. Further evaluation involved an accuracy assessment in the field, comparing distances between signalized points calculated from a photogrammetric setup with coordinates derived from a total station survey. The smallest camera in the test, a Ricoh, required calibration on the job, as its interior orientation changed significantly between testfield calibration and use in the field. We attribute this to the fact that the lens was retracted when the camera was switched off. Fairly stable camera geometry in a compact-size camera with a lens-retracting system was accomplished for the Sigma DP1 and DP2 cameras. While the pixel count of these cameras was lower than for the Ricoh, the pixel pitch of the Sigma cameras was much larger; hence, the same mechanical movement has less per-pixel effect for the Sigma cameras than for the Ricoh camera.
A large pixel pitch may therefore compensate for some camera instability, explaining why cameras with large sensors and larger pixel pitch typically yield better accuracy in object space. Both Sigma cameras weigh approximately 0.250 kg and may even be suitable for use with ultralight aerial vehicles (UAVs), which have payload restrictions of 0.200 to 0.300 kg. A set of other available cameras was also tested on a calibration field and on location, showing once again that it is difficult to infer geometric stability from camera specifications. Image acquisition with the geometrically stable cameras was fairly straightforward, covering the area of interest with stereo pairs for analysis. We limited our tests to setups with three to five images to minimize the amount of post-processing. The laser dot of the laser distance meter was not visible to the naked eye at distances beyond 5-7 m, which also limited the maximum stereo area that may be covered with this technique. Extrapolating the setup to fairly large areas showed no significant decrease in the accuracy accomplished in object space. Working with a Sigma SD14 SLR camera on a 6 × 18 × 20 m³ volume, the maximum length measurement error ranged between 20 and 30 mm, depending on the image setup and analysis. For smaller outcrops, even the compact cameras yielded maximum length measurement errors in the mm range, which was considered sufficient for measurements in the earth sciences. In many cases, the resolution per pixel, rather than accuracy, was the limiting factor of image analysis. A field manual was developed to guide novice users and students in this technique. The technique does not sacrifice precision for ease of use; therefore, successful users of the presented method easily grow into more advanced photogrammetric methods for high-precision applications. Originally, camera calibration was not part of the methodology for novice operators. The recent introduction of Camera Calibrator, a low-cost, well-automated software package for camera calibration, allows beginners to calibrate their camera within a couple of minutes. The complete set of calibration parameters can be applied in ERDAS LPS software, easing the workflow. Image orientation was performed in LPS 9.2 software, which was also used for further image analysis.
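As an aside, the scale step described above, transferring a laser-measured distance between two sphere targets onto an otherwise unit-free photogrammetric model, reduces to a simple rescaling. The coordinates and the 1.250 m reading below are made-up values for illustration.

    import numpy as np

    def scale_model(model_pts, i, j, measured_dist_m):
        """Scale free-network model coordinates so that the distance between
        sphere centres i and j equals the laser-measured distance."""
        model_dist = np.linalg.norm(model_pts[i] - model_pts[j])
        return model_pts * (measured_dist_m / model_dist)

    pts = np.array([[0.0, 0.0, 0.0], [3.1, 0.0, 0.0], [1.4, 2.2, 0.3]])  # arbitrary units
    scaled = scale_model(pts, 0, 1, measured_dist_m=1.250)   # hypothetical Disto reading
    print(np.linalg.norm(scaled[0] - scaled[1]))             # -> 1.25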
High-precision method of binocular camera calibration with a distortion model.
Li, Weimin; Shan, Siyu; Liu, Hui
2017-03-10
A high-precision camera calibration method for a binocular stereo vision system, based on a multi-view template and alternating bundle adjustment, is presented in this paper. The proposed method is carried out by taking several photos of a specially designed calibration template that has diverse encoded points in different orientations. The method utilizes an existing monocular camera calibration algorithm to obtain the initialization, which involves a camera model including radial and tangential lens distortion. We created a reference coordinate system based on the left camera coordinate frame and optimized the intrinsic parameters of the left camera through alternating bundle adjustment to obtain optimal values. Then, optimal intrinsic parameters of the right camera were obtained in the same way, using a reference coordinate system based on the right camera coordinate frame. All the acquired intrinsic parameters were then used to optimize the extrinsic parameters. Thus, the optimal lens distortion parameters and intrinsic and extrinsic parameters were obtained. Synthetic and real data were used to test the method. The simulation results demonstrate that the maximum mean absolute relative calibration errors are about 3.5e-6 and 1.2e-6 for the focal length and the principal point, respectively, under zero-mean Gaussian noise with 0.05 pixels standard deviation. The real result shows that the reprojection error of our model is about 0.045 pixels, with a relative standard deviation of 1.0e-6 over the intrinsic parameters. The proposed method is convenient, cost-efficient, highly precise, and simple to carry out.
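The camera model mentioned, radial plus tangential lens distortion, is commonly written in the Brown-Conrady form, which the sketch below assumes; the coefficient values are illustrative, and the paper's exact parameterisation may differ.

    import numpy as np

    def distort(xy, k1, k2, p1, p2):
        """Apply radial (k1, k2) and tangential (p1, p2) distortion to
        normalized image coordinates xy (Nx2)."""
        x, y = xy[:, 0], xy[:, 1]
        r2 = x**2 + y**2
        radial = 1 + k1 * r2 + k2 * r2**2
        xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
        yd = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
        return np.stack([xd, yd], axis=1)

    pts = np.array([[0.1, 0.05], [0.3, -0.2]])
    print(distort(pts, k1=-0.12, k2=0.03, p1=1e-3, p2=-5e-4))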
NICMOS PEERS INTO HEART OF DYING STAR
NASA Technical Reports Server (NTRS)
2002-01-01
The Egg Nebula, also known as CRL 2688, is shown on the left as it appears in visible light with the Hubble Space Telescope's Wide Field and Planetary Camera 2 (WFPC2) and on the right as it appears in infrared light with Hubble's Near Infrared Camera and Multi-Object Spectrometer (NICMOS). Since infrared light is invisible to humans, the NICMOS image has been assigned colors to distinguish different wavelengths: blue corresponds to starlight reflected by dust particles, and red corresponds to heat radiation emitted by hot molecular hydrogen. Objects like the Egg Nebula are helping astronomers understand how stars like our Sun expel carbon and nitrogen -- elements crucial for life -- into space. Studies on the Egg Nebula show that these dying stars eject matter at high speeds along a preferred axis and may even have multiple jet-like outflows. The signature of the collision between this fast-moving material and the slower outflowing shells is the glow of hydrogen molecules captured in the NICMOS image. The distance between the tips of the jets is approximately 200 times the diameter of our solar system (out to Pluto's orbit). Credits: Rodger Thompson, Marcia Rieke, Glenn Schneider, Dean Hines (University of Arizona); Raghvendra Sahai (Jet Propulsion Laboratory); NICMOS Instrument Definition Team; and NASA. Image files in GIF and JPEG format and captions may be accessed on the Internet via anonymous ftp from ftp.stsci.edu in /pubinfo.
Nissen, Nicholas N; Menon, Vijay; Williams, James; Berci, George
2011-01-01
Background: The use of loupe magnification during complex hepatobiliary and pancreatic (HBP) surgery has become routine. Unfortunately, loupe magnification has several disadvantages, including limited magnification, a fixed field, and non-variable magnification parameters. The aim of this report is to describe a simple system of video-microscopy for use in open surgery as an alternative to loupe magnification. Methods: In video-microscopy, the operative field is displayed on a TV monitor using a high-definition (HD) camera with a special optic mounted on an adjustable mechanical arm. The set-up and application of this system are described and illustrated using examples drawn from pancreaticoduodenectomy, bile duct repair, and liver transplantation. Results: The system is easy to use and can provide variable magnification of ×4-12 at a camera distance of 25-35 cm from the operative field, with a depth of field of 15 mm. It allows the surgeon and assistant to work from an HD TV screen during critical phases of microsurgery. Conclusions: The system described here provides better magnification than loupe lenses and thus may be beneficial during complex HBP procedures. Other benefits include decreased neck strain and postural fatigue for the surgeon, and the system can be used as a tool for documentation and teaching. PMID:21929677
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Arthur; van Beuzekom, Martin; Bouwens, Bram; ...
2017-11-07
Here, we demonstrate a coincidence velocity map imaging apparatus equipped with a novel time-stamping fast optical camera, Tpx3Cam, whose high sensitivity and nanosecond timing resolution allow for simultaneous position and time-of-flight detection. This single detector design is simple, flexible, and capable of highly differential measurements. We show detailed characterization of the camera and its application in strong field ionization experiments.
Rigante, M; La Rocca, G; Lauretti, L; D'Alessandris, G Q; Mangiola, A; Anile, C; Olivi, A; Paludetti, G
2017-06-01
During the last two decades, endoscopic skull base surgery has seen continuous technical and technological development. 3D endoscopy and ultra-high-definition (HD) endoscopy have provided great advances in terms of visualisation and spatial resolution, and ultra-high-definition (UHD) 4K systems, recently introduced into clinical practice, will shape the next steps forward, especially in the skull base surgery field. Patients were operated on through transnasal transsphenoidal endoscopic approaches using an Olympus NBI 4K UHD endoscope with a 4 mm 0° Ultra Telescope and a 300 W xenon lamp (CLV-S400) predisposed for narrow band imaging (NBI) technology, connected through a camera head to a high-quality control unit (OTV-S400 - VISERA 4K UHD) (Olympus Corporation, Tokyo, Japan). Two screens were used: one 31" monitor (LMD-X310S) and one main ultra-HD 55" screen optimised for UHD image reproduction (LMD-X550S). In selected cases, we used a navigation system (StealthStation S7, Medtronic, Minneapolis, MN, US). We evaluated 22 pituitary adenomas (86.3% macroadenomas; 13.7% microadenomas); 50% were non-functional (NF), 22.8% GH-, 18.2% ACTH-, and 9% PRL-secreting. Three of 22 were recurrences. In 91% of cases we achieved total removal, and in 9% near-total resection. Mean follow-up was 187 days, and average length of hospitalisation was 3.09 ± 0.61 days. Surgical duration was 128.18 ± 30.74 minutes. We experienced only one case of intraoperative low-flow fistula, with no further complications. None of the cases required any post- or intraoperative blood transfusion. The visualisation and high resolution of the operative field provided a very detailed view of all anatomical structures and pathologies, allowing an improvement in the safety and efficacy of the surgical procedure. The operative time was similar to that of standard 2D HD and 3D procedures, and the physical strain was comparable in terms of ergonomics and weight. © Copyright by Società Italiana di Otorinolaringologia e Chirurgia Cervico-Facciale, Rome, Italy.
NASA Astrophysics Data System (ADS)
Neukum, Gerhard; Jaumann, Ralf; Scholten, Frank; Gwinner, Klaus
2017-11-01
At the Institute of Space Sensor Technology and Planetary Exploration of the German Aerospace Center (DLR), the High Resolution Stereo Camera (HRSC) was designed for international missions to the planet Mars. For more than three years, an airborne version of this camera, the HRSC-A, has been successfully deployed in many flight campaigns and in a variety of different applications. It combines 3D capability and high resolution with multispectral data acquisition, and variable resolutions can be generated depending on the camera control settings. A high-end GPS/INS system, in combination with the multi-angle image information, yields precise, high-frequency orientation data for the acquired image lines. To handle these data, a completely automated photogrammetric processing system has been developed, which allows the generation of multispectral 3D image products for large areas, with planimetric and height accuracies in the decimeter range. This accuracy has been confirmed by detailed investigations.
The integrated design and archive of space-borne signal processing and compression coding
NASA Astrophysics Data System (ADS)
He, Qiang-min; Su, Hao-hang; Wu, Wen-bo
2017-10-01
With the increasing demand from users for the extraction of remote sensing image information, it is urgent to significantly enhance the imaging quality and imaging capability of the whole system through an integrated design that achieves a compact structure, low mass, and higher attitude maneuverability. At present, the remote sensing camera's video signal processing unit and its image compression and coding unit are distributed across different devices; the volume, weight, and power consumption of these two units are relatively large, which cannot meet the requirements of a high-mobility remote sensing camera. In accordance with the technical requirements of a high-mobility remote sensing camera, this paper designs a space-borne integrated signal processing and compression circuit, drawing on several technologies such as high-speed, high-density analog-digital mixed PCB design, embedded DSP technology, and image compression based on special-purpose chips. This circuit lays a solid foundation for the research of high-mobility remote sensing cameras.
Commercially available high-speed system for recording and monitoring vocal fold vibrations.
Sekimoto, Sotaro; Tsunoda, Koichi; Kaga, Kimitaka; Makiyama, Kiyoshi; Tsunoda, Atsunobu; Kondo, Kenji; Yamasoba, Tatsuya
2009-12-01
We have developed a special purpose adaptor making it possible to use a commercially available high-speed camera to observe vocal fold vibrations during phonation. The camera can capture dynamic digital images at speeds of 600 or 1200 frames per second. The adaptor is equipped with a universal-type attachment and can be used with most endoscopes sold by various manufacturers. Satisfactory images can be obtained with a rigid laryngoscope even with the standard light source. The total weight of the adaptor and camera (including battery) is only 1010 g. The new system comprising the high-speed camera and the new adaptor can be purchased for about $3000 (US), while the least expensive stroboscope costs about 10 times that price, and a high-performance high-speed imaging system may cost 100 times as much. Therefore the system is both cost-effective and useful in the outpatient clinic or casualty setting, on house calls, and for the purpose of student or patient education.
Improving depth maps of plants by using a set of five cameras
NASA Astrophysics Data System (ADS)
Kaczmarek, Adam L.
2015-03-01
Obtaining high-quality depth maps and disparity maps with the use of a stereo camera is a challenging task for some kinds of objects. The quality of these maps can be improved by taking advantage of a larger number of cameras. Research on the use of a set of five cameras to obtain disparity maps is presented. The set consists of a central camera and four side cameras. An algorithm for making disparity maps, called multiple similar areas (MSA), is introduced. The algorithm was specially designed for the set of five cameras. Experiments were performed with the MSA algorithm and a stereo matching algorithm based on the sum of sums of squared differences (sum of SSD, SSSD) measure. Moreover, the following measures were included in the experiments: sum of absolute differences (SAD), zero-mean SAD (ZSAD), zero-mean SSD (ZSSD), locally scaled SAD (LSAD), locally scaled SSD (LSSD), normalized cross correlation (NCC), and zero-mean NCC (ZNCC). The algorithms presented were applied to images of plants. Making depth maps of plants is difficult because parts of leaves are similar to each other. The potential usability of the described algorithms is especially high in agricultural applications such as robotic fruit harvesting.
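The SSSD baseline against which MSA is compared sums per-camera SSD costs over all side cameras before picking the disparity. A rough sketch, assuming rectified side cameras whose epipolar lines run along unit baseline directions (an idealisation of the five-camera rig):

    import numpy as np

    def ssd(a, b):
        """Sum of squared differences between two equally sized patches."""
        return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

    def best_disparity(center_img, side_imgs, y, x, win=3, d_max=32):
        """Pick the disparity minimizing the sum of per-camera SSD costs.

        side_imgs: list of (image, (dy, dx)) pairs, where (dy, dx) is the
        unit direction along which disparity shifts in that side camera.
        """
        c = center_img[y - win:y + win + 1, x - win:x + win + 1]
        best_cost, best_d = np.inf, 0
        for d in range(d_max):
            cost = 0.0
            for img, (dy, dx) in side_imgs:
                yy, xx = y + dy * d, x + dx * d
                patch = img[yy - win:yy + win + 1, xx - win:xx + win + 1]
                if patch.shape != c.shape:      # window fell outside the image
                    cost = np.inf
                    break
                cost += ssd(c, patch)
            if cost < best_cost:
                best_cost, best_d = cost, d
        return best_d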
Tests of commercial colour CMOS cameras for astronomical applications
NASA Astrophysics Data System (ADS)
Pokhvala, S. M.; Reshetnyk, V. M.; Zhilyaev, B. E.
2013-12-01
We present some results of testing commercial colour CMOS cameras for astronomical applications. Colour CMOS sensors allow photometry to be performed in three filters simultaneously, which gives a great advantage compared with monochrome CCD detectors. The Bayer BGR colour system realized in colour CMOS sensors is close to the astronomical Johnson BVR system. The basic camera characteristics, namely read noise (e^{-}/pix), thermal noise (e^{-}/pix/sec), and electronic gain (e^{-}/ADU), are presented for the commercial digital camera Canon 5D Mark III, together with the same characteristics for the scientific high-performance cooled CCD camera system ALTA E47. Comparison of the test results for the Canon 5D Mark III and the ALTA E47 CCD shows that present-day commercial colour CMOS cameras can seriously compete with scientific CCD cameras in deep astronomical imaging.
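The abstract does not say how the gain and noise figures were derived; a standard route is the mean-variance (photon transfer) technique, sketched below under that assumption using a pair of flat-field frames.

    import numpy as np

    def gain_e_per_adu(flat1, flat2, bias_level=0.0):
        """Estimate gain (e-/ADU) from a pair of identical flat-field exposures.

        For shot-noise-limited data, var(signal) = mean(signal) / gain, so
        gain ~= mean / variance. Differencing the two flats cancels the
        fixed-pattern noise; the difference variance is twice the per-frame one.
        """
        flat1 = flat1.astype(float)
        flat2 = flat2.astype(float)
        mean_signal = 0.5 * (flat1.mean() + flat2.mean()) - bias_level
        var_signal = np.var(flat1 - flat2) / 2.0
        return mean_signal / var_signal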
ColorChecker at the beach: dangers of sunburn and glare
NASA Astrophysics Data System (ADS)
McCann, John
2014-01-01
In High-Dynamic-Range (HDR) imaging, optical veiling glare sets the limits of accurate scene information recorded by a camera. But, what happens at the beach? Here we have a Low-Dynamic-Range (LDR) scene with maximal glare. Can we calibrate a camera at the beach and not be burnt? We know that we need sunscreen and sunglasses, but what about our cameras? The effect of veiling glare is scene-dependent. When we compare RAW camera digits with spotmeter measurements we find significant differences. As well, these differences vary, depending on where we aim the camera. When we calibrate our camera at the beach we get data that is valid for only that part of that scene. Camera veiling glare is an issue in LDR scenes in uniform illumination with a shaded lens.
Poland, Michael P.; Dzurisin, Daniel; LaHusen, Richard G.; Major, John J.; Lapcewich, Dennis; Endo, Elliot T.; Gooding, Daniel J.; Schilling, Steve P.; Janda, Christine G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.
2008-01-01
Images from a Web-based camera (Webcam) located 8 km north of Mount St. Helens and a network of remote, telemetered digital cameras were used to observe eruptive activity at the volcano between October 2004 and February 2006. The cameras offered the advantages of low cost, low power, flexibility in deployment, and high spatial and temporal resolution. Images obtained from the cameras provided important insights into several aspects of dome extrusion, including rockfalls, lava extrusion rates, and explosive activity. Images from the remote, telemetered digital cameras were assembled into time-lapse animations of dome extrusion that supported monitoring, research, and outreach efforts. The wide-ranging utility of remote camera imagery should motivate additional work, especially to develop the three-dimensional quantitative capabilities of terrestrial camera networks.
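The time-lapse assembly step can be sketched in a few lines; the directory name, codec, and playback rate below are hypothetical, and OpenCV stands in for whatever tooling the observatory actually used.

    import glob
    import cv2

    frames = sorted(glob.glob("dome_images/*.jpg"))      # hypothetical image directory
    first = cv2.imread(frames[0])
    h, w = first.shape[:2]
    writer = cv2.VideoWriter("dome_timelapse.avi",
                             cv2.VideoWriter_fourcc(*"MJPG"), 15, (w, h))
    for path in frames:
        img = cv2.imread(path)
        if img is not None and img.shape[:2] == (h, w):  # skip corrupt downloads
            writer.write(img)
    writer.release()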
Camera artifacts in IUE spectra
NASA Technical Reports Server (NTRS)
Bruegman, O. W.; Crenshaw, D. M.
1994-01-01
This study of emission-line-mimicking features in the IUE cameras has produced an atlas of artifacts in high-dispersion images, with an accompanying table of prominent artifacts, a table of prominent artifacts in the raw images, and a median image of the sky background for each IUE camera.
Camera Ready: Capturing a Digital History of Chester
ERIC Educational Resources Information Center
Lehman, Kathy
2008-01-01
Armed with digital cameras, voice recorders, and movie cameras, students from Thomas Dale High School in Chester, Virginia, have been exploring neighborhoods, interviewing residents, and collecting memories of their hometown. In this article, the author describes "Digital History of Chester", a project for creating a commemorative DVD.…
Transmission electron microscope CCD camera
Downing, Kenneth H.
1999-01-01
In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.
NASA Technical Reports Server (NTRS)
1992-01-01
The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single lens, reflex viewing design with a 15 perforation per frame horizontal pull across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.
An image compression algorithm for a high-resolution digital still camera
NASA Technical Reports Server (NTRS)
Nerheim, Rosalee
1989-01-01
The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to Earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.
Texture-adaptive hyperspectral video acquisition system with a spatial light modulator
NASA Astrophysics Data System (ADS)
Fang, Xiaojing; Feng, Jiao; Wang, Yongjin
2014-10-01
We present a new hybrid camera system based on a spatial light modulator (SLM) to capture texture-adaptive high-resolution hyperspectral video. The hybrid camera system records a hyperspectral video with low spatial resolution using a gray-scale camera and a high-spatial-resolution video using an RGB camera. The hyperspectral video is subsampled by the SLM, and the subsampled points can be selected adaptively according to the texture characteristics of the scene by combining digital image analysis with computational processing. In this paper, we propose an adaptive sampling method utilizing texture segmentation and the wavelet transform (WT), and we demonstrate the effectiveness of the sampling pattern on the SLM with the proposed method.
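A minimal version of the texture-driven selection could rate image regions by their wavelet detail energy and keep the most textured fraction. This sketch (one Haar decomposition level via PyWavelets) illustrates the idea and is not the authors' exact sampling algorithm.

    import numpy as np
    import pywt

    def texture_mask(gray, keep_fraction=0.2):
        """Mark pixels in high-texture regions using one level of a 2D Haar
        wavelet transform; keep_fraction controls the sampling budget."""
        cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), "haar")
        energy = cH**2 + cV**2 + cD**2                # local high-frequency energy
        thresh = np.quantile(energy, 1.0 - keep_fraction)
        mask_small = energy >= thresh
        # each level-1 coefficient covers a 2x2 pixel block
        mask = np.kron(mask_small, np.ones((2, 2), dtype=bool))
        return mask[:gray.shape[0], :gray.shape[1]]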
Design of the high resolution optical instrument for the Pleiades HR Earth observation satellites
NASA Astrophysics Data System (ADS)
Lamard, Jean-Luc; Gaudin-Delrieu, Catherine; Valentini, David; Renard, Christophe; Tournier, Thierry; Laherrere, Jean-Marc
2017-11-01
As part of its contribution to Earth observation from space, ALCATEL SPACE designed, built, and tested the high-resolution cameras for the European intelligence satellites HELIOS I and II. Through these programmes, ALCATEL SPACE enjoys an international reputation, and its capability and experience in high-resolution instrumentation are recognised by most customers. Following the SPOT programme, it was decided to go ahead with the PLEIADES HR programme. PLEIADES HR is the optical high-resolution component of a larger optical and radar multi-sensor system, ORFEO, which is developed in cooperation between France and Italy for dual civilian and defence use. ALCATEL SPACE has been entrusted by CNES with the development of the high-resolution camera of the Earth observation satellites PLEIADES HR. The first optical satellite of the PLEIADES HR constellation will be launched in mid-2008; the second will follow in 2009. To minimize development costs, a mini-satellite approach has been selected, leading to a compact concept for the camera design. The paper describes the design and performance budgets of this novel high-resolution, large-field-of-view optical instrument, with emphasis on its technological features. This new generation of camera represents a breakthrough in comparison with the previous SPOT cameras owing to a significant step in on-ground resolution, which approaches the capabilities of aerial photography. Recent advances in detector technology, optical fabrication, and electronics make it possible for the PLEIADES HR camera to achieve its image quality performance goals while staying within weight and size restrictions normally considered suitable only for much lower-performance systems. The camera design delivers superior performance using an innovative low-power, low-mass, scalable architecture, which provides a versatile approach for a variety of imaging requirements and allows for a wide range of accommodation possibilities with a mini-satellite-class platform.
Thermographic measurements of high-speed metal cutting
NASA Astrophysics Data System (ADS)
Mueller, Bernhard; Renz, Ulrich
2002-03-01
Thermographic measurements of a high-speed cutting process have been performed with an infrared camera. To capture images without motion blur, the integration times were reduced to a few microseconds. Since tool wear strongly influences the measured temperatures, a set-up was realized that enables small cutting lengths. Only single images were recorded, because the process is too fast to acquire a sequence of images even at the frame rate of the very fast infrared camera that was used. To expose the camera when the rotating tool is in the middle of the image, an experimental set-up with a light barrier and a digital delay generator with a time resolution of 1 ns was realized. This enables very exact triggering of the camera at the desired position of the tool in the image. Since the cutting depth is between 0.1 and 0.2 mm, a high spatial resolution was also necessary, which was obtained by a special close-up lens allowing a resolution of approximately 45 µm. The experimental set-up is described, and infrared images and evaluated temperatures of a titanium alloy and a carbon steel are presented for cutting speeds up to 42 m/s.
NASA Astrophysics Data System (ADS)
Saeb Gilani, T.; Villringer, C.; Zhang, E.; Gundlach, H.; Buchmann, J.; Schrader, S.; Laufer, J.
2018-02-01
Tomographic photoacoustic (PA) images acquired using a Fabry-Perot (FP) based scanner offer high resolution and image fidelity but can result in long acquisition times due to the need for raster scanning. To reduce the acquisition times, a parallelised camera-based PA signal detection scheme is developed. The scheme is based on using an sCMOS camera and FPI sensors with high homogeneity of optical thickness. PA signals were acquired using the camera-based setup, and the signal-to-noise ratio (SNR) was measured. A comparison is made between the SNR of PA signals detected using (1) a photodiode in a conventional raster-scanning detection scheme and (2) an sCMOS camera in the parallelised detection scheme. The results show that the parallelised interrogation scheme has the potential to provide high-speed PA imaging.
NASA Astrophysics Data System (ADS)
Hyzer, W. G.
1981-10-01
Significant advances in high-speed camera technology are being made in the Union of Soviet Socialist Republics (USSR) and People's Republic of China (PRC), which were revealed to the author during recent visits to both of these countries. Past and present developments in high-speed cameras are described in this paper based on personal observations by the author and on private communications with other technical observers. Detailed specifications on individual instruments are presented in those specific cases where such information has been revealed and could be verified.
Comparison of 10 digital SLR cameras for orthodontic photography.
Bister, D; Mordarai, F; Aveling, R M
2006-09-01
Digital photography is now widely used to document orthodontic patients. High quality intra-oral photography depends on a satisfactory 'depth of field' focus and good illumination. Automatic 'through the lens' (TTL) metering is ideal to achieve both the above aims. Ten current digital single lens reflex (SLR) cameras were tested for use in intra- and extra-oral photography as used in orthodontics. The manufacturers' recommended macro-lens and macro-flash were used with each camera. Handling characteristics, colour-reproducibility, quality of the viewfinder and flash recharge time were investigated. No camera took acceptable images in factory default setting or 'automatic' mode: this mode was not present for some cameras (Nikon, Fujifilm); led to overexposure (Olympus) or poor depth of field (Canon, Konica-Minolta, Pentax), particularly for intra-oral views. Once adjusted, only Olympus cameras were able to take intra- and extra-oral photographs without the need to change settings, and were therefore the easiest to use. All other cameras needed adjustments of aperture (Canon, Konica-Minolta, Pentax), or aperture and flash (Fujifilm, Nikon), making the latter the most complex to use. However, all cameras produced high quality intra- and extra-oral images, once appropriately adjusted. The resolution of the images is more than satisfactory for all cameras. There were significant differences relating to the quality of colour reproduction, size and brightness of the viewfinders. The Nikon D100 and Fujifilm S 3 Pro consistently scored best for colour fidelity. Pentax and Konica-Minolta had the largest and brightest viewfinders.
Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi; Uchida, Kenji; Igarashi, Yuko; Yokoyama, Tsuyoshi; Takahashi, Masaki; Shiba, Chie; Yoshimura, Mana; Tokuuye, Koichi; Yamashina, Akira
2013-01-01
Cadmium-zinc-telluride (CZT) solid-state detectors have been recently introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest (99m)Tc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and for rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, which is equivalent to standard myocardial single-photon emission computed tomography, despite a short scan time of less than half of the standard time.
Development of low-cost high-performance multispectral camera system at Banpil
NASA Astrophysics Data System (ADS)
Oduor, Patrick; Mizuno, Genki; Olah, Robert; Dutta, Achyut K.
2014-05-01
Banpil Photonics (Banpil) has developed a low-cost, high-performance multispectral camera system for visible to short-wave infrared (VIS-SWIR) imaging for the most demanding high-sensitivity and high-speed military, commercial, and industrial applications. The 640x512 pixel InGaAs uncooled camera system is designed to provide a compact, small form factor within a cubic inch, high sensitivity needing less than 100 electrons, high dynamic range exceeding 190 dB, high frame rates greater than 1000 frames per second (FPS) at full resolution, and low power consumption below 1 W. These are practically all the features highly desirable in military imaging applications, enabling deployment to every warfighter while maintaining the low-cost structure demanded for scaling into commercial markets. This paper describes Banpil's development of the camera system, including the features of the image sensor with an innovation integrating advanced digital electronics functionality, which has made the confluence of high-performance capabilities on the same imaging platform practical at low cost. It discusses the strategies employed, including innovations in the key components (e.g., the focal plane array (FPA) and read-out integrated circuitry (ROIC)) within our control while maintaining a fabless model, and strategic collaboration with partners to attain additional cost reductions on optics, electronics, and packaging. We highlight the challenges and potential opportunities for further cost reductions to achieve the goal of a sub-$1000 uncooled high-performance camera system. Finally, a brief overview of emerging military, commercial, and industrial applications that will benefit from this high-performance imaging system, and their forecast cost structure, is presented.
2016-05-27
LAST YEAR, the Care Quality Commission issued guidance to families on using hidden cameras if they are concerned that their relatives are being abused or receiving poor care. Filming in care settings has also resulted in high profile prosecutions, and numerous TV documentaries. Joe Plomin, the author, was the undercover producer who exposed the abuse at Winterbourne View, near Bristol, in 2011.
Very High-Speed Digital Video Capability for In-Flight Use
NASA Technical Reports Server (NTRS)
Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald
2006-01-01
A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub(TradeMark) synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter-Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data can also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft. Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and to quantify the aerodynamic trajectories of the debris.
Plenoptic Image Motion Deblurring.
Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo
2018-04-01
We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state of the art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.
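To convey the energy-minimization framing in miniature, the toy below alternates gradient steps on a sharp image and a blur kernel for a plain 2D image under circular convolution. It omits the light-field imaging model, the regularizers, and the multiscale handling that the actual method requires, so it is a conceptual sketch only.

    import numpy as np

    def blind_deconv(blurry, iters=200, lr_x=0.5, lr_k=1e-3):
        """Alternating gradient descent on E(x, k) = 0.5 * ||k (*) x - b||^2,
        where (*) is circular convolution; regularizers are omitted."""
        x = blurry.astype(float).copy()
        k = np.zeros_like(x)
        k[0, 0] = 1.0                                  # start from an identity blur
        for _ in range(iters):
            Kf, Xf = np.fft.fft2(k), np.fft.fft2(x)
            r = np.real(np.fft.ifft2(Kf * Xf)) - blurry       # residual k (*) x - b
            Rf = np.fft.fft2(r)
            x -= lr_x * np.real(np.fft.ifft2(np.conj(Kf) * Rf))   # gradient w.r.t. x
            gk = np.real(np.fft.ifft2(np.conj(Xf) * Rf))          # gradient w.r.t. k
            k -= lr_k * gk / (np.abs(gk).max() + 1e-12)           # normalized step
            k = np.clip(k, 0.0, None)
            k /= k.sum() + 1e-12            # keep kernel non-negative with unit mass
        return x, k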
Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.
2010-01-01
Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475
Selecting the right digital camera for telemedicine-choice for 2009.
Patricoski, Chris; Ferguson, A Stewart; Brudzinski, Jay; Spargo, Garret
2010-03-01
Digital cameras are fundamental tools for store-and-forward telemedicine (electronic consultation). The choice of a camera may significantly impact this consultative process based on the quality of the images, the ability of users to leverage the cameras' features, and other facets of the camera design. The goal of this research was to provide a substantive framework and clearly defined process for reviewing digital cameras and to demonstrate the results obtained when employing this process to review point-and-shoot digital cameras introduced in 2009. The process included a market review, in-house evaluation of features, image reviews, functional testing, and feature prioritization. Seventy-two cameras were identified new on the market in 2009, and 10 were chosen for in-house evaluation. Four cameras scored very high for mechanical functionality and ease-of-use. The final analysis revealed three cameras that had excellent scores for both color accuracy and photographic detail and these represent excellent options for telemedicine: Canon Powershot SD970 IS, Fujifilm FinePix F200EXR, and Panasonic Lumix DMC-ZS3. Additional features of the Canon Powershot SD970 IS make it the camera of choice for our Alaska program.
NASA Astrophysics Data System (ADS)
Holland, S. Douglas
1992-09-01
A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.
Salau, J; Haas, J H; Thaller, G; Leisen, M; Junge, W
2016-09-01
Camera-based systems in dairy cattle have been intensively studied over recent years. In contrast to the present study, previous work mostly presented single-camera systems with a limited range of applications, mostly using 2D cameras. This study presents current steps in the development of a camera system comprising multiple 3D cameras (six Microsoft Kinect cameras) for monitoring purposes in dairy cows. An early prototype was constructed, and alpha versions of software for recording, synchronizing, sorting, and segmenting images and for transforming the 3D data into a joint coordinate system have already been implemented. This study introduced the application of two-dimensional wavelet transforms as a method for object recognition and surface analysis. The method was explained in detail, and four differently shaped wavelets were tested with respect to their reconstruction error on Kinect-recorded depth maps from different camera positions. The images' high-frequency parts, reconstructed from wavelet decompositions using the Haar and the biorthogonal 1.5 wavelets, were statistically analyzed with regard to the effects of image foreground versus background and of cows' versus persons' surfaces. Furthermore, binary classifiers based on the local high frequencies were implemented to decide whether a pixel belongs to the image foreground and whether it is located on a cow or a person. Classifiers distinguishing between image regions showed high (⩾0.8) values of the area under the receiver operating characteristic curve (AUC). The classification by species showed maximal AUC values of 0.69.
Optimum color filters for CCD digital cameras
NASA Astrophysics Data System (ADS)
Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl
1993-12-01
As part of the ESPRIT II project No. 2103 (MASCOT), a high-performance prototype color CCD still video camera was developed. Intended for professional usage, such as in the graphic arts, the camera provides a maximum resolution of 3k × 3k full-color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization, which minimized the perceivable color errors as measured in the 1976 CIELUV uniform color space for a set of about 200 carefully selected test colors. The color filters were designed to allow perfect colorimetric reproduction in principle, with imperceptible color noise and with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data, the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in a redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems feasible, implying that such an optimized color camera can achieve colorimetric performance so high that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.
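The matrixing stage can be illustrated by fitting a 3×3 transform over a set of test colors. For brevity, this sketch minimizes error in a linear space rather than the CIELUV space used in the paper, and all numeric values are synthetic.

    import numpy as np
    from scipy.optimize import least_squares

    def fit_color_matrix(camera_rgb, target_xyz):
        """Least-squares 3x3 color transform over a set of test colors."""
        def resid(m):
            M = m.reshape(3, 3)
            return (camera_rgb @ M.T - target_xyz).ravel()
        fit = least_squares(resid, x0=np.eye(3).ravel())
        return fit.x.reshape(3, 3)

    rng = np.random.default_rng(1)
    cam = rng.uniform(0, 1, (200, 3))                      # ~200 test colors
    M_true = np.array([[0.9, 0.1, 0.0], [0.05, 1.0, -0.05], [0.0, 0.1, 0.9]])
    xyz = cam @ M_true.T + rng.normal(0, 0.002, (200, 3))  # simulated camera pipeline
    print(fit_color_matrix(cam, xyz).round(3))             # recovers ~M_true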
San Juan National Forest Land Management Planning Support System (LMPSS) requirements definition
NASA Technical Reports Server (NTRS)
Werth, L. F. (Principal Investigator)
1981-01-01
The role of remote sensing data as it relates to a three-component land management planning system (geographic information, data base management, and planning model) can be understood only when user requirements are known. Personnel at the San Juan National Forest in southwestern Colorado were interviewed to determine data needs for managing and monitoring timber, rangelands, wildlife, fisheries, soils, water, geology and recreation facilities. While all the information required for land management planning cannot be obtained using remote sensing techniques, valuable information can be provided for the geographic information system. A wide range of sensors such as small and large format cameras, synthetic aperture radar, and LANDSAT data should be utilized. Because of the detail and accuracy required, high altitude color infrared photography should serve as the baseline data base and be supplemented and updated with data from the other sensors.
The PRISMA Hyperspectral Mission
NASA Astrophysics Data System (ADS)
Loizzo, R.; Ananasso, C.; Guarini, R.; Lopinto, E.; Candela, L.; Pisani, A. R.
2016-08-01
PRISMA (PRecursore IperSpettrale della Missione Applicativa) is an Italian Space Agency (ASI) hyperspectral mission currently scheduled for launch in 2018. PRISMA is a single satellite placed in a sun-synchronous Low Earth Orbit (620 km altitude) with an expected operational lifetime of 5 years. The hyperspectral payload consists of a high spectral resolution (VNIR-SWIR) imaging spectrometer, optically integrated with a medium resolution panchromatic camera. PRISMA will acquire data over a 30 km swath width with a Ground Sampling Distance (GSD) of 30 m (hyperspectral) and 5 m (panchromatic, PAN). The PRISMA Ground Segment will be geographically distributed between the Fucino station and the ASI Matera Space Geodesy Centre and will include the Mission Control Centre, the Satellite Control Centre and the Instrument Data Handling System. The science community supports the overall lifecycle of the mission, being involved in algorithm definition, calibration and validation activities, and research and applications development.
Rushinek, Avi; Rushinek, Sara; Lippincott, Christine; Ambrosia, Todd
2014-04-01
The aim of this article is to describe the repurposing of classroom video surveillance and on-screen archives (RCVSOSA) model, an innovative, technology-enabled approach to continuing education in nursing. The RCVSOSA model leverages networked Internet-protocol high-definition surveillance cameras to record videos of classroom lectures that can be automatically uploaded to the Internet or converted to DVD, either in their entirety or as content-specific modules, with the production work embedded in the technology. The proposed model supports health care continuing education through the use of online assessments for focused education modules, access to archived online recordings and DVD training courses, voice-to-text transcripts, and possibly continuing education modules that may be translated into multiple languages. Potential benefits of this model include increased access to educational modules for students, instant authorship, and financial compensation for instructors and their respective organizations.
NASA Technical Reports Server (NTRS)
Birisan, Mihnea; Beling, Peter
2011-01-01
New generations of surveillance drones are being outfitted with numerous high definition cameras. The rapid proliferation of fielded sensors and supporting capacity for processing and displaying data will translate into ever more capable platforms, but with increased capability comes increased complexity and scale that may diminish the usefulness of such platforms to human operators. We investigate methods for alleviating strain on analysts by automatically retrieving content specific to their current task using a machine learning technique known as Multi-Instance Learning (MIL). We use MIL to create a real time model of the analysts' task and subsequently use the model to dynamically retrieve relevant content. This paper presents results from a pilot experiment in which a computer agent is assigned analyst tasks such as identifying caravanning vehicles in a simulated vehicle traffic environment. We compare agent performance between MIL aided trials and unaided trials.
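As a rough illustration of the MIL formulation in this setting, the sketch below treats a group of video-frame feature vectors as a "bag" and labels a bag positive if at least one instance is task-relevant, training an instance scorer from bag-level labels with a simple mi-SVM-style alternation. The data, features, and training loop are synthetic assumptions for illustration, not the authors' pilot-experiment system.

```python
# Hedged MIL sketch: learn an instance scorer from bag-level labels only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# 40 bags of 10 instances x 5 features; positive bags hide one shifted instance
bags = rng.normal(0, 1, (40, 10, 5))
y_bag = np.array([1] * 20 + [0] * 20)
for b in np.flatnonzero(y_bag):
    bags[b, rng.integers(10)] += 3.0          # the single "relevant" instance

X = bags.reshape(-1, 5)
y = np.repeat(y_bag, 10).astype(float)        # start: every instance gets its bag label
clf = LogisticRegression(max_iter=1000)
for _ in range(5):                            # alternate: fit, then relabel positives
    clf.fit(X, y)
    s = clf.predict_proba(X)[:, 1].reshape(40, 10)
    y = y.reshape(40, 10)
    y[y_bag == 1] = 0                         # reset positive bags...
    y[y_bag == 1, np.argmax(s[y_bag == 1], axis=1)] = 1  # ...keep best instance positive
    y = y.ravel()

bag_scores = clf.predict_proba(X)[:, 1].reshape(40, 10).max(axis=1)
print("mean score, positive bags:", bag_scores[y_bag == 1].mean())
print("mean score, negative bags:", bag_scores[y_bag == 0].mean())
```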
SOUTH WING, TRA661. SOUTH SIDE. CAMERA FACING NORTH. MTR HIGH ...
SOUTH WING, TRA-661. SOUTH SIDE. CAMERA FACING NORTH. MTR HIGH BAY BEYOND. INL NEGATIVE NO. HD46-45-3. Mike Crane, Photographer, 4/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
New Modular Camera No Ordinary Joe
NASA Technical Reports Server (NTRS)
2003-01-01
Although dubbed 'Little Joe' for its small-format characteristics, a new wavefront sensor camera has proved that it is far from coming up short when paired with high-speed, low-noise applications. SciMeasure Analytical Systems, Inc., a provider of cameras and imaging accessories for use in biomedical research and industrial inspection and quality control, is the eye behind Little Joe's shutter, manufacturing and selling the modular, multi-purpose camera worldwide to advance fields such as astronomy, neurobiology, and cardiology.
High-frame-rate infrared and visible cameras for test range instrumentation
NASA Astrophysics Data System (ADS)
Ambrose, Joseph G.; King, B.; Tower, John R.; Hughes, Gary W.; Levine, Peter A.; Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; O'Mara, K.; Sjursen, W.; McCaffrey, Nathaniel J.; Pantuso, Francis P.
1995-09-01
Field deployable, high frame rate camera systems have been developed to support the test and evaluation activities at the White Sands Missile Range. The infrared cameras employ a 640 by 480 format PtSi focal plane array (FPA). The visible cameras employ a 1024 by 1024 format backside illuminated CCD. The monolithic, MOS architecture of the PtSi FPA supports commandable frame rate, frame size, and integration time. The infrared cameras provide 3 - 5 micron thermal imaging in selectable modes from 30 Hz frame rate, 640 by 480 frame size, 33 ms integration time to 300 Hz frame rate, 133 by 142 frame size, 1 ms integration time. The infrared cameras employ a 500 mm, f/1.7 lens. Video outputs are 12-bit digital video and RS170 analog video with histogram-based contrast enhancement. The 1024 by 1024 format CCD has a 32-port, split-frame transfer architecture. The visible cameras exploit this architecture to provide selectable modes from 30 Hz frame rate, 1024 by 1024 frame size, 32 ms integration time to 300 Hz frame rate, 1024 by 1024 frame size (with 2:1 vertical binning), 0.5 ms integration time. The visible cameras employ a 500 mm, f/4 lens, with integration time controlled by an electro-optical shutter. Video outputs are RS170 analog video (512 by 480 pixels), and 12-bit digital video.
Tsunoda, Koichi; Tsunoda, Atsunobu; Ishimoto, ShinnIchi; Kimura, Satoko
2006-01-01
Dedicated charge-coupled device (CCD) camera systems for endoscopes and electronic fiberscopes are in widespread use. However, both are usually stationary in an office or examination room, and a wheeled cart is needed for mobility. The total costs of the CCD camera system and electronic fiberscopy system are at least US Dollars 10,000 and US Dollars 30,000, respectively. Recently, the performance of audio and visual instruments has improved dramatically, with a concomitant reduction in their cost. Commercially available CCD video cameras with small monitors have become common. They provide excellent image quality and are much smaller and less expensive than previous models. The authors have developed adaptors for the popular mini-digital video (mini-DV) camera. The camera also provides video and acoustic output signals; therefore, the endoscopic images can be viewed on a large monitor simultaneously. The new system (a mini-DV video camera and an adaptor) costs only US Dollars 1,000. The system is therefore both cost-effective and useful for the outpatient clinic or casualty setting, or on house calls for the purpose of patient education. In the future, the authors plan to introduce the clinical application of a high-vision camera and an infrared camera as medical instruments for clinical and research situations.
Face recognition system for set-top box-based intelligent TV.
Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Park, Kang Ryoung
2014-11-18
Despite the prevalence of smart TVs, many consumers continue to use conventional TVs with supplementary set-top boxes (STBs) because of the high cost of smart TVs. However, because the processing power of a STB is quite low, the smart TV functionalities that can be implemented in a STB are very limited. Because of this, negligible research has been conducted regarding face recognition for conventional TVs with supplementary STBs, even though many such studies have been conducted with smart TVs. In terms of camera sensors, previous face recognition systems have used high-resolution cameras, cameras with high magnification zoom lenses, or camera systems with panning and tilting devices that can be used for face recognition from various positions. However, these cameras and devices cannot be used in intelligent TV environments because of limitations related to size and cost, and only small, low-cost web-cameras can be used. The resulting face recognition performance is degraded because of the limited resolution and quality levels of the images. Therefore, we propose a new face recognition system for intelligent TVs in order to overcome the limitations associated with a low-resource set-top box and low-cost web-cameras. We implement the face recognition system using a software algorithm that does not require special devices or cameras. Our research has the following four novelties: first, the candidate regions of a viewer's face are detected in an image captured by a camera connected to the STB via low-complexity background subtraction and face color filtering; second, the detected candidate face regions are transmitted to a server with high processing power in order to detect face regions accurately; third, in-plane rotations of the face regions are compensated based on similarities between the left and right half sub-regions of the face regions; fourth, various poses of the viewer's face region are identified using five templates obtained during the initial user registration stage and multi-level local binary pattern matching. Experimental results indicate that the recall, precision, and genuine acceptance rate were about 95.7%, 96.2%, and 90.2%, respectively.
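To make the final matching stage concrete, here is a minimal sketch of LBP-based template comparison, assuming grayscale face crops as NumPy arrays. The per-cell histogram grid is a coarse stand-in for the paper's multi-level LBP; the function names and the chi-square matching rule are illustrative assumptions.

```python
# Hedged sketch: compare a detected face region against registered templates
# using local binary pattern (LBP) histograms.
import numpy as np

def lbp_image(gray):
    """8-neighbour LBP code for each interior pixel of a 2-D array."""
    c = gray[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(gray, grid=4):
    """Concatenate per-cell LBP histograms (a coarse 'multi-level' stand-in)."""
    code = lbp_image(gray)
    h, w = code.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = code[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist / max(cell.size, 1))
    return np.concatenate(feats)

def match(face, templates):
    """Index of the best-matching registered template (chi-square distance)."""
    f = lbp_histogram(face)
    def chi2(a, b):
        return np.sum((a - b) ** 2 / (a + b + 1e-9))
    return min(range(len(templates)),
               key=lambda k: chi2(f, lbp_histogram(templates[k])))
```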
EAARL coastal topography--Alligator Point, Louisiana, 2010
Nayegandhi, Amar; Bonisteel-Cormier, J.M.; Wright, C.W.; Brock, J.C.; Nagle, D.B.; Vivekanandan, Saisudha; Fredericks, Xan; Barras, J.A.
2012-01-01
This project provides highly detailed and accurate datasets of a portion of Alligator Point, Louisiana, acquired on March 5 and 6, 2010. The datasets are made available for use as a management tool to research scientists and natural-resource managers. An innovative airborne lidar instrument originally developed at the National Aeronautics and Space Administration (NASA) Wallops Flight Facility, and known as the Experimental Advanced Airborne Research Lidar (EAARL), was used during data acquisition. The EAARL system is a raster-scanning, waveform-resolving, green-wavelength (532-nanometer) lidar designed to map near-shore bathymetry, topography, and vegetation structure simultaneously. The EAARL sensor suite includes the raster-scanning, water-penetrating full-waveform adaptive lidar, a down-looking red-green-blue (RGB) digital camera, a high-resolution multispectral color-infrared (CIR) camera, two precision dual-frequency kinematic carrier-phase GPS receivers, and an integrated miniature digital inertial measurement unit, which provide for sub-meter georeferencing of each laser sample. The nominal EAARL platform is a twin-engine aircraft, but the instrument was deployed on a Pilatus PC-6. A single pilot, a lidar operator, and a data analyst constitute the crew for most survey operations. This sensor has the potential to make significant contributions in measuring sub-aerial and submarine coastal topography within cross-environmental surveys. Elevation measurements were collected over the survey area using the EAARL system, and the resulting data were then processed using the Airborne Lidar Processing System (ALPS), a custom-built processing system developed in a NASA-USGS collaboration. ALPS supports the exploration and processing of lidar data in an interactive or batch mode. Modules for presurvey flight-line definition, flight-path plotting, lidar raster and waveform investigation, and digital camera image playback have been developed. Processing algorithms have been developed to extract the range to the first and last significant return within each waveform. ALPS is used routinely to create maps that represent submerged or sub-aerial topography. Specialized filtering algorithms have been implemented to determine the "bare earth" under vegetation from a point cloud of last return elevations.
Teleneonatology: a major tool for the future.
Minton, Stephen; Allan, Mark; Valdes, Wesley
2014-02-01
Hospitals have, for centuries, maintained a central position in the health care system, providing care for critically ill patients. Despite being a cornerstone of health care delivery, we are witnessing the beginning of a major transformation in their function. There are several forces driving this transformation, including health care costs, shortage of health care professionals, volume of people with chronic diseases, consumerism, health care reform, and hospital errors. The neonatal intensive care unit (NICU) at Utah Valley Regional Medical Center in Provo, Utah, began an aggressive redesign/quality improvement effort in 1990. It became obvious that our care processes were designed for health care deliverers and not for the families. An ongoing revamp of our care delivery processes was undertaken using significant input from a parent focus meeting, parental interviews, and development of a parent-to-parent support group. As a result of this work, it became obvious we needed a new model to truly empower parents. The idea of "NICU is Home" was born. We elected to make a mind shift, not to focus on what families think, but rather on how they think. Web cams and other video apparatus have been used in a number of NICUs across the country. We decided our equipment requirements would need to include high-resolution cameras, full high-definition video recording, autofocus, audio microphones, automatic noise reduction, and automatic low-light correction. Our conferencing software needed to accommodate multiple users and have multiple-picture capabilities, low bandwidth, and inexpensive technology. It was recognized that a single video camera feed was insufficient to adequately capture the desired amount of information. Verbal communication between parents and their babies' principal care providers is critical. Parents loved the idea of expanding the remote NICU web cam of their baby to a two-way physician-parent communication bedside monitor. Doctors at Utah Valley Regional Medical Center now have a mobile desk using a WiFi computer/camera/audio to communicate with the family in real-time or leave a recording. Copyright 2014, SLACK Incorporated.
Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems
NASA Technical Reports Server (NTRS)
Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.
2015-01-01
Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems, which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 μs, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
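The precision definition and the row-wise binning idea are easy to show in code. Below is a minimal sketch under stated assumptions (synthetic frames, Gaussian noise, an assumed 100 μm pixel pitch in the object plane): each "shot" yields one velocity from the centroid shift of the tagged line between two exposures 65 μs apart, and precision is the standard deviation over shots.

```python
# Hedged sketch: row-wise digital binning plus precision as the standard
# deviation of several hundred single-shot velocity estimates.
import numpy as np

def bin_rows(frame, n=8):
    """Sum each group of n adjacent rows (digital analogue of on-sensor binning)."""
    h = (frame.shape[0] // n) * n
    return frame[:h].reshape(h // n, n, frame.shape[1]).sum(axis=1)

def centroid(profile):
    """Intensity-weighted centre of a 1-D profile after background removal."""
    w = np.clip(profile - np.median(profile), 0, None)
    return (np.arange(profile.size) * w).sum() / w.sum()

rng = np.random.default_rng(1)
dt = 65e-6            # inter-frame delay, s (the 65 us quoted in the abstract)
pitch = 100e-6        # metres per pixel in the object plane (assumed)
v = []
for _ in range(500):  # several hundred single-shot measurements
    f1 = rng.normal(10, 2, (64, 128)); f1[:, 40:44] += 100.0  # tagged line, frame 1
    f2 = rng.normal(10, 2, (64, 128)); f2[:, 45:49] += 100.0  # shifted line, frame 2
    p1 = bin_rows(f1).sum(axis=0)   # collapse to a single emission profile
    p2 = bin_rows(f2).sum(axis=0)
    v.append((centroid(p2) - centroid(p1)) * pitch / dt)
print("mean velocity, m/s:", np.mean(v), " precision (std), m/s:", np.std(v))
```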
Compact streak camera for the shock study of solids by using the high-pressure gas gun
NASA Astrophysics Data System (ADS)
Nagayama, Kunihito; Mori, Yasuhito
1993-01-01
For the precise observation of high-speed impact phenomena, a compact high-speed streak camera recording system has been developed. The system consists of a high-pressure gas gun, a streak camera, and a long-pulse dye laser. The gas gun installed in our laboratory has a 40-mm-diameter muzzle and a 2-m-long launch tube. Projectile velocity is measured by the laser beam cut method. The gun is capable of accelerating a 27 g projectile up to 500 m/s if helium gas is used as the driver. The system has been designed on the principle that precise optical measurement methods developed in other areas of research can be applied to gun studies. The streak camera is 300 mm in diameter, with a rectangular rotating mirror driven by an air turbine spindle. The attainable streak velocity is 3 mm/μs. The camera is rather small, aiming at portability and economy; the streak velocity is therefore lower than that of faster cameras, but it is possible to use low-sensitivity, high-resolution film as the recording medium. We have also constructed a pulsed dye laser of 25-30 μs duration, which can be used as the light source for observation. The advantages of using the laser are multi-fold: good directivity, nearly single-frequency output, and so on. The feasibility of the system has been demonstrated by performing several experiments.
Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications
NASA Astrophysics Data System (ADS)
Olson, Gaylord G.; Walker, Jo N.
1997-09-01
Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a growing presence of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein means one which has low-level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, in some cases with a higher rate of measurements than video cameras. Several such applications are described here, including some which use recently designed CCDs offering good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application are such effects as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.
Super-Resolution in Plenoptic Cameras Using FPGAs
Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime
2014-01-01
Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field programmable graphic array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes. PMID:24841246
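Hardware aside, the core super-resolution operation can be sketched briefly. The following is a rough NumPy illustration of shift-and-add super-resolution, assuming sub-aperture views with known sub-pixel shifts have already been extracted from the plenoptic capture; the paper's actual FPGA/VHDL pipeline and its generics-based parameterization are not reproduced here.

```python
# Hedged sketch: interleave sub-pixel-shifted low-resolution views onto a
# finer grid (shift-and-add super-resolution).
import numpy as np

def shift_and_add(views, shifts, factor=2):
    """Accumulate low-res views onto a grid `factor` times finer."""
    h, w = views[0].shape
    hi = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(hi)
    for v, (dy, dx) in zip(views, shifts):
        # nearest high-res cell for each low-res pixel; wrap at borders for simplicity
        ys = (np.arange(h)[:, None] * factor + int(round(dy * factor))) % (h * factor)
        xs = (np.arange(w)[None, :] * factor + int(round(dx * factor))) % (w * factor)
        hi[ys, xs] += v
        weight[ys, xs] += 1
    return hi / np.maximum(weight, 1)

# Example call with four half-pixel-shifted stand-in views of one scene
rng = np.random.default_rng(7)
scene = rng.random((32, 32))
views = [scene, scene, scene, scene]
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
sr = shift_and_add(views, shifts)
```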
Improved spatial resolution of luminescence images acquired with a silicon line scanning camera
NASA Astrophysics Data System (ADS)
Teal, Anthony; Mitchell, Bernhard; Juhl, Mattias K.
2018-04-01
Luminescence imaging is currently being used to provide spatially resolved defect information in high-volume silicon solar cell production. One option to obtain the high throughput required for on-the-fly detection is the use of silicon line scan cameras. However, when using a silicon-based camera, the spatial resolution is reduced as a result of weakly absorbed light scattering within the camera's chip. This paper addresses this issue by applying deconvolution with a measured point spread function. The paper extends the methods for determining the point spread function of a silicon area camera to a line scan camera with charge transfer. The improvement in resolution is quantified in the Fourier domain and in the spatial domain on an image of a multicrystalline silicon brick. It is found that light spreading beyond the active sensor area is significant in line scan sensors, but can be corrected for through normalization of the point spread function. The application of this method improves the raw data, allowing effective detection of spatially resolved defects in manufacturing.
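A minimal sketch of the correction step, assuming a measured PSF is available as an array: the image is deconvolved in the frequency domain with a Wiener filter, and the PSF is normalized to unit gain so that light spreading beyond the active area does not bias the restored intensities. The data, PSF shape, and noise-to-signal constant below are synthetic assumptions.

```python
# Hedged sketch: Wiener deconvolution of a luminescence image with a
# normalized, measured point spread function.
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-3):
    """Frequency-domain Wiener deconvolution with noise-to-signal ratio nsr."""
    psf = psf / psf.sum()                       # normalize PSF to unit gain
    H = np.fft.rfft2(np.fft.ifftshift(psf), s=image.shape)
    G = np.fft.rfft2(image)
    F = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.fft.irfft2(F, s=image.shape)

# Synthetic check: blur a sharp feature with a long-tailed PSF, then restore it
rng = np.random.default_rng(2)
img = np.zeros((128, 128)); img[60:68, 60:68] = 1.0
y, x = np.mgrid[-64:64, -64:64]
psf = np.exp(-np.hypot(x, y) / 3.0)             # exponential tail mimics light spreading
H = np.fft.rfft2(np.fft.ifftshift(psf / psf.sum()), s=img.shape)
blurred = np.fft.irfft2(np.fft.rfft2(img) * H, s=img.shape)
blurred += rng.normal(0, 1e-3, img.shape)       # camera noise
restored = wiener_deconvolve(blurred, psf)
print("edge sharpness before/after:", np.abs(np.diff(blurred[64])).max(),
      np.abs(np.diff(restored[64])).max())
```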
Continuous monitoring of Hawaiian volcanoes with thermal cameras
Patrick, Matthew R.; Orr, Tim R.; Antolik, Loren; Lee, Robert Lopaka; Kamibayashi, Kevan P.
2014-01-01
Continuously operating thermal cameras are becoming more common around the world for volcano monitoring, and offer distinct advantages over conventional visual webcams for observing volcanic activity. Thermal cameras can sometimes “see” through volcanic fume that obscures views to visual webcams and the naked eye, and often provide a much clearer view of the extent of high temperature areas and activity levels. We describe a thermal camera network recently installed by the Hawaiian Volcano Observatory to monitor Kīlauea’s summit and east rift zone eruptions (at Halema‘uma‘u and Pu‘u ‘Ō‘ō craters, respectively) and to keep watch on Mauna Loa’s summit caldera. The cameras are long-wave, temperature-calibrated models protected in custom enclosures, and often positioned on crater rims close to active vents. Images are transmitted back to the observatory in real-time, and numerous Matlab scripts manage the data and provide automated analyses and alarms. The cameras have greatly improved HVO’s observations of surface eruptive activity, which includes highly dynamic lava lake activity at Halema‘uma‘u, major disruptions to Pu‘u ‘Ō‘ō crater and several fissure eruptions.
Coincidence ion imaging with a fast frame camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei
2014-12-15
A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
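The centroiding and intensity/peak-height correlation can be sketched compactly. The snippet below labels bright spots on a synthetic frame, computes intensity-weighted centroids, and pairs spots with PMT peaks by amplitude rank; the thresholds, noise levels, and rank-based pairing rule are illustrative assumptions, not the authors' real-time algorithms.

```python
# Hedged sketch: spot centroiding plus amplitude-rank pairing of camera spots
# with PMT time-of-flight peaks.
import numpy as np
from scipy import ndimage

def centroid_spots(frame, threshold):
    """Label connected bright regions; return (x, y, integrated intensity)."""
    labels, n = ndimage.label(frame > threshold)
    idx = list(range(1, n + 1))
    coms = ndimage.center_of_mass(frame, labels, idx)   # (row, col) per spot
    sums = ndimage.sum(frame, labels, idx)
    return [(cx, cy, s) for (cy, cx), s in zip(coms, sums)]

def pair_with_tof(spots, pmt_peaks):
    """Match spots to PMT time-of-flight peaks by rank of amplitude."""
    spots = sorted(spots, key=lambda s: -s[2])
    peaks = sorted(pmt_peaks, key=lambda p: -p[1])      # (arrival time, height)
    return [(x, y, t) for (x, y, _), (t, _) in zip(spots, peaks)]

rng = np.random.default_rng(3)
frame = rng.normal(5.0, 1.0, (256, 256))
frame[100:104, 50:54] += 80.0      # brighter spot
frame[200:204, 180:184] += 40.0    # dimmer spot
hits = pair_with_tof(centroid_spots(frame, 20.0),
                     [(3.2e-6, 75.0), (1.4e-6, 42.0)])
print(hits)                        # [(x, y, arrival_time), ...]
```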
Cameron, M. H.; Newstead, S. V.; Diamantopoulou, K.; Oxley, P.
2003-01-01
The objective was to measure the presence of any interaction between the effect of mobile covert speed camera enforcement and the effect of intensive mass media road safety publicity with speed-related themes. During 1999, the Victoria Police varied the levels of speed camera activity substantially in four Melbourne police districts according to a systematic plan. Camera hours were increased or reduced by 50% or 100% in respective districts for a month at a time, during months when speed-related publicity was present and during months when it was absent. Monthly frequencies of casualty crashes, and their severe injury outcome, in each district during 1996–2000 were analysed to test the effects of the enforcement, publicity and their interaction. Reductions in crash frequency were associated monotonically with increasing levels of speed camera ticketing, and there was a statistically significant 41% reduction in fatal crash outcome associated with very high camera activity. High publicity awareness was associated with 12% reduction in crash frequency. The interaction between the enforcement and publicity was not statistically significant. PMID:12941230
Photogrammetric calibration of the NASA-Wallops Island image intensifier system
NASA Technical Reports Server (NTRS)
Harp, B. F.
1972-01-01
An image intensifier was designed for use as one of the primary tracking systems for the barium cloud experiment at Wallops Island. Two computer programs, a definitive stellar camera calibration program and a geodetic stellar camera orientation program, were originally developed at Wallops on a GE 625 computer. A mathematical procedure for determining the image intensifier distortions is outlined, and the implementation of the model in the Wallops computer programs is described. The analytical calibration of metric cameras is also discussed.
Generating High resolution surfaces from images: when photogrammetry and applied geophysics meets
NASA Astrophysics Data System (ADS)
Bretar, F.; Pierrot-Deseilligny, M.; Schelstraete, D.; Martin, O.; Quernet, P.
2012-04-01
Airborne digital photogrammetry has been used for some years to create digital models of the Earth's topography from calibrated cameras. In recent years, the use of non-professional digital cameras has become a valuable way to reconstruct topographic surfaces. Today, the multi-megapixel resolution of non-professional digital cameras, used either in a close-range configuration or from low-altitude flights, provides a ground pixel size ranging from a fraction of a millimeter to a couple of centimeters. Such advances turned into reality because the data processing chain made a tremendous breakthrough during the last five years. This study investigates the potential of the open-source software MICMAC, developed by the French national survey IGN (http://www.micmac.ign.fr), to calibrate unoriented digital images and calculate surface models of extremely high resolution for Earth science purposes. We report two experiments performed in 2011. The first was performed in the context of risk assessment of rock falls and landslides along the cliffs of the Normandy seashore. The acquisition protocol for the first site, Criel-sur-Mer, was very simple: a walk along the vertical chalk cliffs, taking photos with an 18 mm focal length approximately every 50 m with an overlap of 80%, allowed us to generate 2.5 km of digital surface at centimeter resolution. The site of "Les Vaches Noires" was more complicated to acquire because of both the geology (dark clays) and the geometry (the landslide direction is parallel to the seashore and has a large depth of field from the shore). We therefore developed an innovative device mounted on board an autogyro (in between an ultralight power-driven aircraft and a helicopter). The entire area was surveyed with a 70 mm focal length at 400 m asl with a ground pixel of 3 cm. MICMAC gives the possibility to directly georeference the digital models; here, this was performed with a network of wireless GPS receivers called Geocubes, also developed at IGN. The second experiment was part of field measurements performed over the flanks of the Piton de la Fournaise volcano, La Réunion island. In order to characterize the roughness of different types of lava flows, extremely high resolution digital terrain models (0.6 mm) were generated with MICMAC. The use of such high-definition topography made the characterization possible through the calculation of the correlation length, the standard deviation and the fractal dimension. To conclude, we sketch a synthesis of the needs of geoscientists vs. the optimal resolution of digital topographic data.
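For the roughness characterization mentioned at the end, a small sketch may help: on a detrended 1-D profile extracted from the DTM, RMS roughness is the standard deviation of the heights, and the correlation length is taken here as the lag at which the autocorrelation falls to 1/e. The profile is synthetic and the 1/e convention is an assumption; the fractal dimension estimate is omitted.

```python
# Hedged sketch: RMS roughness and correlation length of a detrended profile.
import numpy as np

def roughness_metrics(z, dx):
    """Return (RMS roughness, correlation length) for heights z sampled every dx."""
    i = np.arange(z.size)
    z = z - np.polyval(np.polyfit(i, z, 1), i)     # remove linear trend
    sigma = z.std()                                # RMS roughness
    ac = np.correlate(z, z, mode="full")[z.size - 1:]
    ac /= ac[0]                                    # normalized autocorrelation
    below = np.flatnonzero(ac < 1 / np.e)
    corr_len = below[0] * dx if below.size else np.nan
    return sigma, corr_len

rng = np.random.default_rng(6)
profile = np.cumsum(rng.normal(0, 1e-3, 2000))     # random-walk style surface
print(roughness_metrics(profile, dx=6e-4))         # dx = 0.6 mm ground pixel
```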
NASA Astrophysics Data System (ADS)
Townsend, D. W.
1988-06-01
In 1982 the first prototype high density avalanche chamber (HIDAC) positron camera became operational in the Division of Nuclear Medicine of Geneva University Hospital. The camera consisted of dual 20 cm × 20 cm HIDAC detectors mounted on a rotating gantry. In 1984, these detectors were replaced by 30 cm × 30 cm detectors with improved performance and reliability. Since then, the larger detectors have undergone clinical evaluation. This article discusses certain aspects of the evaluation program and the conclusions that can be drawn from the results. The potential of the HIDAC camera for quantitative positron emission tomography (PET) is critically examined, and its performance compared with a state-of-the-art, commercial ring camera. Guidelines for the design of a future HIDAC camera are suggested.
NASA Astrophysics Data System (ADS)
Michaelis, Dirk; Schroeder, Andreas
2012-11-01
Tomographic PIV has triggered vivid activity, reflected in a large number of publications covering both the development of the technique and a wide range of fluid dynamic experiments. The maturing of tomo PIV allows application in medium- to large-scale wind tunnels. The limiting factor for wind tunnel application is the small size of the measurement volume, typically about 50 × 50 × 15 mm3. The aim of this study is optimization towards large measurement volumes and high spatial resolution, performing cylinder wake measurements in a 1 meter wind tunnel. The main limiting factors for the volume size are the laser power and the camera sensitivity, so a high power laser with 800 mJ per pulse is used together with low-noise sCMOS cameras, mounted in the forward scattering direction to gain intensity from the Mie scattering characteristics. A mirror is used to bounce the light back, so that all cameras are in forward scattering. The achievable particle density grows with the number of cameras, so eight cameras are used for high spatial resolution. These optimizations lead to a volume size of 230 × 200 × 52 mm3 = 2392 cm3, more than 60 times larger than previously. 281 × 323 × 68 vectors are calculated with a spacing of 0.76 mm. The achieved measurement volume size and spatial resolution are regarded as a major step forward in the application of tomo PIV in wind tunnels. Supported by EU project no. 265695.
Precise color images a high-speed color video camera system with three intensified sensors
NASA Astrophysics Data System (ADS)
Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.
1999-06-01
High speed imaging systems have been used in a wide range of fields in science and engineering. Although high speed camera systems have improved greatly, most applications only produce high speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasmas and molten materials. Recent digital high speed video imaging technology should be able to extract such information from these objects. For this purpose, we have already developed a high speed video camera system with three intensified sensors and a cubic prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 X 64 pixels and 4,500 pps at 256 X 256 pixels, with 256 (8 bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid state memory. In order to obtain precise color images from this camera system, we need a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, the displacement was adjusted to within 0.2 pixels by this method.
Vibration extraction based on fast NCC algorithm and high-speed camera.
Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an
2015-09-20
In this study, a high-speed camera system is developed to perform vibration measurements in real time and to avoid the mass loading introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera which can capture as many as 1000 frames per second. In order to process the captured images in the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm accomplishes one displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
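A minimal sketch of the two ingredients, assuming OpenCV and grayscale float32 frames: normalized cross-correlation restricted to a small search window around the last known position, followed by three-point parabolic interpolation for sub-pixel accuracy. The window size and the interpolation scheme are illustrative assumptions rather than the paper's exact algorithm.

```python
# Hedged sketch: NCC template tracking with a local search window and
# parabolic sub-pixel peak interpolation.
import cv2
import numpy as np

def track(frame, template, last_xy, search=24):
    """Sub-pixel template position, searching only near the last known position."""
    h, w = template.shape
    x0 = max(int(last_xy[0]) - search, 0)
    y0 = max(int(last_xy[1]) - search, 0)
    roi = frame[y0:y0 + h + 2 * search, x0:x0 + w + 2 * search]
    ncc = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (px, py) = cv2.minMaxLoc(ncc)   # integer-pixel correlation peak

    def subpix(r, p):
        """Three-point parabolic interpolation of a correlation peak."""
        if 0 < p < r.size - 1:
            d = r[p - 1] - 2 * r[p] + r[p + 1]
            return p + 0.5 * (r[p - 1] - r[p + 1]) / d if d != 0 else float(p)
        return float(p)

    return x0 + subpix(ncc[py, :], px), y0 + subpix(ncc[:, px], py)
```

Restricting the correlation to a (template + 2 x search) region is what cuts the per-frame cost: the work scales with the window area rather than the full frame, which is consistent with the order-of-magnitude speed-up the abstract reports.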
Design of a high-numerical-aperture digital micromirror device camera with high dynamic range.
Qiao, Yang; Xu, Xiping; Liu, Tao; Pan, Yue
2015-01-01
A high-NA imaging system with high dynamic range is presented, based on a digital micromirror device (DMD). The DMD camera consists of an objective imaging system and a relay imaging system, connected by a DMD chip. With the introduction of a total internal reflection prism system, the objective imaging system is designed with a working F/# of 1.97, breaking through the F/2.45 limitation of conventional DMD projection lenses. As for the relay imaging system, an off-axis design that corrects the off-axis aberrations of the tilted relay imaging system is developed. This structure has the advantage of increasing the NA of the imaging system while maintaining a compact size. Investigation revealed that the dynamic range of a DMD camera could be greatly increased, by a factor of 2.41. We built a prototype DMD camera with a working F/# of 1.23, and field experiments proved the validity and reliability of our work.
NASA Astrophysics Data System (ADS)
Gelderblom, Erik C.; Vos, Hendrik J.; Mastik, Frits; Faez, Telli; Luan, Ying; Kokhuis, Tom J. A.; van der Steen, Antonius F. W.; Lohse, Detlef; de Jong, Nico; Versluis, Michel
2012-10-01
The Brandaris 128 ultra-high-speed imaging facility has been updated over the last 10 years through modifications made to the camera's hardware and software. At its introduction the camera was able to record 6 sequences of 128 images (500 × 292 pixels) at a maximum frame rate of 25 Mfps. The segmented mode of the camera was revised to allow for subdivision of the 128 image sensors into arbitrary segments (1-128) with an inter-segment time of 17 μs. Furthermore, a region of interest can be selected to increase the number of recordings within a single run of the camera from 6 up to 125. By extending the imaging system with a laser-induced fluorescence setup, time-resolved ultra-high-speed fluorescence imaging of microscopic objects has been enabled. Minor updates to the system are also reported here.
High-Resolution Large Field-of-View FUV Compact Camera
NASA Technical Reports Server (NTRS)
Spann, James F.
2006-01-01
The need for a high resolution camera with a large field of view, capable of imaging dim emissions in the far-ultraviolet, is driven by the widely varying intensities of FUV emissions and the spatial/temporal scales of phenomena of interest in the Earth's ionosphere. In this paper, the concept of a camera is presented that is designed to achieve these goals in a lightweight package with sufficient visible light rejection to be useful for dayside and nightside emissions. The camera employs the concept of self-filtering to achieve good spectral resolution tuned to specific wavelengths. The large field of view is sufficient to image the Earth's disk at geosynchronous altitudes, with a spatial resolution of about 20 km. The optics and filters are emphasized.
Pulsed-neutron imaging by a high-speed camera and center-of-gravity processing
NASA Astrophysics Data System (ADS)
Mochiki, K.; Uragaki, T.; Koide, J.; Kushima, Y.; Kawarabayashi, J.; Taketani, A.; Otake, Y.; Matsumoto, Y.; Su, Y.; Hiroi, K.; Shinohara, T.; Kai, T.
2018-01-01
Pulsed-neutron imaging is an attractive technique in the research field of energy-resolved neutron radiography; RANS (RIKEN) and RADEN (J-PARC/JAEA) are small and large accelerator-driven pulsed-neutron facilities for such imaging, respectively. To overcome the insufficient spatial resolution of counting-type imaging detectors such as the μNID, nGEM and pixelated detectors, camera detectors combined with a neutron color image intensifier were investigated. At RANS, a center-of-gravity technique was applied to the spot images obtained by a CCD camera, and the technique was confirmed to be effective for improving spatial resolution. At RADEN, a high-frame-rate CMOS camera was used, a super-resolution technique was applied, and the spatial resolution was further improved.
HST High Gain Antennae photographed by Electronic Still Camera
1993-12-04
S61-E-021 (7 Dec 1993) --- This close-up view of one of two High Gain Antennae (HGA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members have been working in alternating pairs outside Endeavour's shirt-sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Situational Awareness from a Low-Cost Camera System
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.; Ward, David; Lesage, John
2010-01-01
A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, through using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing without ignoring surveillance areas, as can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
ERIC Educational Resources Information Center
Lee, Victor R.
2015-01-01
Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the use of technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video,…
Kioleoglou, Ioannis; Pissiotis, Argirios
2018-01-01
Background: The purpose of this study was to evaluate the accuracy of fit of an implant-supported screw-retained bar made on definitive casts produced by 4 different dental stone products. Material and Methods: The dental stones tested were QuickRock (Protechno), FujiRock (GC), Jade Stone (Whip Mix) and Moldasynt (Heraeus). Three external hexagon implants were placed in a polyoxymethylene block. Definitive impressions were made using monophase high-viscosity polyvinylsiloxane in combination with custom trays. Then, definitive models from the different types of dental stones were fabricated. Three castable cylinders with a machined non-engaging base were cast and connected with a very small quantity of PMMA to a cast bar, which was used to verify the marginal discrepancies between the abutments and the prosthetic platforms of the implants. For that purpose, special software and a camera mounted on an optical microscope were used. The gap was measured by taking 10 measurements on each abutment after the Sheffield test was applied. Twelve definitive casts were fabricated for each gypsum product and 40 measurements were performed for each cast. Mean, minimum, and maximum values were calculated. The Shapiro-Wilk test of normality was performed. The Mann-Whitney test (P<.06) was used for the statistical analysis of the measurements. Results: The non-parametric Kruskal-Wallis test revealed a statistically significant effect of the stone factor on the marginal discrepancy for all Sheffield test combinations: 1. Abutment 2 when the screw was fastened on abutment 1 (χ2=35.33, df=3, P<0.01); 2. Abutment 3 when the screw was fastened on abutment 1 (χ2=37.74, df=3, P<0.01); 3. Abutment 1 when the screw was fastened on abutment 3 (χ2=39.79, df=3, P<0.01); 4. Abutment 2 when the screw was fastened on abutment 3 (χ2=37.26, df=3, P<0.01). Conclusions: A significant correlation exists between marginal discrepancy and the different dental gypsum products used for the fabrication of definitive casts for implant-supported bars. The smallest marginal discrepancy was noted on implant-supported bars fabricated on definitive casts made with Type III mounting stone. The biggest marginal discrepancy was noted on implant-supported bars fabricated on definitive casts made with Type V dental stone. The marginal discrepancies of implant-supported bars fabricated on definitive casts made with the two Type IV dental stones were not significantly different. Key words: Dental implant, passive fit, dental stones, marginal discrepancy. PMID:29721227
A compact high-speed pnCCD camera for optical and x-ray applications
NASA Astrophysics Data System (ADS)
Ihle, Sebastian; Ordavo, Ivan; Bechteler, Alois; Hartmann, Robert; Holl, Peter; Liebel, Andreas; Meidinger, Norbert; Soltau, Heike; Strüder, Lothar; Weber, Udo
2012-07-01
We developed a camera with a 264 × 264 pixel pnCCD with 48 μm pixel size (thickness 450 μm) for X-ray and optical applications. It has a high quantum efficiency and can be operated at frame rates up to 400 / 1000 Hz (noise approximately 2.5 / 4.0 electrons ENC, respectively). High-speed astronomical observations can be performed at low light levels. Results of test measurements will be presented. The camera is well suited for ground-based preparation measurements for future X-ray missions. For X-ray single photons, the spatial position can be determined with significant sub-pixel resolution.
Optical registration of spaceborne low light remote sensing camera
NASA Astrophysics Data System (ADS)
Li, Chong-yang; Hao, Yan-hui; Xu, Peng-mei; Wang, Dong-jie; Ma, Li-na; Zhao, Ying-long
2018-02-01
To meet the high-precision requirements of optical registration for a spaceborne low-light remote sensing camera, dual-channel optical registration of a CCD and an EMCCD is achieved with a high-magnification optical registration system. A system-integration optical registration scheme, and its accuracy, for a spaceborne low-light remote sensing camera with short focal depth and wide field of view is proposed in this paper, including an analysis of the parallel misalignment of the CCD and of the registration accuracy. Actual registration results show that imaging is clear and that the MTF and registration accuracy meet requirements, providing an important guarantee of high-quality image data in orbit.
NASA Astrophysics Data System (ADS)
Theule, Joshua; Crema, Stefano; Comiti, Francesco; Cavalli, Marco; Marchi, Lorenzo
2015-04-01
Large scale particle image velocimetry (LSPIV) is a technique mostly used in rivers to measure two-dimensional velocities from high resolution images at high frame rates. This technique still needs to be thoroughly explored in the field of debris flow studies. The Gadria debris flow monitoring catchment in Val Venosta (Italian Alps) has been equipped with four MOBOTIX M12 video cameras. Two cameras are located in a sediment trap close to the alluvial fan apex, one looking upstream and the other looking down and more perpendicular to the flow. The third camera is in the next reach upstream of the sediment trap, in closer proximity to the flow. These three cameras are connected to a field shelter equipped with a power supply and a server collecting all the monitoring data. The fourth camera is located in an active gully and is activated by a rain gauge when there is one minute of rainfall. Before LSPIV can be used, the highly distorted images need to be corrected and accurate reference points need to be established. We decided to use IMGRAFT (an open-source image georectification toolbox), which corrects distorted images using reference points and the camera location, and then rectifies the batch of images onto a DEM grid (or the DEM grid onto the image coordinates). With the orthorectified images, we used the freeware Fudaa-LSPIV (developed by EDF, IRSTEA, and the DeltaCAD Company) to generate the LSPIV calculations of the flow events. Calculated velocities can easily be checked manually because the images are already orthorectified. During the monitoring program (since 2011) we recorded three debris flow events at the sediment trap area, each with very different surge dynamics. The camera in the gully was in operation in 2014 and recorded granular flows and rockfalls, for which particle tracking may be more appropriate for velocity measurements. The four cameras allow us to explore the limitations of camera distance, angle, frame rate, and image quality.
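The LSPIV core applied to orthorectified frames can be sketched in a few lines. Below, the displacement of one interrogation window between two frames is located as the peak of an FFT-based cross-correlation and converted to velocity using the ground pixel size and frame interval; the window size, pixel size, and frame interval are placeholder assumptions, and Fudaa-LSPIV implements this (plus filtering and validation) in a tested way.

```python
# Hedged sketch: one-window cross-correlation velocimetry on two
# already-orthorectified frames.
import numpy as np

def window_velocity(f1, f2, ground_px=0.05, dt=0.5):
    """Velocity (m/s) of one interrogation window between two frames."""
    a = f1 - f1.mean()
    b = f2 - f2.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap displacements larger than half the window to negative shifts
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dx * ground_px / dt, dy * ground_px / dt
```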
High frequency modal identification on noisy high-speed camera data
NASA Astrophysics Data System (ADS)
Javh, Jaka; Slavič, Janko; Boltežar, Miha
2018-01-01
Vibration measurements using optical full-field systems based on high-speed footage are typically heavily burdened by noise, as the displacement amplitudes of the vibrating structures are often very small (in the range of micrometers, depending on the structure). The modal information is troublesome to measure, as the structure's response is close to, or below, the noise level of the camera-based measurement system. This paper demonstrates modal parameter identification for such noisy measurements. It is shown that by using the Least-Squares Complex-Frequency method combined with the Least-Squares Frequency-Domain method, identification at high frequencies is still possible. By additionally incorporating a more precise sensor to identify the eigenvalues, a hybrid accelerometer/high-speed camera mode shape identification is possible even below the noise floor. An accelerometer measurement is used to identify the eigenvalues, while the camera measurement is used to produce the full-field mode shapes close to 10 kHz. The identified modal parameters improve the quality of the measured modal data and serve as a reduced model of the structure's dynamics.
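The hybrid idea admits a compact single-mode illustration: take the pole (eigenvalue) from the accelerometer as known, then fit the complex residues of the noisy camera FRFs pixel by pixel by linear least squares, which is the Least-Squares Frequency-Domain step. Everything below is synthetic and single-mode, a simplification of the paper's multi-mode identification.

```python
# Hedged sketch: accelerometer-identified pole + least-squares residue fit
# of noisy per-pixel camera FRFs (single-mode LSFD illustration).
import numpy as np

rng = np.random.default_rng(4)
fn, zeta = 800.0, 0.01                        # mode identified from accelerometer
wn = 2 * np.pi * fn
lam = -zeta * wn + 1j * wn * np.sqrt(1 - zeta ** 2)   # pole

w = 2 * np.pi * np.linspace(700, 900, 400)    # frequency lines, rad/s
true_shape = np.sin(np.linspace(0, np.pi, 50))        # mode shape over 50 pixels
H = true_shape[:, None] / (1j * w - lam) \
    + np.conj(true_shape)[:, None] / (1j * w - np.conj(lam))
H += rng.normal(0, 5e-5, H.shape)             # heavy camera noise

# least-squares residues: H(w) = r/(iw - lam) + conj(r)/(iw - conj(lam))
basis = np.stack([1 / (1j * w - lam), 1 / (1j * w - np.conj(lam))], axis=1)
r, *_ = np.linalg.lstsq(basis, H.T, rcond=None)
shape = r[0]                                  # residue row = mode shape estimate
corr = abs(np.vdot(shape, true_shape)) / (np.linalg.norm(shape)
                                           * np.linalg.norm(true_shape))
print("correlation with true mode shape:", corr)
```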
NASA Astrophysics Data System (ADS)
Williams, B. P.; Kjellstrand, B.; Jones, G.; Reimuller, J. D.; Fritts, D. C.; Miller, A.; Geach, C.; Limon, M.; Hanany, S.; Kaifler, B.; Wang, L.; Taylor, M. J.
2017-12-01
PMC-Turbo is a NASA long-duration, high-altitude balloon mission that will deploy 7 high-resolution cameras to image polar mesospheric clouds (PMC) and measure gravity wave breakdown and turbulence. The mission has been enhanced by the addition of the DLR Balloon Lidar Experiment (BOLIDE) and an OH imager from Utah State University. This instrument suite will provide high horizontal and vertical resolution of the wave-modified PMC structure along a several thousand kilometer flight track. We have requested a flight from Kiruna, Sweden to Canada in June 2017 or McMurdo Base, Antarctica in Dec 2017. Three of the PMC camera systems were deployed on an aircraft and two tomographic ground sites for the High Level campaign in Canada in June/July 2017. On several nights the cameras observed PMC's with strong gravity wave breaking signatures. One PMC camera will piggyback on the Super Tiger mission scheduled to be launched in Dec 2017 from McMurdo, so we will obtain PMC images and wave/turbulence data from both the northern and southern hemispheres.
Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.
Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio
2009-01-01
3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and are reported in this paper. In particular, two main aspects are treated. The first is the calibration of the distance measurements of the SR-4000 camera, which covers evaluation of the camera warm-up time period, evaluation of the distance measurement error, and a study of the influence on distance measurements of the camera orientation with respect to the observed object. The second aspect concerns the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high-contrast targets.
Measuring frequency of one-dimensional vibration with video camera using electronic rolling shutter
NASA Astrophysics Data System (ADS)
Zhao, Yipeng; Liu, Jinyue; Guo, Shijie; Li, Tiejun
2018-04-01
Cameras offer a unique capability of collecting high-density spatial data from a distant scene of interest. They can be employed as remote monitoring or inspection sensors to measure vibrating objects because of their commonplace availability, simplicity, and potentially low cost. A drawback of vibration measurement with a camera is the massive volume of data it generates. In order to reduce the data collected from the camera, a camera using an electronic rolling shutter (ERS) is applied to measure the frequency of one-dimensional vibration whose frequency is much higher than the frame rate of the camera. Every row in an image captured by the ERS camera records the vibration displacement at a different time. Those displacements, which together trace the vibration, can be extracted by local analysis with sliding windows. The methodology is demonstrated on vibrating structures, a cantilever beam, and an air compressor to validate the proposed algorithm. Suggestions for applications of this methodology and challenges in real-world implementation are given at the end.
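A minimal sketch of the row-time idea, under assumed numbers: with an electronic rolling shutter, successive rows are exposed a fixed interval apart, so a per-row displacement estimate forms a time series sampled far faster than the frame rate, and the vibration frequency follows from its spectrum. The row interval, row count, and signal below are synthetic assumptions.

```python
# Hedged sketch: recover a vibration frequency well above the frame rate from
# row-wise displacements within a single rolling-shutter frame.
import numpy as np

t_row = 20e-6                  # assumed row readout interval, s
rows = 1080                    # one frame spans rows * t_row = 21.6 ms
f_vib = 480.0                  # vibration frequency, Hz (far above a 30 Hz frame rate)
t = np.arange(rows) * t_row    # exposure time stamp of each row within the frame
disp = 3.0 * np.sin(2 * np.pi * f_vib * t) \
    + np.random.default_rng(5).normal(0, 0.2, rows)   # per-row displacement estimates

spec = np.abs(np.fft.rfft(disp - disp.mean()))
freqs = np.fft.rfftfreq(rows, d=t_row)
# the estimate is quantized by the short record; windowing/zero-padding refines it
print("estimated frequency, Hz:", freqs[np.argmax(spec)])
```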
Real-time vehicle matching for multi-camera tunnel surveillance
NASA Astrophysics Data System (ADS)
Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried
2011-03-01
Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras, each observing dozens of vehicles, computational efficiency is essential for real-time performance. In this paper, we propose a low-complexity yet highly accurate method for vehicle matching using vehicle signatures composed of Radon-transform-like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm by the camera software itself and transmitted to the central server, or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
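The signature itself is simple enough to sketch: row and column sums of the vehicle image (computable in a single scan-line pass) are resampled to a fixed length, normalized, and compared by correlation. The sizes and the matching rule are illustrative assumptions; the paper's Radon-like profiles may differ in detail.

```python
# Hedged sketch: projection-profile vehicle signatures matched by correlation.
import numpy as np

def signature(img, n=64):
    """Horizontal + vertical projection profiles, resampled to fixed length."""
    def resample(p):
        return np.interp(np.linspace(0, 1, n), np.linspace(0, 1, p.size), p)
    h = resample(img.sum(axis=1).astype(float))   # one value per image row
    v = resample(img.sum(axis=0).astype(float))   # one value per image column
    s = np.concatenate([h, v])
    return (s - s.mean()) / (s.std() + 1e-9)      # zero-mean, unit-variance

def best_match(query_img, gallery_imgs):
    """Index of the gallery vehicle whose signature correlates best."""
    q = signature(query_img)
    scores = [float(np.dot(q, signature(g)) / q.size) for g in gallery_imgs]
    return int(np.argmax(scores)), scores
```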
Solid state replacement of rotating mirror cameras
NASA Astrophysics Data System (ADS)
Frank, Alan M.; Bartolick, Joseph M.
2007-01-01
Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotary mirror camera for the combination of frame count, speed, resolution and dynamic range. The rotary mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron-tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all-solid-state architecture, dubbed 'In-situ Storage Image Sensor' or 'ISIS', by Prof. Goji Etoh has made its first appearance in the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluation of the presently available technologies and on exploring the capabilities of the ISIS architecture. It is clear that although there is presently no single-chip camera that can simultaneously match the rotary mirror cameras, the ISIS architecture has the potential to approach their performance.
Optical design of portable nonmydriatic fundus camera
NASA Astrophysics Data System (ADS)
Chen, Weilin; Chang, Jun; Lv, Fengxian; He, Yifan; Liu, Xin; Wang, Dajiang
2016-03-01
The fundus camera is widely used in screening and diagnosis of retinal disease; it is a simple and widely used piece of medical equipment. Early fundus cameras dilated the pupil with a mydriatic to increase the amount of incoming light, which left patients with vertigo and blurred vision. Nonmydriatic operation is the trend in fundus cameras. A desktop fundus camera is not easy to carry and is only suitable for use in the hospital, whereas a portable nonmydriatic retinal camera is convenient for patient self-examination or for medical staff visiting a patient at home. This paper presents a portable nonmydriatic fundus camera with a field of view (FOV) of 40°. Two light sources are used: 590 nm for imaging and 808 nm for observing the fundus at high resolving power. Ring lights and a hollow mirror are employed to suppress stray light from the corneal center. The focus of the camera is adjusted by repositioning the CCD along the optical axis; the diopter range is -20 m⁻¹ to +20 m⁻¹.
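For a sense of scale, the sensor travel needed to cover the stated diopter range can be estimated with the thin-lens, small-defocus relation Δz ≈ D·f²; the focal length below is an assumed illustrative value, not taken from the paper:

```python
def sensor_travel_mm(diopter=20.0, focal_length_mm=30.0):
    # One-sided sensor travel compensating a refractive error of
    # 'diopter' m^-1: delta_z ~ D * f^2 (small-defocus approximation).
    # focal_length_mm is an assumed value for illustration only.
    f_m = focal_length_mm / 1000.0
    return diopter * f_m ** 2 * 1000.0   # travel in mm

print(sensor_travel_mm())  # ~18 mm of travel for 20 m^-1 at f = 30 mm
```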
Real time moving scene holographic camera system
NASA Technical Reports Server (NTRS)
Kurtz, R. L. (Inventor)
1973-01-01
A holographic motion picture camera system producing resolution of front surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts of the system are positioned on the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).
Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping
2017-04-03
Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. Simulations of plenoptic camera models, run prior to the experiment, can improve experimental efficiency and reduce cost. In this work, the microlens arrays of an established light-field camera model are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light-field imaging of static objects and flames is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging-screen utilization, and the shooting range of the depth of field.
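A hexagonally packed microlens array with three lens types can be modeled in a few lines; the (col + row) % 3 type assignment below is illustrative, not the paper's exact layout:

```python
import numpy as np

def hex_microlens_centers(n_cols, n_rows, pitch):
    # Hexagonal packing: odd rows shift by half a pitch and the row
    # spacing is pitch * sqrt(3) / 2; the third column of the result is
    # a cyclic lens-type index modeling three microlens focal lengths.
    centers = []
    for r in range(n_rows):
        y = r * pitch * np.sqrt(3) / 2.0
        x0 = pitch / 2.0 if r % 2 else 0.0
        for c in range(n_cols):
            centers.append((x0 + c * pitch, y, (c + r) % 3))
    return np.array(centers)   # columns: x, y, lens-type index
```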
Solid-state framing camera with multiple time frames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, K. L.; Stewart, R. E.; Steele, P. T.
2013-10-07
A high-speed solid-state framing camera has been developed which can operate over a wide range of photon energies. This camera measures the two-dimensional spatial profile of the flux incident on a cadmium selenide semiconductor at multiple times. This multi-frame camera has been tested at 3.1 eV and 4.5 keV. The framing camera currently records two frames with a temporal separation of 5 ps, but this separation can be varied from hundreds of femtoseconds up to nanoseconds, and the number of frames can be increased by angularly multiplexing the probe beam onto the cadmium selenide semiconductor.
ERIC Educational Resources Information Center
Reynolds, Ronald F.
1984-01-01
Describes the basic components of a space telescope that will be launched during a 1986 space shuttle mission. These components include a wide field/planetary camera, faint object spectroscope, high-resolution spectrograph, high-speed photometer, faint object camera, and fine guidance sensors. Data to be collected from these instruments are…
Orbital-science investigation: Part C: photogrammetry of Apollo 15 photography
Wu, Sherman S.C.; Schafer, Francis J.; Jordan, Raymond; Nakata, Gary M.; Derick, James L.
1972-01-01
Mapping of large areas of the Moon by photogrammetric methods was not seriously considered until the Apollo 15 mission. In this mission, a mapping camera system and a 61-cm optical-bar high-resolution panoramic camera, as well as a laser altimeter, were used. The mapping camera system comprises a 7.6-cm metric terrain camera and a 7.6-cm stellar camera mounted in a fixed angular relationship (an angle of 96° between the two camera axes). The metric camera has a glass focal-plane plate with reseau grids. The ground-resolution capability from an altitude of 110 km is approximately 20 m. Because of the auxiliary stellar camera and the laser altimeter, the resulting metric photography can be used not only for medium- and small-scale cartographic or topographic maps, but it also can provide a basis for establishing a lunar geodetic network. The optical-bar panoramic camera has a 135- to 180-line resolution, which is approximately 1 to 2 m of ground resolution from an altitude of 110 km. Very large scale specialized topographic maps for supporting geologic studies of lunar-surface features can be produced from the stereoscopic coverage provided by this camera.
Schmidt, Jürgen; Laarousi, Rihab; Stolzmann, Wolfgang; Karrer-Gauß, Katja
2018-06-01
In this article, we examine the performance of different eye blink detection algorithms under various constraints. The goal of the present study was to evaluate the performance of an electrooculogram- and camera-based blink detection process in both manually and conditionally automated driving phases. A further comparison between alert and drowsy drivers was performed in order to evaluate the impact of drowsiness on the performance of blink detection algorithms in both driving modes. Data snippets from 14 monotonous manually driven sessions (mean 2 h 46 min) and 16 monotonous conditionally automated driven sessions (mean 2 h 45 min) were used. In addition to comparing two data-sampling frequencies for the electrooculogram measures (50 vs. 25 Hz) and four different signal-processing algorithms for the camera videos, we compared the blink detection performance of 24 reference groups. The analysis of the videos was based on very detailed definitions of eyelid closure events. The correct detection rates for the alert and manual driving phases (maximum 94%) decreased significantly in the drowsy (minus 2% or more) and conditionally automated (minus 9% or more) phases. Blinking behavior is therefore significantly impacted by drowsiness as well as by automated driving, resulting in less accurate blink detection.
A wide-angle camera module for disposable endoscopy
NASA Astrophysics Data System (ADS)
Shim, Dongha; Yeon, Jesun; Yi, Jason; Park, Jongwon; Park, Soo Nam; Lee, Nanhee
2016-08-01
A wide-angle miniaturized camera module for disposable endoscopes is demonstrated in this paper. A lens module with a 150° angle of view (AOV) is designed and manufactured. All-plastic injection-molded lenses and a commercial CMOS image sensor are employed to reduce the manufacturing cost. The image sensor and an LED illumination unit are assembled with the lens module. The camera module does not include a camera processor, to further reduce its size and cost. The size of the camera module is 5.5 × 5.5 × 22.3 mm³. The diagonal field of view (FOV) of the camera module is measured to be 110°. A prototype of a disposable endoscope was implemented to perform pre-clinical animal testing, in which the esophagus of an adult beagle dog was observed. These results demonstrate the feasibility of a cost-effective and high-performance camera module for disposable endoscopy.
Evangelista, Dennis J.; Ray, Dylan D.; Hedrick, Tyson L.
2016-01-01
Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts. PMID:27444791
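At the heart of such multi-camera measurements is triangulating a 3D point from calibrated views. A standard linear (DLT) triangulation sketch, not specific to the authors' open-source software:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation: P1, P2 are 3x4 projection matrices of
    # two calibrated cameras; x1, x2 are pixel coordinates of the same
    # point in each view.  The 3D point is the null vector of A.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize to (x, y, z)
```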
The Surgeon's View: Comparison of Two Digital Video Recording Systems in Veterinary Surgery.
Giusto, Gessica; Caramello, Vittorio; Comino, Francesco; Gandini, Marco
2015-01-01
Video recording and photography during surgical procedures are useful in veterinary medicine for several reasons, including legal, educational, and archival purposes. Many systems are available, such as hand cameras, light-mounted cameras, and head cameras. We chose a reasonably priced head camera that is among the smallest video cameras available. To best describe its possible uses and advantages, we recorded video and images of eight different surgical cases and procedures, both in hospital and field settings. All procedures were recorded both with a head-mounted camera and a commercial hand-held photo camera. Then sixteen volunteers (eight senior clinicians and eight final-year students) completed an evaluation questionnaire. Both cameras produced high-quality photographs and videos, but observers rated the head camera significantly better regarding point of view and their understanding of the surgical operation. The head camera was considered significantly more useful in teaching surgical procedures. Interestingly, senior clinicians tended to assign generally lower scores compared to students. The head camera we tested is an effective, easy-to-use tool for recording surgeries and various veterinary procedures in all situations, with no need for assistance from a dedicated operator. It can be a valuable aid for veterinarians working in all fields of the profession and a useful tool for veterinary surgical education.
Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm
NASA Astrophysics Data System (ADS)
Lahamy, H.; Lichti, D.
2011-09-01
Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images, and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm was evaluated for its capability for hand motion tracking. The evaluation was performed in terms of the position accuracy of the tracking trajectory in the x, y and z directions in camera space, and of the time difference between image acquisition and image display. Three parameters were analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking; however, for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
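The "simple version of the mean-shift algorithm" amounts to iteratively re-centering a window on the centroid of a weight image. A minimal sketch, assuming the weight map (e.g. a hand-likelihood map derived from the range camera's amplitude or depth data) is supplied by the caller:

```python
import numpy as np

def mean_shift_track(weights, start_yx, win=30, max_iter=20, eps=0.5):
    # Move a square window to the weighted centroid of the pixels it
    # covers, repeating until the shift is below eps pixels.
    cy, cx = float(start_yx[0]), float(start_yx[1])
    h, w = weights.shape
    for _ in range(max_iter):
        y0, y1 = int(max(cy - win, 0)), int(min(cy + win + 1, h))
        x0, x1 = int(max(cx - win, 0)), int(min(cx + win + 1, w))
        patch = weights[y0:y1, x0:x1]
        total = patch.sum()
        if total <= 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = float((ys * patch).sum() / total)
        nx = float((xs * patch).sum() / total)
        done = np.hypot(ny - cy, nx - cx) < eps
        cy, cx = ny, nx
        if done:
            break
    return cy, cx
```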
High-speed imaging system for observation of discharge phenomena
NASA Astrophysics Data System (ADS)
Tanabe, R.; Kusano, H.; Ito, Y.
2008-11-01
A thin metal electrode tip instantly changes its shape into a sphere or a needlelike shape in a single electrical discharge of high current. These changes occur within several hundred microseconds. To observe these high-speed phenomena in a single discharge, an imaging system using a high-speed video camera and a high repetition rate pulse laser was constructed. A nanosecond laser, the wavelength of which was 532 nm, was used as the illuminating source of a newly developed high-speed video camera, HPV-1. The time resolution of our system was determined by the laser pulse width and was about 80 nanoseconds. The system can take one hundred pictures at 16- or 64-microsecond intervals in a single discharge event. A band-pass filter at 532 nm was placed in front of the camera to block the emission of the discharge arc at other wavelengths. Therefore, clear images of the electrode were recorded even during the discharge. If the laser was not used, only images of plasma during discharge and thermal radiation from the electrode after discharge were observed. These results demonstrate that the combination of a high repetition rate and a short pulse laser with a high speed video camera provides a unique and powerful method for high speed imaging.
The opto-cryo-mechanical design of the short wavelength camera for the CCAT Observatory
NASA Astrophysics Data System (ADS)
Parshley, Stephen C.; Adams, Joseph; Nikola, Thomas; Stacey, Gordon J.
2014-07-01
The CCAT observatory is a 25-m class Gregorian telescope designed for submillimeter observations that will be deployed at Cerro Chajnantor (~5600 m) in the high Atacama Desert region of Chile. The Short Wavelength Camera (SWCam) for CCAT is an integral part of the observatory, enabling the study of star formation at high and low redshifts. SWCam will be a facility instrument, available at first light and operating in the telluric windows at wavelengths of 350, 450, and 850 μm. In order to trace the large curvature of the CCAT focal plane, and to suit the available instrument space, SWCam is divided into seven sub-cameras, each configured to a particular telluric window. A fully refractive optical design in each sub-camera will produce diffraction-limited images. The material of choice for the optical elements is silicon, due to its excellent transmission in the submillimeter and its high index of refraction, enabling thin lenses of a given power. The cryostat's vacuum windows double as the sub-cameras' field lenses and are ~30 cm in diameter. The other lenses are mounted at 4 K. The sub-cameras will share a single cryostat providing thermal intercepts at 80, 15, 4, 1 and 0.1 K, with cooling provided by pulse tube cryocoolers and a dilution refrigerator. The use of the intermediate temperature stage at 15 K minimizes the load at 4 K and reduces operating costs. We discuss our design requirements, specifications, key elements and expected performance of the optical, thermal and mechanical design for the short wavelength camera for CCAT.
High-Resolution Scintimammography: A Pilot Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rachel F. Brem; Joelle M. Schoonjans; Douglas A. Kieper
2002-07-01
This study evaluated a novel high-resolution breast-specific gamma camera (HRBGC) for the detection of suggestive breast lesions. Methods: Fifty patients (with 58 breast lesions) for whom a scintimammogram was clinically indicated were prospectively evaluated with a general-purpose gamma camera and a novel HRBGC prototype. The results of the conventional and high-resolution nuclear studies were prospectively classified as negative (normal or benign) or positive (suggestive or malignant) by 2 radiologists who were unaware of the mammographic and histologic results. All of the included lesions were confirmed by pathology. Results: There were 30 benign and 28 malignant lesions. The sensitivity for detection of breast cancer was 64.3% (18/28) with the conventional camera and 78.6% (22/28) with the HRBGC. The specificity with both systems was 93.3% (28/30). For the 18 nonpalpable lesions, sensitivity was 55.5% (10/18) and 72.2% (13/18) with the general-purpose camera and the HRBGC, respectively. For lesions ≤1 cm, 7 of 15 were detected with the general-purpose camera and 10 of 15 with the HRBGC. Four lesions (median size, 8.5 mm) were detected only with the HRBGC and were missed by the conventional camera. Conclusion: Evaluation of indeterminate breast lesions with an HRBGC results in improved sensitivity for the detection of cancer, with greater improvement shown for nonpalpable and ≤1-cm lesions.
Optical design of space cameras for automated rendezvous and docking systems
NASA Astrophysics Data System (ADS)
Zhu, X.
2018-05-01
Visible cameras are essential components of a space automated rendezvous and docking (AR&D) system, which is utilized in many space missions including crewed or robotic spaceship docking, on-orbit satellite servicing, and autonomous landing and hazard avoidance. Cameras are ubiquitous devices in modern times, with countless lens designs that focus on high resolution and color rendition. By comparison, space AR&D cameras, while not required to have extremely high resolution and color rendition, impose some unique requirements on lenses. Fixed lenses with no moving parts, and separate lenses for narrow and wide field of view (FOV), are normally used in order to meet high reliability requirements. Cemented lens elements are usually avoided due to the wide temperature swings and outgassing requirements of the space environment. The lenses should be designed with exceptional stray-light performance and minimal lens flare, given the intense sunlight and the lack of atmospheric scattering in space. Furthermore, radiation-resistant glasses should be considered to prevent glass darkening from space radiation. Neptec has designed and built a narrow-FOV (NFOV) lens and a wide-FOV (WFOV) lens for an AR&D visible camera system. The lenses are designed using the ZEMAX program; the stray-light performance and the lens baffles are simulated using the TracePro program. This paper discusses general requirements for space AR&D camera lenses and the specific measures taken for the lenses to meet space environmental requirements.
NASA Astrophysics Data System (ADS)
Rowlette, Jeremy A.; Fotheringham, Edeline; Nichols, David; Weida, Miles J.; Kane, Justin; Priest, Allen; Arnone, David B.; Bird, Benjamin; Chapman, William B.; Caffey, David B.; Larson, Paul; Day, Timothy
2017-02-01
The field of infrared spectral imaging and microscopy is advancing rapidly due in large measure to the recent commercialization of the first high-throughput, high-spatial-definition quantum cascade laser (QCL) microscope. Having speed, resolution and noise performance advantages while also eliminating the need for cryogenic cooling, its introduction has established a clear path to translating the well-established diagnostic capability of infrared spectroscopy into clinical and pre-clinical histology, cytology and hematology workflows. Demand for even higher throughput while maintaining high-spectral fidelity and low-noise performance continues to drive innovation in QCL-based spectral imaging instrumentation. In this talk, we will present for the first time, recent technological advances in tunable QCL photonics which have led to an additional 10X enhancement in spectral image data collection speed while preserving the high spectral fidelity and SNR exhibited by the first generation of QCL microscopes. This new approach continues to leverage the benefits of uncooled microbolometer focal plane array cameras, which we find to be essential for ensuring both reproducibility of data across instruments and achieving the high-reliability needed in clinical applications. We will discuss the physics underlying these technological advancements as well as the new biomedical applications these advancements are enabling, including automated whole-slide infrared chemical imaging on clinically relevant timescales.
Field-based high-speed imaging of explosive eruptions
NASA Astrophysics Data System (ADS)
Taddeucci, J.; Scarlato, P.; Freda, C.; Moroni, M.
2012-12-01
Explosive eruptions involve, by definition, physical processes that are highly dynamic over short time scales. Capturing and parameterizing such processes is a major task in eruption understanding and forecasting, and a task that necessarily requires observational systems capable of high sampling rates. Seismic and acoustic networks are a prime tool for high-frequency observation of eruptions, recently joined by Doppler radar and electric sensors. In comparison with the above monitoring systems, imaging techniques provide more complete and direct information on surface processes, but usually at a lower sampling rate. However, recent developments in high-speed imaging systems now allow such information to be obtained with a spatial and temporal resolution suitable for the analysis of several key eruption processes. Our most recent setup for high-speed imaging of explosive eruptions (FAMoUS: FAst, MUltiparametric Set-up) includes: 1) a monochrome high-speed camera, capable of 500 frames per second (fps) at high-definition (1280x1024 pixel) resolution and up to 200000 fps at reduced resolution; 2) a thermal camera capable of 50-200 fps at 480-120x640 pixel resolution; and 3) two acoustic-to-infrasonic sensors. All instruments are time-synchronized via a data-logging system, a hand- or software-operated trigger, and GPS, allowing signals from other instruments or networks to be directly recorded by the same logging unit or to be readily synchronized for comparison. FAMoUS weighs less than 20 kg, easily fits into four hand-luggage-sized backpacks, and can be deployed in less than 20 minutes (and removed in less than 2, if needed). So far, explosive eruptions have been recorded at high speed at several active volcanoes, including Fuego and Santiaguito (Guatemala), Stromboli (Italy), Yasur (Vanuatu), and Eyjafjallajökull (Iceland). Image processing and analysis from these eruptions helped illuminate several eruptive processes, including: 1) Pyroclast ejection. High-speed videos reveal multiple, discrete ejection pulses within a single Strombolian explosion, with ejection velocities twice as high as previously recorded. Video-derived information on ejection velocity and ejecta mass can be combined with analytical and experimental models to constrain the physical parameters of the gas driving individual pulses. 2) Jet development. The ejection trajectory of pyroclasts can also be used to outline the spatial and temporal development of the eruptive jet and the dynamics of gas-pyroclast coupling within the jet, while high-speed thermal images add information on the temperature evolution in the jet itself as a function of pyroclast size and content. 3) Pyroclast settling. High-speed videos can be used to investigate the aerodynamic settling behavior of pyroclasts from bomb to ash in size, including ash aggregates, providing key parameters such as the drag coefficient as a function of Re, and particle density. 4) The generation and propagation of acoustic and shock waves. Phase condensation in volcanic and atmospheric aerosol is triggered by the transit of pressure waves and can be recorded in high-speed videos, allowing the speed and wavelength of the waves to be measured and compared with the corresponding infrasonic signals and theoretical predictions.
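The video-derived ejection velocities mentioned under point 1 reduce to tracked pixel displacements scaled by the ground sampling distance and the frame rate; a minimal sketch with illustrative numbers:

```python
import math

def ejection_velocity(track_px, fps, m_per_px):
    # Mean speed of a tracked pyroclast: track_px is a list of (x, y)
    # pixel positions in consecutive frames; fps and the meters-per-pixel
    # scale come from the camera setup and scene geometry.
    speeds = [math.hypot(x1 - x0, y1 - y0) * m_per_px * fps
              for (x0, y0), (x1, y1) in zip(track_px, track_px[1:])]
    return sum(speeds) / len(speeds)

# 500 fps, 0.05 m/pixel, ~4 px displacement per frame -> ~100 m/s
print(ejection_velocity([(10, 200), (10, 196), (11, 192)], 500, 0.05))
```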
FPGA Based Adaptive Rate and Manifold Pattern Projection for Structured Light 3D Camera System †
Lee, Sukhan
2018-01-01
The quality of the captured point cloud and the scanning speed of a structured-light 3D camera system depend on its capability to handle object surfaces with large reflectance variation, traded off against the required number of patterns to be projected. In this paper, we propose and implement a flexible embedded framework that is capable of triggering the camera once or multiple times, capturing single or multiple projections within a single camera exposure setting. This allows the 3D camera system to synchronize the camera and projector even for mismatched frame rates, so that the system is capable of projecting different types of patterns for different scan-speed applications. This allows the system to capture a high-quality 3D point cloud, even for surfaces with large reflectance variation, while achieving a high scan speed. The proposed framework is implemented on a Field Programmable Gate Array (FPGA), where the camera trigger is adaptively generated in such a way that the position and the number of triggers are automatically determined according to camera exposure settings. In other words, the projection frequency adapts to different scanning applications without altering the architecture. In addition, the proposed framework is unique in that it does not require any external memory for storage, because pattern pixels are generated in real time, which minimizes the complexity and size of the application-specific integrated circuit (ASIC) design and implementation. PMID:29642506
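The adaptive trigger placement can be illustrated with simple arithmetic: how many pattern slots fit in one exposure window and when each trigger should fire. A sketch of the scheduling logic only; parameter names are illustrative, not from the paper:

```python
def trigger_schedule(exposure_us, pattern_us, settle_us=0.0):
    # Number of projector triggers that fit in one camera exposure and
    # their firing times relative to exposure start; at least one
    # trigger is always generated.
    slot = pattern_us + settle_us
    n = max(int(exposure_us // slot), 1)
    return [i * slot for i in range(n)]

print(trigger_schedule(exposure_us=10000, pattern_us=2400, settle_us=100))
# -> [0, 2500, 5000, 7500]: four projections in a 10 ms exposure
```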
Li, Jin; Liu, Zilong
2017-07-24
Remote sensing cameras in the visible/near-infrared range are essential tools in Earth observation, deep-space exploration, and celestial navigation. Their imaging performance (image quality, here) directly determines the target-observation performance of a spacecraft, and even the successful completion of a space mission. Unfortunately, the camera itself (its optical system, image sensor, and electronics) limits on-orbit imaging performance. Here, we demonstrate an on-orbit high-resolution imaging method based on the invariable modulation transfer function (IMTF) of cameras. The IMTF, which is stable and insensitive to changes in ground targets, atmosphere, and environment, whether on orbit or on the ground, and which depends only on the camera itself, is extracted using a pixel optical focal plane (PFP). The PFP produces multiple spatial-frequency targets, which are used to calculate the IMTF at different frequencies. The resulting IMTF, in combination with a constrained least-squares filter, compensates for the camera's transfer function, removing the imaging degradation imposed by the camera itself. This method is experimentally confirmed: experiments on an on-orbit panchromatic camera indicate that the proposed method increases the average gradient 6.5-fold, the edge intensity 3.3-fold, and the MTF value 1.56-fold compared to the case where the IMTF is not used. This opens a door to pushing past the limitations of the camera itself, enabling high-resolution on-orbit optical imaging.
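The compensation step described here is a classical constrained least-squares (CLS) restoration; a frequency-domain sketch, assuming the extracted IMTF has already been sampled on the image grid:

```python
import numpy as np

def cls_restore(image, mtf, gamma=0.01):
    # Constrained least-squares filter: F = H* G / (|H|^2 + gamma |P|^2),
    # with H the camera transfer function (here, the extracted IMTF),
    # G the blurred image spectrum, and P the Laplacian regularizer.
    G = np.fft.fft2(image)
    lap = np.zeros(image.shape)
    lap[0, 0] = 4.0
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1.0
    P = np.fft.fft2(lap)
    F = np.conj(mtf) * G / (np.abs(mtf) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F))
```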
Low-cost digital dynamic visualization system
NASA Astrophysics Data System (ADS)
Asundi, Anand K.; Sajan, M. R.
1995-05-01
High-speed photographic systems such as the image rotation camera, the Cranz-Schardin camera and the drum camera are typically used for recording and visualizing dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, which requires time-consuming and tedious wet processing of the films. Digital cameras are currently replacing conventional cameras, to a certain extent, in static experiments, and there has recently been considerable interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration (TDI) mode for digitally recording dynamic scenes. Applications to solid as well as fluid impact problems are presented.
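TDI recording only integrates correctly when the sensor's row-transfer rate matches the image motion across the rows; a small sketch of that matching condition, with illustrative numbers:

```python
def tdi_line_rate_hz(object_speed_m_s, magnification, pixel_pitch_m):
    # The image must advance exactly one row per line-transfer period,
    # so the required line rate equals the image-plane speed in pixels/s.
    image_speed_m_s = object_speed_m_s * magnification   # speed at sensor
    return image_speed_m_s / pixel_pitch_m

# 10 m/s object, 0.1x optics, 10 um pixels -> 100 kHz line rate
print(tdi_line_rate_hz(10.0, 0.1, 10e-6))
```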
Low-cost camera modifications and methodologies for very-high-resolution digital images
USDA-ARS?s Scientific Manuscript database
Aerial color and color-infrared photography are usually acquired at high altitude so the ground resolution of the photographs is < 1 m. Moreover, current color-infrared cameras and manned aircraft flight time are expensive, so the objective is the development of alternative methods for obtaining ve...
Ghosh, Debashis; Michalopoulos, Nikolaos V; Davidson, Timothy; Wickham, Fred; Williams, Norman R; Keshtgar, Mohammed R
2017-04-01
Access to a nuclear medicine department for sentinel node imaging remains an issue in a number of hospitals in the UK and in many parts of the world. Sentinella® is a portable imaging camera used intra-operatively to produce real-time visual localisation of sentinel lymph nodes. Sentinella® was tested in a controlled laboratory environment at our centre, and we report our experience of the first use of this technology in the UK. Preoperative scintigrams of the axilla were obtained in 144 patients undergoing sentinel node biopsy using a conventional gamma camera (CGC). Sentinella® scans were performed intra-operatively to correlate with the pre-operative scintigram and to determine the presence of any residual hot node after the axilla was deemed to be clear based on the silence of the hand-held gamma probe. Sentinella® detected significantly more nodes compared with the CGC (p < 0.0001). Sentinella® picked up extra nodes in 5/144 cases after the axilla was found silent using the hand-held gamma probe. In 2/144 cases, extra nodes detected by Sentinella® confirmed the presence of tumour cells, which led to a complete axillary clearance. Sentinella® is a reliable technique for intra-operative localisation of radioactive nodes. It provides increased nodal visualisation rates compared to static scintigram imaging and proves to be an important tool for harvesting all hot sentinel nodes. This portable gamma camera could replace the use of conventional lymphoscintigrams, saving time and money for both patients and the health system. Copyright © 2016 Elsevier Ltd. All rights reserved.
High-speed and ultrahigh-speed cinematographic recording techniques
NASA Astrophysics Data System (ADS)
Miquel, J. C.
1980-12-01
A survey is presented of various high-speed and ultrahigh-speed cinematographic recording systems (covering a range of speeds from 100 to 14 million pps). Attention is given to the functional and operational characteristics of cameras and to details of high-speed cinematography techniques (including image processing and illumination). A list of cameras (many of them French) available in 1980 is presented.
Medium-sized aperture camera for Earth observation
NASA Astrophysics Data System (ADS)
Kim, Eugene D.; Choi, Young-Wan; Kang, Myung-Seok; Kim, Ee-Eul; Yang, Ho-Soon; Rasheed, Ad. Aziz Ad.; Arshad, Ahmad Sabirin
2017-11-01
Satrec Initiative and ATSB have been developing a medium-sized aperture camera (MAC) for an Earth observation payload on a small satellite. Developed as a push-broom high-resolution camera, it has one panchromatic and four multispectral channels. The panchromatic channel has a ground sampling distance of 2.5 m, and the multispectral channels 5 m, at a nominal altitude of 685 km. The 300 mm-aperture Cassegrain telescope contains two aspheric mirrors and two spherical correction lenses. Following a philosophy of building a simple and cost-effective camera, the mirrors incorporate no light-weighting, and the linear CCDs are mounted on a single PCB with no beam splitters. MAC is the main payload of RazakSAT, to be launched in 2005. RazakSAT is a 180 kg satellite carrying MAC, designed to provide high-resolution imagery with a 20 km swath width from a near-equatorial orbit (NEqO). The mission objective is to demonstrate the capability of a high-resolution remote sensing satellite system in a near-equatorial orbit. This paper gives an overview of the MAC and RazakSAT programmes and presents the current development status of MAC, focusing on key optical aspects of the Qualification Model.
Rogers, B.T. Jr.; Davis, W.C.
1957-12-17
This patent relates to high-speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. The camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal-plane shutter, various other mirror and lens systems, and an image recording surface. The combination of the rotating mirror and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, making a camera with this short a resolution time possible.
NASA Technical Reports Server (NTRS)
Mollberg, Bernard H.; Schardt, Bruton B.
1988-01-01
The Orbiter Camera Payload System (OCPS) is an integrated photographic system which is carried into earth orbit as a payload in the Space Transportation System (STS) Orbiter vehicle's cargo bay. The major component of the OCPS is a Large Format Camera (LFC), a precision wide-angle cartographic instrument that is capable of producing high resolution stereo photography of great geometric fidelity in multiple base-to-height (B/H) ratios. A secondary, supporting system to the LFC is the Attitude Reference System (ARS), which is a dual lens Stellar Camera Array (SCA) and camera support structure. The SCA is a 70-mm film system which is rigidly mounted to the LFC lens support structure and which, through the simultaneous acquisition of two star fields with each earth-viewing LFC frame, makes it possible to determine precisely the pointing of the LFC optical axis with reference to the earth nadir point. Other components complete the current OCPS configuration as a high precision cartographic data acquisition system. The primary design objective for the OCPS was to maximize system performance characteristics while maintaining a high level of reliability compatible with Shuttle launch conditions and the on-orbit environment. The full-up OCPS configuration was launched on a highly successful maiden voyage aboard the STS Orbiter vehicle Challenger on October 5, 1984, as a major payload aboard mission STS 41-G. This report documents the system design, the ground testing, the flight configuration, and an analysis of the results obtained during the Challenger mission STS 41-G.
Imagers for digital still photography
NASA Astrophysics Data System (ADS)
Bosiers, Jan; Dillen, Bart; Draijer, Cees; Manoury, Erik-Jan; Meessen, Louis; Peters, Inge
2006-04-01
This paper gives an overview of the requirements for, and current state-of-the-art of, CCD and CMOS imagers for use in digital still photography. Four market segments will be reviewed: mobile imaging, consumer "point-and-shoot cameras", consumer digital SLR cameras and high-end professional camera systems. The paper will also present some challenges and innovations with respect to packaging, testing, and system integration.
Coincidence electron/ion imaging with a fast frame camera
NASA Astrophysics Data System (ADS)
Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin
2015-05-01
A new time- and position-sensitive particle detection system based on a fast-frame CMOS camera has been developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast-frame CMOS camera, and a high-speed digitizer. The system collects the positional information of ions/electrons from the fast-frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the MCPs processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight (TOF) spectrum. Efficient computer algorithms have been developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced by strong-field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve good energy resolution along the TOF axis.
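The real-time centroiding step can be sketched with standard labeling tools; an illustration of the idea, not the authors' 1 kHz implementation:

```python
import numpy as np
from scipy import ndimage

def centroid_spots(frame, threshold):
    # Threshold a camera frame, label connected bright spots, and return
    # (y, x, total intensity) per spot; the intensity is what gets
    # correlated with MCP time-of-flight peak heights for multi-hit sorting.
    labels, n = ndimage.label(frame > threshold)
    idx = list(range(1, n + 1))
    coms = ndimage.center_of_mass(frame, labels, idx)
    sums = ndimage.sum(frame, labels, idx)
    return [(y, x, s) for (y, x), s in zip(coms, sums)]
```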
Applications of a shadow camera system for energy meteorology
NASA Astrophysics Data System (ADS)
Kuhn, Pascal; Wilbert, Stefan; Prahl, Christoph; Garsche, Dominik; Schüler, David; Haase, Thomas; Ramirez, Lourdes; Zarzalejo, Luis; Meyer, Angela; Blanc, Philippe; Pitz-Paal, Robert
2018-02-01
Downward-facing shadow cameras might play a major role in future energy meteorology. Shadow cameras directly image shadows on the ground from an elevated position. They are used to validate other systems (e.g. all-sky imager based nowcasting systems, cloud speed sensors or satellite forecasts) and can potentially provide short term forecasts for solar power plants. Such forecasts are needed for electricity grids with high penetrations of renewable energy and can help to optimize plant operations. In this publication, two key applications of shadow cameras are briefly presented.
NASA Astrophysics Data System (ADS)
Gonzaga, S.; et al.
2011-03-01
ACS was designed to provide a deep, wide-field survey capability from the visible to near-IR using the Wide Field Camera (WFC), high resolution imaging from the near-UV to near-IR with the now-defunct High Resolution Camera (HRC), and solar-blind far-UV imaging using the Solar Blind Camera (SBC). The discovery efficiency of ACS's Wide Field Channel (i.e., the product of WFC's field of view and throughput) is 10 times greater than that of WFPC2. The failure of ACS's CCD electronics in January 2007 brought a temporary halt to CCD imaging until Servicing Mission 4 in May 2009, when WFC functionality was restored. Unfortunately, the high-resolution optical imaging capability of HRC was not recovered.
Fast and compact internal scanning CMOS-based hyperspectral camera: the Snapscan
NASA Astrophysics Data System (ADS)
Pichette, Julien; Charle, Wouter; Lambrechts, Andy
2017-02-01
Imec has developed a process for the monolithic integration of optical filters on top of CMOS image sensors, leading to compact, cost-efficient and fast hyperspectral cameras. Linescan cameras are typically used in remote sensing or for conveyor-belt applications, but translation of the target is not always possible for large objects or in many medical applications. Therefore, we introduce a novel camera, the Snapscan (patent pending), that exploits internal movement of a linescan sensor to enable fast and convenient acquisition of high-resolution hyperspectral cubes (up to 2048 × 3652 × 150 over the 475-925 nm spectral range). The Snapscan combines the spectral and spatial resolutions of a linescan system with the convenience of a snapshot camera.
Vacuum compatible miniature CCD camera head
Conder, Alan D.
2000-01-01
A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, versatile, and capable of operating both in and out of a vacuum environment. It uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows close stacking of the PC boards (0.04 inch, for example). Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high-energy-density plasmas, for a variety of military, industrial, and medical imaging applications.
Video model deformation system for the National Transonic Facility
NASA Technical Reports Server (NTRS)
Burner, A. W.; Snow, W. L.; Goad, W. K.
1983-01-01
A photogrammetric closed circuit television system to measure model deformation at the National Transonic Facility is described. The photogrammetric approach was chosen because of its inherent rapid data recording of the entire object field. Video cameras are used to acquire data instead of film cameras due to the inaccessibility of cameras which must be housed within the cryogenic, high pressure plenum of this facility. A rudimentary theory section is followed by a description of the video-based system and control measures required to protect cameras from the hostile environment. Preliminary results obtained with the same camera placement as planned for NTF are presented and plans for facility testing with a specially designed test wing are discussed.
A real-time remote video streaming platform for ultrasound imaging.
Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel
2016-08-01
Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill that requires a high degree of training and hands-on experience; however, there are a limited number of skilled sonographers in remote areas. In this work, we aim to develop a real-time video streaming platform that allows specialist physicians to remotely monitor ultrasound exams. To this end, the ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is the use of an open-source platform for video streaming, which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system were evaluated for several ultrasound video resolutions, and the results show that ultrasound video close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 s, which is acceptable for accurate real-time diagnosis.
Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R
2018-05-21
Three-dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth camera 3D surface imaging system in imaging people with obesity. The concurrent validity of the depth camera based system was investigated by comparing measures of mid-trunk volume to a gold standard. The repeatability and acceptability of the depth camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth camera system and the gold standard but excellent correlation between volume estimates (r² = 0.997), with little evidence of proportional bias. The depth camera system was highly repeatable: low typical error (0.192 L), high intraclass correlation coefficient (>0.999) and low technical error of measurement (0.64%). Depth camera based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low-cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting. Copyright © 2018 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
Global vision system in laparoscopy.
Rivas-Blanco, I; Sánchez-de-Badajoz, E; García-Morales, I; Lage-Sánchez, J M; Sánchez-Gallegos, P; Pérez-Del-Pulgar, C J; Muñoz, V F
2017-05-01
The main difficulty in laparoscopic or robot-assisted surgery is the narrow visual field, restricted by the endoscope's access port. This restriction is coupled with the difficulty of handling the instruments, which is due not only to the access port but also to the loss of depth perception and perspective caused by the lack of natural lighting. In this article, we describe a global vision system and report on our initial experience in a porcine model. The global vision system consists of a series of intraabdominal devices that increase the visual field and help recover perspective through the simulation of natural shadows. These devices are a series of high-definition cameras and LED lights, which are inserted and fixed to the abdominal wall using magnets. The system's efficacy was assessed in a varicocelectomy and a nephrectomy. The various intraabdominal cameras offer a greater number of intuitive points of view of the surgical field compared with the conventional telescope and appear to provide a view similar to that in open surgery. Areas previously inaccessible to the standard telescope can now be reached. The additional light sources create shadows that increase the perspective of the surgical field. This system appears to increase the possibilities for laparoscopic or robot-assisted surgery because it offers an instant view of almost the entire abdomen, enabling more complex procedures that currently require an open pathway. Copyright © 2016 AEU. Published by Elsevier España, S.L.U. All rights reserved.
A continuous-flow capillary mixing method to monitor reactions on the microsecond time scale.
Shastry, M C; Luck, S D; Roder, H
1998-01-01
A continuous-flow capillary mixing apparatus, based on the original design of Regenfuss et al. (Regenfuss, P., R. M. Clegg, M. J. Fulwyler, F. J. Barrantes, and T. M. Jovin. 1985. Rev. Sci. Instrum. 56:283-290), has been developed with significant advances in mixer design, detection method and data analysis. To overcome the problems associated with the free-flowing jet used for observation in the original design (instability, optical artifacts due to scattering, poor definition of the geometry), the solution emerging from the capillary is injected directly into a flow cell joined to the tip of the outer capillary via a ground-glass joint. The reaction kinetics are followed by measuring fluorescence versus distance downstream from the mixer, using an Hg(Xe) arc lamp for excitation and a digital camera with a UV-sensitized CCD detector for detection. Test reactions involving fluorescent dyes indicate that mixing is completed within 15 µs of its initiation and that the dead time of the measurement is 45 ± 5 µs, which represents a >30-fold improvement in time resolution over conventional stopped-flow instruments. The high sensitivity and linearity of the CCD camera have been instrumental in obtaining artifact-free kinetic data over the time window from approximately 45 µs to a few milliseconds, with signal-to-noise levels comparable to those of conventional methods. The scope of the method is discussed and illustrated with an example of a protein folding reaction. PMID:9591695
Endoscopic endonasal trans-sphenoid surgery of pituitary adenoma
Yadav, YR; Sachdev, S; Parihar, V; Namdev, H; Bhatele, PR
2012-01-01
Endoscopic endonasal trans-sphenoid surgery (EETS) is increasingly used for pituitary lesions. Pre-operative CT and MRI scans and peroperative endoscopic visualization can provide useful anatomical information. EETS is indicated in sellar, suprasellar, intraventricular, retro-infundibular, and invasive tumors. Recurrent and residual lesions, pituitary apoplexy and empty sella syndrome can be managed by EETS. Modern neuronavigation techniques, ultrasonic aspirators and the ultrasonic bone curette can add to its safety. The binostril approach provides a wider working area. A high-definition camera is much superior to a three-chip camera. Most recent reports favor EETS in terms of safety, quality of life, tumor resection, hospital stay, and better endocrinological and visual outcomes compared to the microscopic technique. Nasal symptoms, blood loss and operating time are less with EETS. Various naso-septal flaps and other techniques of CSF-leak repair can help reduce complications. Complications can be reduced further once the learning curve is achieved, with a good understanding of the limitations and proper patient selection. Use of neuronavigation, proper post-operative care of endocrine function, establishing pituitary centers of excellence, and more focused residency and endoscopic fellowship training could improve results. A faster and safer transition from microscopic to endoscopic surgery can be achieved through the team concept of neurosurgeon/otolaryngologist, attending hands-on cadaveric dissection, practice on models, and observation of live surgeries. Conversion to a microscopic or endoscopic-assisted approach may be required in selected patients. Multi-modality treatment may be required in giant and invasive tumors. EETS appears to be a better surgical option for most pituitary adenomas. PMID:23188987
How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995, digital still cameras in expensive SLR formats had 6 mega-pixels and produced high-quality images (with significant image processing). In 2005, significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010, film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a “pixel war” in which the driving feature of the cameras was the pixel count; even moderate-cost (~$120) DSCs would have 14 mega-pixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single-lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper explores why larger pixels and sensors are key to the future of DSCs.
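The arithmetic behind the title's question is short: at a common 300 dpi photo-quality target, a 4"×6" print needs only about 2.2 mega-pixels, which is why pixel counts far beyond that mainly trade away pixel size:

```python
def pixels_for_print(width_in=6.0, height_in=4.0, dpi=300):
    # Pixel dimensions and megapixel count needed to print at the given
    # density; 300 dpi is a common 'photo quality' target.
    w, h = int(width_in * dpi), int(height_in * dpi)
    return w, h, w * h / 1e6

print(pixels_for_print())   # (1800, 1200, 2.16): ~2.2 MP suffices
```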
Microprocessor-controlled wide-range streak camera
NASA Astrophysics Data System (ADS)
Lewis, Amy E.; Hollabaugh, Craig
2006-08-01
Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera, which incorporates many advanced features beyond those currently available in streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.
Dynamic photoelasticity by TDI imaging
NASA Astrophysics Data System (ADS)
Asundi, Anand K.; Sajan, M. R.
2001-06-01
High-speed photographic systems like the image rotation camera, the Cranz-Schardin camera and the drum camera are typically used for the recording and visualization of dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film recording, which requires time-consuming and tedious wet processing of the films. Digital cameras are replacing conventional cameras, to a certain extent, in static experiments, and there has recently been considerable interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in the Time Delay and Integration mode for digitally recording dynamic photoelastic stress patterns. Applications to strobe and streak photoelastic pattern recording, and system limitations, are explained in the paper.
PBF Reactor Building (PER620). Camera facing south end of high ...
PBF Reactor Building (PER-620). Camera facing south end of high bay. Vertical-lift door is being installed. Later, pneumatic seals will be installed around door. Photographer: Kirsh. Date: September 31, 1968. INEEL negative no. 68-3176 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
High speed multiwire photon camera
NASA Technical Reports Server (NTRS)
Lacy, Jeffrey L. (Inventor)
1991-01-01
An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.
High speed multiwire photon camera
NASA Technical Reports Server (NTRS)
Lacy, Jeffrey L. (Inventor)
1989-01-01
An improved multiwire proportional counter camera having particular utility in the field of clinical nuclear medicine imaging. The detector utilizes direct coupled, low impedance, high speed delay lines, the segments of which are capacitor-inductor networks. A pile-up rejection test is provided to reject confused events otherwise caused by multiple ionization events occurring during the readout window.
High-Speed Videography Instrumentation And Procedures
NASA Astrophysics Data System (ADS)
Miller, C. E.
1982-02-01
High-speed videography has been an electronic analog of low-speed film cameras, with the advantages of instant replay and simplicity of operation. Recent advances have pushed frame rates into the realm of the rotating prism camera. Some characteristics of videography systems are discussed in conjunction with applications in sports analysis and with sports equipment testing.
NASA Astrophysics Data System (ADS)
Mundermann, Lars; Mundermann, Annegret; Chaudhari, Ajit M.; Andriacchi, Thomas P.
2005-01-01
Anthropometric parameters are fundamental for a wide variety of applications in biomechanics, anthropology, medicine and sports. Recent technological advancements provide methods for constructing 3D surfaces directly. Of these new technologies, visual hull construction may be the most cost-effective yet sufficiently accurate method. However, the conditions influencing the accuracy of anthropometric measurements based on visual hull reconstruction are unknown. The purpose of this study was to evaluate the conditions that influence the accuracy of 3D shape-from-silhouette reconstruction of body segments as a function of the number of cameras, camera resolution and object contours. The results demonstrate that the visual hulls lacked accuracy in concave regions and narrow spaces, but setups with a high number of cameras reconstructed a human form with an average accuracy of 1.0 mm. In general, setups with fewer than 8 cameras yielded largely inaccurate visual hull constructions, while setups with 16 or more cameras provided good volume estimations. Body segment volumes were obtained with an average error of 10% at 640x480 resolution using 8 cameras. Changes in resolution did not significantly affect the average error. However, substantial decreases in error were observed with an increasing number of cameras (33.3% using 4 cameras; 10.5% using 8 cameras; 4.1% using 16 cameras; 1.2% using 64 cameras).
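Visual hull construction itself is compact to express: a voxel survives only if it projects inside every camera's silhouette. A minimal carving sketch over a precomputed grid of voxel centers:

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_pts):
    # silhouettes: boolean HxW masks, one per camera; projections: 3x4
    # camera matrices; grid_pts: (N, 3) voxel centers.  A point is kept
    # only if every camera sees it inside its silhouette.
    pts_h = np.hstack([grid_pts, np.ones((len(grid_pts), 1))])
    keep = np.ones(len(grid_pts), dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = pts_h @ P.T
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        ok = np.zeros(len(grid_pts), dtype=bool)
        ok[inside] = mask[v[inside], u[inside]]
        keep &= ok
    return grid_pts[keep]
```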
Stereo depth distortions in teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Vonsydow, Marika
1988-01-01
In teleoperation, a typical application of stereo vision is to view a work space located a short distance (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors on the order of 2 cm were measured. A geometric analysis was made of the distortion of the fronto-parallel plane for stereo TV viewing, and the results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration giving high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolutions but cause greater depth distortions. Thus, with larger intercamera distances, operators will make greater depth errors (because of the greater distortions) but will be more certain that they are not errors (because of the higher resolution).
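For a feel for the numbers, the sketch below applies the ideal parallel-axis stereo relations (a simplification of the converged geometry analyzed in the abstract; function names and example values are ours, not the paper's):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Ideal parallel-axis stereo: Z = f * b / d."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, z_m, disparity_err_px=1.0):
    """Depth uncertainty from a small disparity error, obtained by
    differentiating Z = f*b/d:  dZ ~= Z**2 / (f * b) * dd.
    A larger baseline b shrinks dZ (finer depth resolution), which is
    one side of the resolution/distortion trade-off described above."""
    return z_m ** 2 / (f_px * baseline_m) * disparity_err_px

# Illustrative numbers (not from the paper): f = 1000 px, b = 0.10 m,
# Z = 1.4 m gives dZ ~ 0.0196 m -- the ~2 cm error scale quoted above.
print(depth_error(1000.0, 0.10, 1.4))
```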
Towards next generation 3D cameras
NASA Astrophysics Data System (ADS)
Gupta, Mohit
2017-03-01
We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), the presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in the widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that address these long-standing problems. This includes designing 'all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover the shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 micron resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, having been adopted in robotic inspection and assembly systems.
Extended spectrum SWIR camera with user-accessible Dewar
NASA Astrophysics Data System (ADS)
Benapfl, Brendan; Miller, John Lester; Vemuri, Hari; Grein, Christoph; Sivananthan, Siva
2017-02-01
Episensors has developed a series of extended short wavelength infrared (eSWIR) cameras based on high-Cd-concentration Hg1-xCdxTe absorbers. The cameras have a bandpass extending to a 3 micron cutoff wavelength, opening new applications relative to traditional InGaAs-based cameras. Applications and uses are discussed and examples given. A liquid-nitrogen pour-filled version was initially developed. This was followed by a compact Stirling-cooled version with detectors operating at 200 K. Each camera has unique sensitivity and performance characteristics. The cameras' size, weight and power specifications are presented, along with images captured with bandpass filters and eSWIR sources to demonstrate spectral response beyond 1.7 microns. The soft-seal Dewars of the cameras are designed for accessibility and can be opened and modified in a standard laboratory environment. This modular approach gives users the flexibility to swap internal components such as cold filters and cold stops. The core electronics of the Stirling-cooled camera are based on a single commercial field programmable gate array (FPGA) that also performs on-board non-uniformity corrections and bad pixel replacement, and directly drives any standard HDMI display.
Measuring and Estimating Normalized Contrast in Infrared Flash Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2013-01-01
Infrared flash thermography (IRFT) is used to detect void-like flaws in a test object. The IRFT technique involves heating the part surface with a flash from flash lamps. The post-flash evolution of the part surface temperature is sensed by an IR camera in terms of the pixel intensity of image pixels. The technique involves recording the IR video image data and analyzing it using the normalized pixel intensity and temperature contrast method to characterize void-like flaws in terms of depth and width. This work introduces a new definition of the normalized IR pixel intensity contrast and the normalized surface temperature contrast. A procedure is provided to compute the pixel intensity contrast from the camera pixel intensity evolution data. The pixel intensity contrast and the corresponding surface temperature contrast differ but are related. This work provides a method to estimate the temperature evolution and the normalized temperature contrast from the measured pixel intensity evolution data and some additional measurements made during data acquisition.
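As general context for contrast processing in flash thermography (the paper introduces its own new definitions, which are not reproduced here), one widely used form of normalized contrast can be sketched as follows; all names are ours:

```python
import numpy as np

def normalized_contrast(i_flaw, i_ref, i_pre_flaw=0.0, i_pre_ref=0.0):
    """One common normalized-contrast form used in flash thermography:
    subtract the pre-flash (cold) level from the pixel-intensity
    evolution over the flaw and over a sound reference region, then
    take the relative difference against the reference evolution."""
    flaw = np.asarray(i_flaw, dtype=float) - i_pre_flaw
    ref = np.asarray(i_ref, dtype=float) - i_pre_ref
    return (flaw - ref) / ref
```

A deeper or narrower void changes when and how strongly this contrast peaks, which is what allows flaw depth and width to be characterized from the evolution curves.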
Wearn, Oliver R.; Rowcliffe, J. Marcus; Carbone, Chris; Bernard, Henry; Ewers, Robert M.
2013-01-01
The proliferation of camera-trapping studies has led to a spate of extensions in the known distributions of many wild cat species, not least in Borneo. However, we still do not have a clear picture of the spatial patterns of felid abundance in Southeast Asia, particularly with respect to the large areas of highly-disturbed habitat. An important obstacle to increasing the usefulness of camera trap data is the widespread practice of setting cameras at non-random locations. Non-random deployment interacts with non-random space-use by animals, causing biases in our inferences about relative abundance from detection frequencies alone. This may be a particular problem if surveys do not adequately sample the full range of habitat features present in a study region. Using camera-trapping records and incidental sightings from the Kalabakan Forest Reserve, Sabah, Malaysian Borneo, we aimed to assess the relative abundance of felid species in highly-disturbed forest, as well as investigate felid space-use and the potential for biases resulting from non-random sampling. Although the area has been intensively logged over three decades, it was found to still retain the full complement of Bornean felids, including the bay cat Pardofelis badia, a poorly known Bornean endemic. Camera-trapping using strictly random locations detected four of the five Bornean felid species and revealed inter- and intra-specific differences in space-use. We compare our results with an extensive dataset of >1,200 felid records from previous camera-trapping studies and show that the relative abundance of the bay cat, in particular, may have previously been underestimated due to the use of non-random survey locations. Further surveys for this species using random locations will be crucial in determining its conservation status. We advocate the more widespread use of random survey locations in future camera-trapping surveys in order to increase the robustness and generality of inferences that can be made. PMID:24223717
Analysis of a severe head injury in World Cup alpine skiing.
Yamazaki, Junya; Gilgien, Matthias; Kleiven, Svein; McIntosh, Andrew S; Nachbauer, Werner; Müller, Erich; Bere, Tone; Bahr, Roald; Krosshaug, Tron
2015-06-01
Traumatic brain injury (TBI) is the leading cause of death in alpine skiing. It has been found that helmet use can reduce the incidence of head injuries by between 15% and 60%. However, knowledge of optimal helmet performance criteria in World Cup alpine skiing is currently limited owing to the lack of biomechanical data from real crash situations. This study aimed to estimate impact velocities in a severe TBI case in World Cup alpine skiing. Video sequences from a TBI case in World Cup alpine skiing were analyzed using a model-based image matching technique. Video sequences from four camera views were obtained in full high-definition (1080p) format. A three-dimensional model of the course was built based on accurate measurements of piste landmarks and matched to the background video footage using the animation software Poser 4. A trunk-neck-head model was used for tracking the skier's trajectory. Immediately before head impact, the downward velocity component was estimated to be 8 m·s⁻¹. After impact, the upward velocity was 3 m·s⁻¹, whereas the velocity parallel to the slope surface was reduced from 33 m·s⁻¹ to 22 m·s⁻¹. The frontal plane angular velocity of the head changed from 80 rad·s⁻¹ left tilt immediately before impact to 20 rad·s⁻¹ right tilt immediately after impact. A unique combination of high-definition video footage and accurate measurements of landmarks on the slope made possible a high-quality analysis of head impact velocity in a severe TBI case. The estimates can provide crucial information on how to prevent TBI through helmet performance criteria and design.
Pelletier, Dominique; Leleu, Kévin; Mallet, Delphine; Mou-Tham, Gérard; Hervé, Gilles; Boureau, Matthieu; Guilpart, Nicolas
2012-01-01
Observing spatial and temporal variations of marine biodiversity with non-destructive techniques is central to understanding ecosystem resilience and to monitoring and assessing conservation strategies, e.g. Marine Protected Areas. Observations are generally obtained through Underwater Visual Censuses (UVC) conducted by divers. The problems inherent in the presence of divers have been discussed in several papers. Video techniques are increasingly used for observing underwater macrofauna and habitat, and most video techniques that do not require a diver use baited remote systems. In this paper, we present an original video technique which relies on an unbaited, rotating remote system including a high definition camera. The system is set on the sea floor to record images, which are then analysed in the office to quantify biotic and abiotic sea bottom cover and to identify and count fish species and other species such as marine turtles. The technique was extensively tested in a highly diversified coral reef ecosystem in the South Lagoon of New Caledonia, based on a protocol covering both protected and unprotected areas in major lagoon habitats. The technique made it possible to detect and identify a large number of species, in particular fished species, which were not disturbed by the system. Habitat could easily be investigated through the images, and a large number of observations could be carried out per day at sea. This study showed the strong potential of this unobtrusive technique for observing both macrofauna and habitat. It offers unique spatial coverage and can be implemented at sea at a reasonable cost by non-expert staff. As such, this technique is particularly interesting for investigating and monitoring coastal biodiversity in the light of current conservation challenges and increasing monitoring needs.
SU-E-T-161: SOBP Beam Analysis Using Light Output of Scintillation Plate Acquired by CCD Camera.
Cho, S; Lee, S; Shin, J; Min, B; Chung, K; Shin, D; Lim, Y; Park, S
2012-06-01
To analyze Bragg-peak beams in an SOBP (spread-out Bragg peak) beam using a CCD (charge-coupled device) camera - scintillation screen system, we separated each Bragg-peak beam using the light output of a high-sensitivity scintillation material acquired by the CCD camera and compared them with Bragg-peak beams calculated by Monte Carlo simulation. In this study, the CCD camera - scintillation screen system was constructed from a high-sensitivity scintillation plate (Gd2O2S:Tb), a right-angled prismatic PMMA phantom, and a Marlin F-201B IEEE-1394 CCD camera. The SOBP beam, irradiated in the double scattering mode of a PROTEUS 235 proton therapy machine at NCC, has an 8 cm width and a 13 g/cm² range. The gain, dose rate and current of this beam are 50, 2 Gy/min and 70 nA, respectively. We also simulated the light output of the scintillation plate for the SOBP beam using the Geant4 toolkit. We evaluated the light output of the high-sensitivity scintillation plate for integration times of 0.1-1.0 s. The images from the CCD camera at the shortest integration time (0.1 s) were acquired automatically and randomly, and the Bragg-peak beams in the SOBP beam were analyzed from the acquired images. The SOBP beam used in this study was then calculated with the Geant4 toolkit and the constituent Bragg-peak beams were obtained with the ROOT program. The SOBP beam consists of 13 Bragg-peak beams. The experimental results were compared with those of the simulation. We analyzed the Bragg-peak beams in the SOBP beam using the light output of the scintillation plate acquired by the CCD camera and compared them with the Geant4 simulation. We plan to study SOBP beam analysis using a more effective image acquisition technique. © 2012 American Association of Physicists in Medicine.
High-Speed Edge-Detecting Line Scan Smart Camera
NASA Technical Reports Server (NTRS)
Prokop, Norman F.
2012-01-01
A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in an inlet shock detection system developed at the NASA Glenn Research Center. The inlet shock is detected by projecting a laser sheet through the airflow. The shock is the densest part of the airflow and refracts the laser sheet the most in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip, or negative peak, within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes, in real time, the linear image containing the shock shadowgraph and outputs the shock location. Previously, a high-speed camera and a personal computer performed the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock or negative edge location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and an onboard or external digital interface to provide serial data such as RS-232/485, USB, Ethernet, or CAN bus; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
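The core processing step, finding the negative peak left by the shock shadowgraph in a line-scan intensity profile, can be sketched in software as follows (an illustration with assumed names, not the smart camera's actual analog/digital circuit):

```python
import numpy as np

def shock_location(profile, background=None):
    """Locate the shock shadowgraph as the deepest dip (negative peak)
    in a single line-scan intensity profile.  Subtracting a no-shock
    background profile first makes the dip stand out; a 3-point
    parabolic fit through the minimum and its neighbours gives a
    sub-pixel position estimate."""
    p = np.asarray(profile, dtype=float)
    if background is not None:
        p = p - np.asarray(background, dtype=float)
    i = int(np.argmin(p))
    if 0 < i < len(p) - 1:
        a, b, c = p[i - 1], p[i], p[i + 1]
        denom = a - 2.0 * b + c                  # parabola curvature
        if denom != 0.0:
            return i + 0.5 * (a - c) / denom     # vertex of the parabola
    return float(i)
```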
Prototypic Development and Evaluation of a Medium Format Metric Camera
NASA Astrophysics Data System (ADS)
Hastedt, H.; Rofallski, R.; Luhmann, T.; Rosenbauer, R.; Ochsner, D.; Rieke-Zapp, D.
2018-05-01
Engineering applications require high-precision 3D measurement techniques for object sizes that vary between small volumes (2-3 m in each direction) and large volumes (around 20 x 20 x 1-10 m). The requested precision in object space (1σ RMS) is defined to be within 0.1-0.2 mm for large volumes and less than 0.01 mm for small volumes. In particular for large-volume applications, the availability of a metric camera would have several advantages: 1) high-quality optical components and stabilisation allow for a stable interior geometry of the camera itself; 2) a stable geometry leads to a stable interior orientation that enables a priori camera calibration; 3) a higher resulting precision can be expected. In this article the development and accuracy evaluation of a new metric camera, the ALPA 12 FPS add|metric, are presented. Its general accuracy potential is tested against calibrated lengths in a small-volume test environment based on the German guideline VDI/VDE 2634.1 (2002). Maximum length measurement errors of less than 0.025 mm are achieved across the different scenarios tested. The accuracy potential for large volumes is estimated within a feasibility study on the application of photogrammetric measurements to deformation estimation on a large wooden shipwreck in the German Maritime Museum. An accuracy of 0.2-0.4 mm is reached over a length of 28 m (given by a distance from a laser-tracker network measurement). All analyses have shown high stability of the interior orientation of the camera and indicate the applicability of a priori camera calibration for subsequent 3D measurements.
Precision of FLEET Velocimetry Using High-Speed CMOS Camera Systems
NASA Technical Reports Server (NTRS)
Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.
2015-01-01
Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. We also compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. Precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; binning provided the greatest improvement for the un-intensified camera systems, which had low signal-to-noise ratios. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 meters per second in air and 0.2 meters per second in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision HighSpeed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delays generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
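A minimal sketch of the row-wise digital binning step described above (our own version; the paper's processing pipeline is more involved):

```python
import numpy as np

def bin_rows(image, n=8):
    """Row-wise digital binning: sum each group of n adjacent rows in
    post-processing (conceptually like on-sensor binning).  Summing n
    rows raises the signal n-fold while shot noise grows only ~sqrt(n),
    which is why binning helped the low-SNR un-intensified cameras most."""
    rows = (image.shape[0] // n) * n            # drop any remainder rows
    return image[:rows].reshape(-1, n, image.shape[1]).sum(axis=1)
```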
Image intensification; Proceedings of the Meeting, Los Angeles, CA, Jan. 17, 18, 1989
NASA Astrophysics Data System (ADS)
Csorba, Illes P.
Various papers on image intensification are presented. Individual topics discussed include: status of high-speed optical detector technologies, super second generation image intensifier, gated image intensifiers and applications, resistive-anode position-sensing photomultiplier tube operational modeling, undersea imaging and target detection with gated image intensifier tubes, image intensifier modules for use with commercially available solid state cameras, specifying the components of an intensified solid state television camera, superconducting IR focal plane arrays, one-inch TV camera tube with very high resolution capacity, CCD-Digicon detector system performance parameters, high-resolution X-ray imaging device, high-output technology microchannel plate, preconditioning of microchannel plate stacks, recent advances in small-pore microchannel plate technology, performance of long-life curved channel microchannel plates, low-noise microchannel plates, development of a quartz envelope heater.
Sub-Camera Calibration of a Penta-Camera
NASA Astrophysics Data System (ADS)
Jacobsen, K.; Gerke, M.
2016-03-01
Penta cameras, consisting of a nadir camera and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras themselves. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and independently adjusted and analyzed with the program system BLUH. Dense matching by Pix4Dmapper provided 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high number of images per object point is concentrated in the block centres, while the inclined images outside the block centres are satisfactorily but not very strongly connected. This leads to very high values of the Student test (T-test) for the finally used additional parameters, or in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration by IGI, but there are still radial symmetric distortions for the inclined cameras exceeding 5 μm, even though they were reported as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding cameras of both blocks show the same trend, but, as usual for block adjustments with self-calibration, they still show significant differences. Based on the very high number of image points, the remaining image residuals can be safely determined by overlaying and averaging the image residuals according to their image coordinates. The size of the systematic image errors not covered by the used additional parameters is in the range of a square mean of 0.1 pixels, corresponding to 0.6 μm. They are not the same for both blocks, but show some similarities for corresponding cameras. In general, a bundle block adjustment with a satisfactory set of additional parameters, checked against remaining systematic errors, is required to exploit the whole geometric potential of the penta camera. Especially for object points on facades, often visible in only two images taken with a limited base length, the correct handling of systematic image errors is important. At least in the analyzed data sets, the self-calibration of sub-cameras by bundle block adjustment suffers from the correlation of the inner to the exterior orientation due to missing crossing flight directions. As usual, the systematic image errors differ from block to block, even without the influence of the correlation with the exterior orientation.
The upgrade of the H.E.S.S. cameras
NASA Astrophysics Data System (ADS)
Giavitto, Gianluca; Ashton, Terry; Balzer, Arnim; Berge, David; Brun, Francois; Chaminade, Thomas; Delagnes, Eric; Fontaine, Gerard; Füßling, Matthias; Giebels, Berrie; Glicenstein, Jean-Francois; Gräber, Tobias; Hinton, Jim; Jahnke, Albert; Klepser, Stefan; Kossatz, Marko; Kretzschmann, Axel; Lefranc, Valentin; Leich, Holger; Lüdecke, Hartmut; Lypova, Iryna; Manigot, Pascal; Marandon, Vincent; Moulin, Emmanuel; Naurois, Mathieu de; Nayman, Patrick; Ohm, Stefan; Penno, Marek; Ross, Duncan; Salek, David; Schade, Markus; Schwab, Thomas; Simoni, Rachel; Stegmann, Christian; Steppa, Constantin; Thornhill, Julian; Toussnel, Francois
2017-12-01
The High Energy Stereoscopic System (HESS) is an array of imaging atmospheric Cherenkov telescopes (IACTs) located in the Khomas highland of Namibia. It was built to detect very high energy (VHE, > 100 GeV) cosmic gamma rays. Since 2003, HESS has discovered the majority of the known astrophysical VHE gamma-ray sources, opening a new observational window on the extreme non-thermal processes at work in our universe. HESS consists of four 12-m diameter Cherenkov telescopes (CT1-4), which started data taking in 2002, and a larger 28-m telescope (CT5), built in 2012, which lowers the energy threshold of the array to 30 GeV. The cameras of CT1-4 are currently undergoing an extensive upgrade, with the goals of reducing their failure rate, reducing their readout dead time and improving the overall performance of the array. The entire camera electronics has been renewed from the ground up, as well as the power, ventilation and pneumatics systems and the control and data acquisition software. Only the PMTs and their HV supplies have been kept from the original cameras. Novel technical solutions have been introduced which will find their way into some of the Cherenkov cameras foreseen for the next-generation Cherenkov Telescope Array (CTA) observatory. In particular, the camera readout system is the first large-scale system based on the analog memory chip NECTAr, which was designed for CTA cameras. The camera control subsystems and the control software framework also pursue an innovative design, exploiting cutting-edge hardware and software solutions that excel in performance, robustness and flexibility. The CT1 camera was upgraded in July 2015 and is currently taking data; CT2-4 were upgraded in fall 2016. Together they will assure continuous operation of HESS at full sensitivity until, and possibly beyond, the advent of CTA. This contribution describes the design, testing, and in-lab and on-site performance of all components of the newly upgraded HESS cameras.
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2011-01-01
A selection of hands-on experiments from different fields of physics, which happen too fast for the eye or video cameras to properly observe and analyse the phenomena, is presented. They are recorded and analysed using modern high speed cameras. Two types of cameras were used: the first were rather inexpensive consumer products such as Casio…
Optimization of digitization procedures in cultural heritage preservation
NASA Astrophysics Data System (ADS)
Martínez, Bea; Mitjà, Carles; Escofet, Jaume
2013-11-01
The digitization of both volumetric and flat objects is nowadays the preferred method for preserving cultural heritage items. High-quality digital files obtained from photographic plates, films and prints, paintings, drawings, gravures, fabrics and sculptures allow not only for wider diffusion and online transmission, but also for the preservation of the original items from future handling. Early digitization procedures used scanners for flat opaque or translucent objects and cameras only for volumetric or highly textured flat materials. The technical obsolescence of high-end scanners and the improvements achieved by professional cameras have resulted in the wide use of cameras with digital backs to digitize any kind of cultural heritage item. Since the lens, the digital back, the software controlling the camera and the digital image processing provide a wide range of possibilities, it is necessary to standardize the methods used in reproduction work so as to preserve the original item properties as faithfully as possible. This work presents an overview of methods used for camera system characterization, as well as best procedures for identifying and counteracting the effects of residual lens aberrations, sensor aliasing, image illumination, color management and image optimization by means of parametric image processing. As a corollary, the work shows some examples of reproduction workflows applied to the digitization of valuable art pieces and glass-plate black-and-white photographic negatives.
Continuous All-Sky Cloud Measurements: Cloud Fraction Analysis Based on a Newly Developed Instrument
NASA Astrophysics Data System (ADS)
Aebi, C.; Groebner, J.; Kaempfer, N.; Vuilleumier, L.
2017-12-01
Clouds play an important role in the climate system and are also a crucial parameter for the Earth's surface energy budget. Ground-based measurements of clouds provide data at high temporal resolution in order to quantify their influence on radiation. The newly developed all-sky cloud camera at PMOD/WRC in Davos (Switzerland), the infrared cloud camera (IRCCAM), is a microbolometer sensitive in the 8-14 μm wavelength range. To obtain all-sky information, the camera is mounted on top of a frame looking down onto a spherical gold-plated mirror. The IRCCAM has been measuring continuously (day and night) with a time resolution of one minute in Davos since September 2015. To assess the performance of the IRCCAM, two different visible all-sky cameras (Mobotix Q24M and Schreder VIS-J1006), which can only operate during daytime, are installed in Davos. All three camera systems use different software for calculating fractional cloud coverage from images. Our study analyzes mainly the fractional cloud coverage of the IRCCAM and compares it with the fractional cloud coverage calculated from the two visible cameras. Preliminary results indicate that 78% of the IRCCAM data are within ±1 octa of the visible cameras and 93% within ±2 octas. An uncertainty of 1-2 octas corresponds to the measurement uncertainty of human observers. Therefore, the IRCCAM shows similar performance in detecting cloud coverage as the visible cameras and human observers, with the advantage that continuous measurements at high temporal resolution are possible.
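As a minimal sketch of how fractional cloud coverage in octas can be derived from a classified all-sky image (simple thresholding is one basic approach; the IRCCAM and visible-camera packages each use their own, more elaborate algorithms, and all names below are ours):

```python
import numpy as np

def cloud_fraction_octas(sky_image, threshold, mask=None):
    """Classify each sky pixel as cloudy (e.g. above a brightness or
    brightness-temperature threshold, since clouds appear warmer than
    clear sky in the thermal IR) and report the cloudy fraction in
    octas (0-8), the unit used by human observers."""
    img = np.asarray(sky_image, dtype=float)
    valid = np.ones(img.shape, dtype=bool) if mask is None else mask
    cloudy = (img > threshold) & valid
    return 8.0 * cloudy.sum() / valid.sum()
```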
NASA Astrophysics Data System (ADS)
Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith
2017-02-01
The"scientific" CMOS (sCMOS) camera architecture fundamentally differs from CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output is generally through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform for all pixels, although quantum efficiency may spatially vary. In CMOS cameras, the charge to voltage conversion is separate for each pixel and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor includes pixel-to-pixel variability in the read noise, electronic gain, offset and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of individual pixel offset, dark current, read noise, linearity, photoresponse non-uniformity and variance distributions of individual pixels for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination conditions, from dark conditions to multiple low light levels between 20 to 1,000 photons / pixel per frame to higher light conditions. We further show that using pixel variance for flat field correction leads to errors in cameras with good factory calibration.
Uncooled radiometric camera performance
NASA Astrophysics Data System (ADS)
Meyer, Bill; Hoelter, T.
1998-07-01
Thermal imaging equipment utilizing microbolometer detectors operating at room temperature has found widespread acceptance in both military and commercial applications. Uncooled camera products are becoming effective solutions for applications currently using traditional photonic infrared sensors. The reduced power consumption and decreased mechanical complexity offered by uncooled cameras have enabled highly reliable, low-cost, hand-held instruments. Initially these instruments displayed only relative temperature differences, which limited their usefulness in applications such as thermography. Radiometrically calibrated microbolometer instruments are now available. The ExplorIR Thermography camera leverages the technology developed for Raytheon Systems Company's first production microbolometer imaging camera, the Sentinel. The ExplorIR camera has a demonstrated temperature measurement accuracy of 4 degrees Celsius or 4% of the measured value (whichever is greater) over scene temperature ranges of minus 20 degrees Celsius to 300 degrees Celsius (minus 20 degrees Celsius to 900 degrees Celsius for extended-range models) and camera environmental temperatures of minus 10 degrees Celsius to 40 degrees Celsius. Direct temperature measurement with high-resolution video imaging creates some unique challenges when using uncooled detectors. A temperature-controlled, field-of-view limiting aperture (cold shield) is not typically included in the small-volume dewars used for uncooled detector packages. The lack of a field-of-view shield allows a significant amount of extraneous radiation from the dewar walls and lens body to affect sensor operation. In addition, the transmission of the germanium lens elements is a function of ambient temperature. The ExplorIR camera design compensates for these environmental effects while maintaining the accuracy and dynamic range required by today's predictive maintenance and condition monitoring markets.
HST High Gain Antennae photographed by Electronic Still Camera
1993-12-04
S61-E-009 (4 Dec 1993) --- This view of one of two High Gain Antennae (HGA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC). The scene was downlinked to ground controllers soon after the Space Shuttle Endeavour caught up with the orbiting telescope 320 miles above Earth. Shown here before grapple, the HST was captured on December 4, 1993, in order to service the telescope. Over a period of five days, four of the seven STS-61 crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Modulated CMOS camera for fluorescence lifetime microscopy.
Chen, Hongtao; Holst, Gerhard; Gratton, Enrico
2015-12-01
Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in constructing such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high-frequency modulated CMOS image sensor, QMFLIM2. Here we tested the camera and provide operational procedures to calibrate it and to improve accuracy using the corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and needed for every camera setting, e.g. modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on the pixel intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post-acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring a large frame and high-speed acquisition. © 2015 Wiley Periodicals, Inc.
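As context for the phasor analysis mentioned above, the sketch below computes per-pixel phasor coordinates from a phase-stepped modulated image stack. It is a textbook formulation under the assumption of K evenly spaced phase steps; SimFCS's internals and the camera-specific calibrations discussed in the abstract are not reproduced:

```python
import numpy as np

def phasor_flim(stack, freq_hz):
    """Phasor analysis of a phase-stepped FD-FLIM stack.

    stack : (K, H, W) images at K evenly spaced modulation phases,
            i.e. per pixel I_k = A + B*cos(phi_k - phi).
    Returns phasor coordinates (g, s) and the single-exponential
    phase lifetime per pixel."""
    k = stack.shape[0]
    phi_k = 2.0 * np.pi * np.arange(k) / k
    dc = stack.mean(axis=0)                                    # A
    re = 2.0 / k * np.tensordot(np.cos(phi_k), stack, axes=1)  # B*cos(phi)
    im = 2.0 / k * np.tensordot(np.sin(phi_k), stack, axes=1)  # B*sin(phi)
    g, s = re / dc, im / dc        # g = m*cos(phi), s = m*sin(phi)
    omega = 2.0 * np.pi * freq_hz
    tau_phase = np.arctan2(s, g) / omega
    return g, s, tau_phase
```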
Suicide by pedestrian versus motor vehicle: a case report.
Rudy, Bruce S
2012-09-01
Suicide is the deliberate act of ending one's own life. Historically, men have committed suicide more frequently than women; however, rates have increased for women worldwide in recent years. Transportation injuries have been widely reported as a means of suicide, for example by operators of motor vehicles or jumpers into the path of trains. Few definitive reports exist of pedestrians deliberately jumping into the path of a motor vehicle. Security camera footage shows a pedestrian deliberately entering the path of a moving vehicle, resulting in death from multiple blunt force trauma.
Full-Frame Reference for Test Photo of Moon
NASA Technical Reports Server (NTRS)
2005-01-01
This pair of views shows how little of the full image frame was taken up by the Moon in test images taken Sept. 8, 2005, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. The Mars-bound camera imaged Earth's Moon from a distance of about 10 million kilometers (6 million miles) away -- 26 times the distance between Earth and the Moon -- as part of an activity to test and calibrate the camera. The images are very significant because they show that the Mars Reconnaissance Orbiter spacecraft and this camera can properly operate together to collect very high-resolution images of Mars. The target must move through the camera's telescope view in just the right direction and speed to acquire a proper image. The day's test images also demonstrate that the focus mechanism works properly with the telescope to produce sharp images. Out of the 20,000-pixel-by-6,000-pixel full frame, the Moon's diameter is about 340 pixels, if the full Moon could be seen. The illuminated crescent is about 60 pixels wide, and the resolution is about 10 kilometers (6 miles) per pixel. At Mars, the entire image region will be filled with high-resolution information. The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across. The Mars Reconnaissance Orbiter mission is managed by NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, for the NASA Science Mission Directorate. Lockheed Martin Space Systems, Denver, prime contractor for the project, built the spacecraft. Ball Aerospace & Technologies Corp., Boulder, Colo., built the High Resolution Imaging Science Experiment instrument for the University of Arizona, Tucson, to provide to the mission. The HiRISE Operations Center at the University of Arizona processes images from the camera.
Photogrammetry System and Method for Determining Relative Motion Between Two Bodies
NASA Technical Reports Server (NTRS)
Miller, Samuel A. (Inventor); Severance, Kurt (Inventor)
2014-01-01
A photogrammetry system and method provide for determining the relative position between two objects. The system utilizes one or more imaging devices, such as high speed cameras, that are mounted on a first body, and three or more photogrammetry targets of a known location on a second body. The system and method can be utilized with cameras having fish-eye, hyperbolic, omnidirectional, or other lenses. The system and method do not require overlapping fields-of-view if two or more cameras are utilized. The system and method derive relative orientation by equally weighting information from an arbitrary number of heterogeneous cameras, all with non-overlapping fields-of-view. Furthermore, the system can make the measurements with arbitrary wide-angle lenses on the cameras.
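The underlying geometry, recovering a camera-to-body pose from three or more targets of known position, is the classic Perspective-n-Point problem. The sketch below solves it with OpenCV's standard pinhole solver as a stand-in (the patent's method additionally handles fisheye and omnidirectional lenses and multiple non-overlapping cameras, which this sketch does not; all numeric values are made up for illustration):

```python
import numpy as np
import cv2  # OpenCV (assumed available)

# Hypothetical data: target coordinates on the second body (metres)
# and their detected 2D image positions in one camera on the first body.
object_pts = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [0.0, 0.5, 0.0],
                       [0.0, 0.0, 0.5], [0.5, 0.5, 0.0], [0.5, 0.0, 0.5]],
                      dtype=np.float64)
image_pts = np.array([[320.1, 240.2], [410.7, 238.9], [318.4, 150.3],
                      [322.6, 242.8], [409.2, 149.5], [412.3, 241.1]],
                     dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0],        # pinhole intrinsics (pixels)
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Perspective-n-Point: rotation and translation taking body-2
# coordinates into this camera's frame.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)                # rotation vector -> 3x3 matrix
```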
Printed circuit board for a CCD camera head
Conder, Alan D.
2002-01-01
A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, versatile, and capable of operating both in and out of a vacuum environment. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.
High resolution bone mineral densitometry with a gamma camera
NASA Technical Reports Server (NTRS)
Leblanc, A.; Evans, H.; Jhingran, S.; Johnson, P.
1983-01-01
A technique by which the regional distribution of bone mineral can be determined in bone samples from small animals is described. The technique employs an Anger camera interfaced to a medical computer. High resolution imaging is possible by producing magnified images of the bone samples. Regional densitometry is illustrated with femurs from oophorectomised animals exhibiting bone mineral loss.
Using a High-Speed Camera to Measure the Speed of Sound
ERIC Educational Resources Information Center
Hack, William Nathan; Baird, William H.
2012-01-01
The speed of sound is a physical property that can be measured easily in the lab. However, finding an inexpensive and intuitive way for students to determine this speed has been more involved. The introduction of affordable consumer-grade high-speed cameras (such as the Exilim EX-FC100) makes conceptually simple experiments feasible. Since the…
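The measurement itself reduces to distance over time, with time counted in camera frames. A sketch under assumed names and illustrative values (the article's exact setup is not reproduced here):

```python
def speed_of_sound(distance_m, n_frames, fps):
    """Estimate the speed of sound from high-speed footage: count the
    frames between a visible trigger event (e.g. a popped balloon) and
    the response a known distance away, then v = d / (n / fps)."""
    return distance_m * fps / n_frames

# e.g. a response 3.43 m away seen 10 frames after the trigger at
# 1000 fps gives 343 m/s, the expected room-temperature value.
print(speed_of_sound(3.43, 10, 1000))
```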
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2012-01-01
The recent introduction of inexpensive high-speed cameras offers a new experimental approach to many simple but fast-occurring events in physics. In this paper, the authors present two simple demonstration experiments recorded with high-speed cameras in the fields of gas dynamics and thermal physics. The experiments feature vapour pressure effects…
The World in Slow Motion: Using a High-Speed Camera in a Physics Workshop
ERIC Educational Resources Information Center
Dewanto, Andreas; Lim, Geok Quee; Kuang, Jianhong; Zhang, Jinfeng; Yeo, Ye
2012-01-01
We present a physics workshop for college students to investigate various physical phenomena using high-speed cameras. The technical specifications required, the step-by-step instructions, as well as the practical limitations of the workshop, are discussed. This workshop is also intended to be a novel way to promote physics to Generation-Y…
Positron emission particle tracking using a modular positron camera
NASA Astrophysics Data System (ADS)
Parker, D. J.; Leadbeater, T. W.; Fan, X.; Hausard, M. N.; Ingram, A.; Yang, Z.
2009-06-01
The technique of positron emission particle tracking (PEPT), developed at Birmingham in the early 1990s, enables a radioactively labelled tracer particle to be accurately tracked as it moves between the detectors of a "positron camera". In 1999 the original Birmingham positron camera, which consisted of a pair of MWPCs, was replaced by a system comprising two NaI(Tl) gamma camera heads operating in coincidence. This system has been successfully used for PEPT studies of a wide range of granular and fluid flow processes. More recently a modular positron camera has been developed using a number of the bismuth germanate (BGO) block detectors from standard PET scanners (CTI ECAT 930 and 950 series). This camera has flexible geometry, is transportable, and is capable of delivering high data rates. This paper presents simple models of its performance, and initial experience of its use in a range of geometries and applications.
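The heart of PEPT is triangulating the tracer from many coincidence lines while rejecting corrupt (scattered) events. The sketch below shows a simple least-squares version of that idea, iteratively discarding the worst-fitting lines; it is our own illustration in the spirit of the Birmingham algorithm, with unit direction vectors assumed:

```python
import numpy as np

def locate_tracer(points, dirs, iterations=3, keep=0.7):
    """Least-squares PEPT location sketch.  Coincidence line i passes
    through points[i] with *unit* direction dirs[i].  The position
    minimising the summed squared distance to all lines solves a 3x3
    linear system; the worst-fitting fraction of lines is then
    discarded and the fit repeated."""
    points = np.asarray(points, dtype=float)
    dirs = np.asarray(dirs, dtype=float)
    x = None
    for _ in range(iterations):
        proj = np.eye(3)[None] - dirs[:, :, None] * dirs[:, None, :]  # I - d d^T
        a = proj.sum(axis=0)
        b = np.einsum('nij,nj->i', proj, points)
        x = np.linalg.solve(a, b)                     # current best position
        resid = np.einsum('nij,nj->ni', proj, x[None] - points)
        d2 = (resid ** 2).sum(axis=1)                 # squared line distances
        order = np.argsort(d2)[: max(3, int(keep * len(d2)))]
        points, dirs = points[order], dirs[order]
    return x
```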
Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid
2016-06-13
Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
NectarCAM, a camera for the medium sized telescopes of the Cherenkov telescope array
NASA Astrophysics Data System (ADS)
Glicenstein, J.-F.; Shayduk, M.
2017-01-01
NectarCAM is a camera proposed for the medium-sized telescopes of the Cherenkov Telescope Array (CTA), which cover the core energy range of 100 GeV to 30 TeV. It has a modular design and is based on the NECTAr chip, at the heart of which is a GHz-sampling switched capacitor array and a 12-bit analog-to-digital converter. The camera will be equipped with 265 seven-photomultiplier modules covering a field of view of 8 degrees. Each module includes photomultiplier bases, a high voltage supply, a pre-amplifier, trigger, readout and an Ethernet transceiver. The recorded events last between a few nanoseconds and tens of nanoseconds. The expected performance of the camera is discussed. Prototypes of NectarCAM components have been built to validate the design. Preliminary results from a 19-module mini-camera are presented, as well as future plans for building and testing a full-size camera.
Neuromorphic Event-Based 3D Pose Estimation
Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.
2016-01-01
Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30-60 Hz can rarely be processed in real time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows the problem to be solved in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547
CMOS Camera Array With Onboard Memory
NASA Technical Reports Server (NTRS)
Gat, Nahum
2009-01-01
A compact CMOS (complementary metal oxide semiconductor) camera system has been developed with high resolution (1.3 megapixels), a USB (universal serial bus) 2.0 interface, and onboard memory. Exposure times and other operating parameters are sent from a control PC via the USB port. Data from the camera can be received via the USB port, and the interface allows for simple control and data capture through a laptop computer.
Solid State Television Camera (CID)
NASA Technical Reports Server (NTRS)
Steele, D. W.; Green, W. T.
1976-01-01
The design, development, and testing of a charge injection device (CID) camera using a 244x248 element array are described. A number of video signal processing functions are included which maximize the output video dynamic range while retaining the inherently good resolution response of the CID. Some of the unique features of the camera are: low-light-level performance, high S/N ratio, antiblooming, low geometric distortion, sequential scanning and AGC.
Photogrammetry of Apollo 15 photography, part C
NASA Technical Reports Server (NTRS)
Wu, S. S. C.; Schafer, F. J.; Jordan, R.; Nakata, G. M.; Derick, J. L.
1972-01-01
In the Apollo 15 mission, a mapping camera system, a 61-cm optical-bar high-resolution panoramic camera, and a laser altimeter were used. The panoramic camera is described; it has several distortion sources, such as the cylindrical shape of the negative film surface, the scanning action of the lens, the image motion compensator, and the spacecraft motion. Film products were processed on a specifically designed analytical plotter.
Return Beam Vidicon (RBV) panchromatic two-camera subsystem for LANDSAT-C
NASA Technical Reports Server (NTRS)
1977-01-01
A two-inch Return Beam Vidicon (RBV) panchromatic two-camera subsystem, together with spare components, was designed and fabricated for the LANDSAT-C satellite; the basis for the design was the LANDSAT 1 and 2 RBV camera system. The purpose of the RBV subsystem is to acquire high-resolution pictures of the Earth for mapping applications. Where possible, residual LANDSAT 1 and 2 equipment was utilized.
3D bubble reconstruction using multiple cameras and space carving method
NASA Astrophysics Data System (ADS)
Fu, Yucheng; Liu, Yang
2018-07-01
An accurate measurement of bubble shape and size has significant value in understanding the behavior of bubbles in many engineering applications. Past studies usually use one or two cameras to estimate bubble volume, surface area and other parameters; the 3D bubble shape and rotation angle are generally not available in these studies. To overcome this challenge and obtain more detailed information on individual bubbles, a 3D imaging system consisting of four high-speed cameras is developed in this paper, and the space carving method is used to reconstruct the 3D bubble shape from the high-speed images recorded from different view angles. The proposed method can reconstruct the bubble surface with minimal assumptions. A benchmarking test is performed in a 3 cm × 1 cm rectangular channel with stagnant water. The results show that the newly proposed method can measure the bubble volume with an error of less than 2% compared with the syringe reading; the conventional two-camera system has an error of around 10%, and the one-camera system has an error greater than 25%. The visualization of a rising 3D bubble demonstrates the wall's influence on bubble rotation angle and aspect ratio, which also explains the large error in the single-camera measurement.
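For readers unfamiliar with space carving, the sketch below shows its core test in the simplest form: a voxel survives only if it projects inside the bubble silhouette in every calibrated view. Projection matrices and silhouette masks are assumed given; this is an illustration, not the authors' code:

```python
import numpy as np

def carve(voxels, cameras, silhouettes):
    """Space carving sketch.

    voxels      : (N, 3) candidate voxel centres.
    cameras     : list of 3x4 projection matrices P = K[R|t].
    silhouettes : list of boolean masks (H, W), True inside the bubble.
    Returns the voxels consistent with every silhouette."""
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])     # (N, 4)
    for P, sil in zip(cameras, silhouettes):
        uvw = homog @ P.T                                      # (N, 3)
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)        # column
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)        # row
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        keep &= inside                          # off-image voxels are carved
        keep[inside] &= sil[v[inside], u[inside]]
    return voxels[keep]
```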
Stratified charge rotary engine - Internal flow studies at the MSU engine research laboratory
NASA Technical Reports Server (NTRS)
Hamady, F.; Kosterman, J.; Chouinard, E.; Somerton, C.; Schock, H.; Chun, K.; Hicks, Y.
1989-01-01
High-speed visualization and laser Doppler velocimetry (LDV) systems, consisting of a 40-watt copper vapor laser, mirrors, cylindrical lenses, a high-speed camera, a synchronization timing system, and a particle generator, were developed for the study of fuel spray-air mixing flow characteristics within the combustion chamber of a motored rotary engine. The laser beam is focused down to a sheet approximately 1 mm thick that passes through the combustion chamber and illuminates smoke particles entrained in the intake air. The light scattered off the particles is recorded by a high-speed rotating-prism camera, and movies are made showing the air flow within the combustion chamber. The results of a movie showing the development of a high-speed (100 Hz), high-pressure (68.94 MPa, 10,000 psi) fuel jet are also discussed. The visualization system is synchronized so that a pulse generated by the camera triggers the laser's thyratron.
Using a high-definition stereoscopic video system to teach microscopic surgery
NASA Astrophysics Data System (ADS)
Ilgner, Justus; Park, Jonas Jae-Hyun; Labbé, Daniel; Westhofen, Martin
2007-02-01
Introduction: While there is an increasing demand for minimally invasive operative techniques in ear, nose and throat surgery, these operations are difficult to learn for junior doctors and demanding to supervise for experienced surgeons. The motivation for this study was to integrate high-definition (HD) stereoscopic video monitoring into microscopic surgery in order to facilitate teaching interaction between senior and junior surgeons. Material and methods: We attached a 1280x1024 HD stereo camera (TrueVision Systems Inc., Santa Barbara, CA, USA) to an operating microscope (Zeiss ProMagis, Zeiss Co., Oberkochen, Germany), whose images were processed online by a PC workstation with dual Intel® Xeon® CPUs (Intel Co., Santa Clara, CA). The live image was displayed by two LCD projectors at 1280x768 pixels on a 1.25 m rear-projection screen through polarizing filters. While the junior surgeon performed the surgical procedure based on the displayed stereoscopic image, all other participants (senior surgeon, nurse and medical students) shared the same stereoscopic image from the screen. Results: With the basic setup performed only once on the day before surgery, fine adjustments required about 10 minutes extra during the operating schedule, which fitted into the interval between patients and thus did not prolong operating times. As all relevant features of the operative field were demonstrated on one large screen, four major effects were obtained: A) stereoscopy facilitated orientation for the junior surgeon as well as for medical students; B) the stereoscopic image served as an unequivocal guide for the senior surgeon to demonstrate the next surgical steps to the junior colleague; C) the theatre nurse shared the same image, anticipating the next instruments needed; D) medical students instantly shared the information given by the staff and the image, avoiding the need for an extra teaching session. Conclusion: High-definition stereoscopy has the potential to compress the learning curve for undergraduate as well as postgraduate medical professionals in minimally invasive surgery. Further studies will focus on the long-term effect on operative training as well as on post-processing of HD stereoscopic video content for off-line interactive medical education.
NASA Astrophysics Data System (ADS)
Rossi, Marco; Pierron, Fabrice; Forquin, Pascal
2014-02-01
Ultra-high speed (UHS) cameras allow images to be acquired at up to about 1 million frames per second at a full spatial resolution of the order of 1 Mpixel. Different technologies are available nowadays to achieve this performance; an interesting one is the so-called in situ storage image sensor architecture, where the image storage is incorporated into the sensor chip. Such an architecture is all solid state and contains no movable devices, unlike, for instance, rotating-mirror UHS cameras. One of the disadvantages of this system is the low fill factor (around 76% in the vertical direction and 14% in the horizontal direction), since most of the space on the sensor is occupied by memory. This peculiarity introduces a series of systematic errors when the camera is used to perform full-field strain measurements. The aim of this paper is to develop an experimental procedure to thoroughly characterize the performance of such cameras in full-field deformation measurement and identify the best operating conditions to minimize the measurement errors. A series of tests was performed on a Shimadzu HPV-1 UHS camera, first using uniform scenes and then grids under rigid movements, with the grid method used as the full-field optical measurement technique. From these tests it has been possible to characterize the camera behaviour and use this information to improve actual measurements.
Yang, Hualei; Yang, Xi; Heskel, Mary; Sun, Shucun; Tang, Jianwu
2017-04-28
Changes in plant phenology affect the carbon flux of terrestrial forest ecosystems due to the link between growing season length and vegetation productivity. Digital camera imagery, which can be acquired frequently, has been used to monitor seasonal and annual changes in forest canopy phenology and track critical phenological events. However, quantitative assessment of the structural and biochemical controls of the phenological patterns in camera images has rarely been done. In this study, we used an NDVI (Normalized Difference Vegetation Index) camera to monitor daily variations of vegetation reflectance in the visible and near-infrared (NIR) bands at high spatial and temporal resolution, and found that the camera-based NDVI (camera-NDVI) agreed well with the leaf expansion process measured by independent manual observations at Harvard Forest, Massachusetts, USA. We also measured the seasonality of canopy structural (leaf area index, LAI) and biochemical properties (leaf chlorophyll and nitrogen content). We found significant linear relationships between camera-NDVI and leaf chlorophyll concentration, and between camera-NDVI and leaf nitrogen content, though weaker relationships between camera-NDVI and LAI. Therefore, we recommend ground-based camera-NDVI as a powerful tool for long-term, near-surface observations to monitor canopy development and to estimate leaf chlorophyll, nitrogen status, and LAI.
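The camera-NDVI itself is a simple per-pixel band ratio, and the reported relationships are ordinary linear fits. A minimal sketch, with entirely hypothetical numbers standing in for the seasonal series:

```python
import numpy as np

def camera_ndvi(nir, red):
    """Per-pixel NDVI from co-registered NIR and red (visible) frames."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)   # small eps avoids divide-by-zero

# illustrative seasonal series: mean canopy NDVI vs. leaf chlorophyll
ndvi = np.array([0.32, 0.45, 0.58, 0.66, 0.71])   # hypothetical values
chl  = np.array([12.0, 21.0, 30.5, 38.0, 43.5])   # ug/cm^2, hypothetical
slope, intercept = np.polyfit(ndvi, chl, 1)
r = np.corrcoef(ndvi, chl)[0, 1]
print(f"chl = {slope:.1f} * NDVI + {intercept:.1f},  r^2 = {r**2:.3f}")
```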
Koulikov, Victoria; Lerman, Hedva; Kesler, Mikhail; Even-Sapir, Einat
2015-12-01
Cadmium zinc telluride (CZT) solid-state detectors have recently been introduced in the field of nuclear medicine for cardiology and breast imaging. The aim of the current study was to evaluate the performance of the novel CZT detectors compared with that of routine NaI(Tl) detectors in bone scintigraphy. A dual-headed CZT-based camera originally dedicated to breast imaging was used and, in view of the limited size of its detectors, the hands were chosen as the organ for assessment. This is a clinical study. Fifty-eight consecutive patients (116 hands in total) referred for bone scan for suspected hand pathology gave their informed consent to undergo two acquisitions, using the routine camera and the CZT-based camera. The CZT acquisition was performed both at full dose and full acquisition time (FD CZT) and at reduced dose and short acquisition time (RD CZT), so three image sets were available for analysis. Data analysis included comparing the detection of hot lesions and the identification of the metacarpophalangeal, proximal interphalangeal, and distal interphalangeal joints. A total of 69 hot lesions were detected on the CZT image sets; of these, 61 were identified as focal sites of uptake on NaI(Tl) data. On FD CZT data, 385 joints were identified compared to 168 on NaI(Tl) data (p < 0.001). There was no statistically significant difference in delineation of joints between FD and RD CZT data, as the latter identified 383 joints. Bone scintigraphy using a CZT-based gamma camera is associated with improved lesion detection and anatomic definition. The superior physical characteristics of this technology suggest a potential reduction in administered dose and/or acquisition time without compromising image quality.
NASA Astrophysics Data System (ADS)
Soliz, Peter; Nemeth, Sheila C.; Barriga, E. Simon; Harding, Simon P.; Lewallen, Susan; Taylor, Terrie E.; MacCormick, Ian J.; Joshi, Vinayak S.
2016-03-01
The purpose of this study was to test the suitability of three available camera technologies (desktop, portable, and iPhone-based) for imaging comatose children who presented with clinical symptoms of malaria. Ultimately, the results of the project would form the basis for the design of a future camera to screen for malaria retinopathy (MR) in a resource-challenged environment. The desktop, portable, and iPhone-based cameras were represented by the Topcon, Pictor Plus, and Peek cameras, respectively. These cameras were tested on N=23 children presenting with symptoms of cerebral malaria (CM) at a malaria clinic, Queen Elizabeth Teaching Hospital in Malawi, Africa. Each patient was dilated for a binocular indirect ophthalmoscopy (BIO) exam by an ophthalmologist, followed by imaging with all three cameras. Each of the cases was graded according to an internationally established protocol and compared to the BIO as the clinical ground truth. The reader used three principal retinal lesions as markers for MR: hemorrhages, retinal whitening, and vessel discoloration. The study found that the mid-priced Pictor Plus hand-held camera performed considerably better than the lower-priced mobile-phone-based camera, and slightly better than the higher-priced desktop camera. When comparing the readings of digital images against the clinical reference standard (BIO), the Pictor Plus camera had sensitivity and specificity for MR of 100% and 87%, respectively. This compares to a sensitivity and specificity of 87% and 75% for the iPhone-based camera and 100% and 75% for the desktop camera. The drawback of all the cameras was their limited field of view, which did not allow a complete view of the periphery where vessel discoloration occurs most frequently. The consequence was that vessel discoloration was not addressed in this study. None of the cameras offered real-time image quality assessment to ensure high-quality images that would afford the best possible opportunity for reading by a remotely located specialist.
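For reference, sensitivity and specificity figures of this kind follow directly from confusion-matrix counts against the BIO ground truth. A minimal sketch; the positive/negative split below is hypothetical, chosen only so the rates reproduce the reported 100%/87%:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from counts against a reference standard."""
    return tp / (tp + fn), tn / (tn + fp)

# e.g., with 15 MR-positive and 8 MR-negative cases (hypothetical split),
# 100%/87% corresponds to tp=15, fn=0 and tn=7, fp=1:
print(sens_spec(tp=15, fn=0, tn=7, fp=1))   # -> (1.0, 0.875)
```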
NASA Astrophysics Data System (ADS)
Close, Laird M.; Males, Jared R.; Kopon, Derek A.; Gasho, Victor; Follette, Katherine B.; Hinz, Phil; Morzinski, Katie; Uomoto, Alan; Hare, Tyson; Riccardi, Armando; Esposito, Simone; Puglisi, Alfio; Pinna, Enrico; Busoni, Lorenzo; Arcidiacono, Carmelo; Xompero, Marco; Briguglio, Runa; Quiros-Pacheco, Fernando; Argomedo, Javier
2012-07-01
The heart of the 6.5 m Magellan AO system (MagAO) is a 585-actuator adaptive secondary mirror (ASM) with <1 ms response times (0.7 ms typically). This adaptive secondary will allow low-emissivity and high-contrast AO science. We fabricated a high-order (561 mode) pyramid wavefront sensor (similar to that now successfully used at the Large Binocular Telescope). The relatively high actuator count (and small projected ~23 cm pitch) allows moderate Strehls to be obtained by MagAO in the "visible" (0.63-1.05 μm). To take advantage of this we have fabricated an AO CCD science camera called "VisAO". Complete "end-to-end" closed-loop lab tests of MagAO achieve a solid, broad-band, 37% Strehl (122 nm rms) at 0.76 μm (i') with the VisAO camera in 0.8" simulated seeing (13 cm r0 at V) with fast 33 mph winds and a 40 m L0, locked on an R=8 mag artificial star. These relatively high visible-wavelength Strehls are enabled by our powerful combination of a next-generation ASM and a pyramid WFS with 400 controlled modes and 1000 Hz sample speeds (similar to that used successfully on-sky at the LBT). Currently only the VisAO science camera is used for lab testing of MagAO, but this high level of measured performance (122 nm rms) promises even higher Strehls with our IR science cameras. On bright (R=8 mag) stars we should achieve very high Strehls (>70% at H) in the IR with the existing MagAO Clio2 (λ=1-5.3 μm) science camera/coronagraph, or even higher (~98% Strehl) in the mid-IR (8-26 μm) with the existing BLINC/MIRAC4 science camera in the future. To eliminate non-common-path vibrations, dispersions, and optical errors, the VisAO science camera is fed by a common-path advanced triplet ADC and is piggy-backed on the pyramid WFS optical board itself. Also, a high-speed shutter can be used to block periods of poor correction. The entire system passed CDR in June 2009, and we finished the closed-loop system-level testing phase in December 2011. Final system acceptance ("pre-ship" review) was passed in February 2012. In May 2012 the entire AO system was successfully shipped to Chile and fully tested/aligned. It is now in storage in the Magellan telescope clean room in anticipation of "first light", scheduled for December 2012. An overview of the design, attributes, performance, and schedule for the Magellan AO system and its two science cameras is briefly presented here.
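The quoted Strehl and wavefront-error numbers are mutually consistent under the extended Maréchal approximation S ≈ exp[-(2πσ/λ)²]. A quick check; the H-band wavelength of 1650 nm is an assumed value, not from the abstract:

```python
import math

def strehl_marechal(sigma_nm, wavelength_nm):
    """Extended Marechal approximation: S = exp(-(2*pi*sigma/lambda)^2)."""
    phi_rms = 2 * math.pi * sigma_nm / wavelength_nm   # rms phase in radians
    return math.exp(-phi_rms ** 2)

print(strehl_marechal(122, 760))    # ~0.36, matching the reported ~37% at i'
print(strehl_marechal(122, 1650))   # ~0.81 at H band, consistent with ">70%"
```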
Miniature Wide-Angle Lens for Small-Pixel Electronic Camera
NASA Technical Reports Server (NTRS)
Mouroulis, Pantazis; Blazejewski, Edward
2009-01-01
A proposed wide-angle lens is shown that would be especially well suited for an electronic camera in which the focal plane is occupied by an image sensor with small pixels. The design of the lens is intended to satisfy requirements for compactness, high image quality, and reasonably low cost, while addressing issues peculiar to the operation of small-pixel image sensors. Hence, this design is expected to enable the development of a new generation of compact, high-performance electronic cameras. The lens example shown has a 60-degree field of view and a relative aperture (f-number) of 3.2. The main issues affecting the design are also shown.
C-RED one: ultra-high speed wavefront sensing in the infrared made possible
NASA Astrophysics Data System (ADS)
Gach, J.-L.; Feautrier, Philippe; Stadler, Eric; Greffe, Timothee; Clop, Fabien; Lemarchand, Stéphane; Carmignani, Thomas; Boutolleau, David; Baker, Ian
2016-07-01
First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with sub-electron readout noise. This breakthrough has been made possible by the use of an e-APD infrared focal plane array, a genuinely disruptive technology in imaging. We show the performance of the camera and its main features, and compare them to other high-performance wavefront-sensing cameras such as OCAM2 in the visible and in the infrared. The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 673944.
NASA Technical Reports Server (NTRS)
Lane, Marc; Hsieh, Cheng; Adams, Lloyd
1989-01-01
In undertaking the design of a 2000-mm focal length camera for the Mariner Mark II series of spacecraft, JPL sought novel materials with the requisite dimensional and thermal stability, outgassing and corrosion resistance, low mass, high stiffness, and moderate cost. Metal-matrix composites and Al-Li alloys have, in addition to excellent mechanical properties and low density, a suitably low coefficient of thermal expansion, high specific stiffness, and good electrical conductivity. The greatest single obstacle to application of these materials to camera structure design is noted to have been the lack of information regarding long-term dimensional stability.
New information technology tools for a medical command system for mass decontamination.
Fuse, Akira; Okumura, Tetsu; Hagiwara, Jun; Tanabe, Tomohide; Fukuda, Reo; Masuno, Tomohiko; Mimura, Seiji; Yamamoto, Kaname; Yokota, Hiroyuki
2013-06-01
In a mass decontamination during a nuclear, biological, or chemical (NBC) response, the capability to command, control, and communicate is crucial for the proper flow of casualties at the scene and their subsequent evacuation to definitive medical facilities. Information Technology (IT) tools can be used to strengthen medical control, command, and communication during such a response. Novel IT tools comprise a vehicle-based, remote video camera and communication network systems. During an on-site verification event, an image from a remote video camera system attached to the personal protective garment of a medical responder working in the warm zone was transmitted to the on-site Medical Commander for aid in decision making. Similarly, a communication network system was used for personnel at the following points: (1) the on-site Medical Headquarters; (2) the decontamination hot zone; (3) an on-site coordination office; and (4) a remote medical headquarters of a local government office. A specially equipped, dedicated vehicle was used for the on-site medical headquarters, and facilitated the coordination with other agencies. The use of these IT tools proved effective in assisting with the medical command and control of medical resources and patient transport decisions during a mass-decontamination exercise, but improvements are required to overcome transmission delays and camera direction settings, as well as network limitations in certain areas.
Camera Ready to Install on Mars Reconnaissance Orbiter
2005-01-07
A telescopic camera called the High Resolution Imaging Science Experiment, or HiRISE (right), was installed onto the main structure of NASA's Mars Reconnaissance Orbiter (left) on Dec. 11, 2004, at Lockheed Martin Space Systems, Denver.
A time-resolved image sensor for tubeless streak cameras
NASA Astrophysics Data System (ADS)
Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji
2014-03-01
This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, it requires high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows us to create and provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel was designed and implemented using 0.11 µm CMOS image sensor technology. The image array has 30 (vertical) × 128 (memory length) pixels with a pixel pitch of 22.4 µm.
NASA Technical Reports Server (NTRS)
Almeida, Eduardo DeBrito
2012-01-01
This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.
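The calibration step described here is commonly performed with a planar-target routine. A minimal OpenCV sketch under assumed conditions; the chessboard geometry, square size, and image folder are illustrative, not the report's actual setup:

```python
import glob
import cv2
import numpy as np

# Assumed chessboard: 9x6 inner corners, 25 mm squares (hypothetical).
PATTERN = (9, 6)
SQUARE_MM = 25.0
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):            # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(               # refine to sub-pixel accuracy
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Solve for intrinsics K and distortion; per-view rvecs/tvecs give camera pose.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS (px):", rms)
print("intrinsics K:\n", K)
```

The per-view pose estimates (rvecs, tvecs) are the starting point for the pose-refinement stage the report mentions.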
Inexpensive Neutron Imaging Cameras Using CCDs for Astronomy
NASA Astrophysics Data System (ADS)
Hewat, A. W.
We have developed inexpensive neutron imaging cameras using CCDs originally designed for amateur astronomical observation. The low-light, high-resolution requirements of such CCDs are similar to those for neutron imaging, except that noise as well as cost is reduced by using slower read-out electronics. For example, we use the same 2048x2048 pixel Kodak KAI-4022 CCD as used in the high-performance PCO-2000 CCD camera, but our electronics requires ~5 s for full-frame read-out, ten times slower than the PCO-2000. Since neutron exposures also require several seconds, this is not seen as a serious disadvantage for many applications. If higher frame rates are needed, the CCD unit on our camera can easily be swapped for a faster readout detector with similar chip size and resolution, such as the PCO-2000 or the sCMOS PCO.edge 4.2.
Mach-Zehnder based optical marker/comb generator for streak camera calibration
Miller, Edward Kirk
2015-03-03
This disclosure is directed to a method and apparatus for generating marker and comb indicia in an optical environment using a Mach-Zehnder (M-Z) modulator. High-speed recording devices are configured to record image or other data defining a high-speed event. To calibrate and establish a time reference, the markers or combs are indicia which serve as timing pulses (markers) or a constant-frequency train of optical pulses (comb) to be imaged on a streak camera for accurate time-based calibration and time reference. The system includes a camera, an optic signal generator which provides an optic signal to an M-Z modulator, and biasing and modulation signal generators configured to provide input to the M-Z modulator. An optical reference signal is provided to the M-Z modulator. The M-Z modulator modulates the reference signal to a higher-frequency optical signal which is output through a fiber-coupled link to the streak camera.
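The comb generation rests on the standard intensity transfer function of an ideal M-Z modulator, P_out = P_in · cos²(π(V + V_bias)/(2Vπ)). A minimal sketch of how a sinusoidal drive biased at the transmission null yields an optical pulse train at twice the drive frequency; the Vπ and drive-frequency values are illustrative, not from the patent:

```python
import numpy as np

def mz_output(v_drive, v_pi, v_bias, p_in=1.0):
    """Ideal Mach-Zehnder intensity transfer function."""
    return p_in * np.cos(np.pi / 2 * (v_drive + v_bias) / v_pi) ** 2

v_pi, f_mod = 4.0, 5e9                        # volts, Hz (illustrative values)
t = np.linspace(0, 1e-9, 2000)
v = v_pi * np.sin(2 * np.pi * f_mod * t)      # full-swing sinusoidal drive
p = mz_output(v, v_pi, v_bias=v_pi)           # biased at the transmission null
# p is now a comb-like train of short optical pulses at 2*f_mod,
# usable as constant-frequency timing fiducials on the streak record.
```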
A Normal Incidence X-ray Telescope (NIXT) sounding rocket payload
NASA Technical Reports Server (NTRS)
Golub, Leon
1989-01-01
Work on the High Resolution X-ray (HRX) Detector Program is described. In the laboratory and flight programs, multiple copies of a general purpose set of electronics which control the camera, signal processing and data acquisition, were constructed. A typical system consists of a phosphor convertor, image intensifier, a fiber optics coupler, a charge coupled device (CCD) readout, and a set of camera, signal processing and memory electronics. An initial rocket detector prototype camera was tested in flight and performed perfectly. An advanced prototype detector system was incorporated on another rocket flight, in which a high resolution heterojunction vidicon tube was used as the readout device for the H(alpha) telescope. The camera electronics for this tube were built in-house and included in the flight electronics. Performance of this detector system was 100 percent satisfactory. The laboratory X-ray system for operation on the ground is also described.
NASA Astrophysics Data System (ADS)
Brauchle, Joerg; Berger, Ralf; Hein, Daniel; Bucher, Tilman
2017-04-01
The DLR Institute of Optical Sensor Systems has developed the MACS-Himalaya, a custom-built Modular Aerial Camera System specifically designed for the extreme geometric (steep slopes) and radiometric (high contrast) conditions of high mountain areas. It has an overall field of view of 116° across-track, consisting of a nadir and two oblique-looking RGB camera heads and a fourth nadir-looking near-infrared camera. This design provides the capability to fly along narrow valleys and simultaneously cover ground and steep valley-flank topography with similar ground resolution. To compensate for extreme contrasts between fresh snow and dark shadows at high altitudes, a High Dynamic Range (HDR) mode was implemented, which typically takes a sequence of 3 images with graded integration times, each covering 12-bit radiometric depth, resulting in a total dynamic range of 15-16 bit. This enables dense image matching and interpretation for sunlit snow and glaciers as well as for dark shaded rock faces in the same scene. Small and lightweight industrial-grade camera heads are used and operated at a rate of 3.3 frames per second with 3-step HDR, which is sufficient to achieve a longitudinal overlap of approximately 90% at 1,000 m above ground at a velocity of 180 km/h. Direct georeferencing and multitemporal monitoring without the need for ground control points is possible thanks to a high-end GPS/INS system, a stable calibrated inner geometry of the camera heads, and a fully photogrammetric workflow at DLR. In 2014 a survey was performed on the Nepalese side of the Himalayas. The remote sensing system was carried in a wingpod by a Stemme S10 motor glider. Amongst other targets, the Seti Valley, Kali-Gandaki Valley and the Mt. Everest/Khumbu Region were imaged at altitudes up to 9,200 m. Products such as dense point clouds, DSMs and true orthomosaics with a ground pixel resolution of up to 15 cm were produced in regions and outcrops normally inaccessible to aerial imagery. These data are used in the fields of natural hazards, geomorphology and glaciology (see Thompson et al., CR4.3). In the presentation the camera system is introduced and examples and applications from the Nepal campaign are given.
Application of Sensor Fusion to Improve Uav Image Classification
NASA Astrophysics Data System (ADS)
Jabari, S.; Fathollahi, F.; Zhang, Y.
2017-08-01
Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which in turn increases the accuracy of image classification. Here, we tested two sensor fusion configurations using a panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to those acquired by a high-resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs show that the proposed sensor fusion configurations can achieve higher accuracies than the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board UAV missions and performing image fusion can help achieve higher-quality images and accordingly higher-accuracy classification results.
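The abstract does not name the fusion algorithm used; as one common illustration of Pan/MS fusion, here is a Brovey-style ratio-sharpening sketch. Co-registration and upsampling of the MS bands to the Pan grid are assumed to have been done upstream:

```python
import numpy as np

def brovey_pansharpen(pan, ms):
    """Brovey-style fusion: scale each MS band by pan / mean-band intensity.
    pan: (H, W) panchromatic image; ms: (H, W, B) MS bands, co-registered
    and upsampled to the pan grid."""
    intensity = ms.mean(axis=2) + 1e-9          # eps avoids divide-by-zero
    return ms * (pan / intensity)[:, :, None]   # spatial detail from pan

# toy example with a four-band MS camera
pan = np.random.rand(100, 100)
ms = np.random.rand(100, 100, 4)
fused = brovey_pansharpen(pan, ms)
print(fused.shape)                              # (100, 100, 4) sharpened bands
```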
An autonomous sensor module based on a legacy CCTV camera
NASA Astrophysics Data System (ADS)
Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.
2016-10-01
A UK MoD funded programme into autonomous sensor arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. The paper reports upon the development of a SAPIENT-compliant sensor module using a legacy Closed-Circuit Television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with zoom level automatically optimized for human detection at the appropriate range. Open-source algorithms (using OpenCV) are used to automatically detect pedestrians; their real-world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation a "follow" mode is implemented, where the camera keeps the detected person within the camera field-of-view without requiring an end-user to directly control the camera with a joystick.
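The paper names OpenCV but not a specific detector; OpenCV's stock HOG + linear-SVM people detector is one plausible choice for the detection stage, sketched below. The video source name is hypothetical:

```python
import cv2

# Stock HOG pedestrian detector shipped with OpenCV.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("ptz_feed.mp4")        # hypothetical camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        # Each image-plane detection would next be mapped to a world position
        # using the PTZ pose and camera model, then reported to the fusion node.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:                  # Esc to quit
        break
cap.release()
```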
Microprocessor-controlled, wide-range streak camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amy E. Lewis, Craig Hollabaugh
Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OS X) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.
NASA Astrophysics Data System (ADS)
Gaudin, D.; Taddeucci, J.; Scarlato, P.; Del Bello, E.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Kueppers, U.
2015-12-01
Large juvenile bombs and lithic clasts, produced and ejected during explosive volcanic eruptions, follow ballistic trajectories. Of particular interest are: 1) the determination of ejection velocity and launch angle, which give insights into shallow conduit conditions and geometry; 2) particle trajectories, with an eye on trajectory evolution caused by collisions between bombs, as well as the interaction between bombs and ash/gas plumes; and 3) the computation of the final emplacement of bomb-sized clasts, which is important for hazard assessment and risk management. Ground-based imagery from a single camera only allows the reconstruction of bomb trajectories in a plane perpendicular to the line of sight, which may lead to underestimation of bomb velocities and does not allow the directionality of the ejections to be studied. To overcome this limitation, we adapted photogrammetry techniques to reconstruct 3D bomb trajectories from two or three synchronized high-speed video cameras. In particular, we modified existing algorithms to account for the errors that may arise from the very high velocity of the particles and the impossibility of measuring tie points close to the scene. Our method was tested during two field campaigns at Stromboli. In 2014, two high-speed cameras with a 500 Hz frame rate and a ~2 cm resolution were set up ~350 m from the crater, 10° apart, and synchronized. The experiment was repeated with similar parameters in 2015, but using three high-speed cameras in order to significantly reduce uncertainties and allow their estimation. Trajectory analyses for tens of bombs at various times allowed the identification of shifts in the mean directivity and dispersal angle of the jets during the explosions. These time evolutions are also visible on the permanent video-camera monitoring system, demonstrating the applicability of our method to all kinds of explosive volcanoes.
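At its core, multi-camera trajectory reconstruction triangulates each synchronized pixel pair back to a 3D point. A minimal two-view sketch using OpenCV; the projection matrices, baseline, and pixel coordinates below are placeholders, not the campaign's actual calibration:

```python
import numpy as np
import cv2

# Projection matrices would come from the photogrammetric calibration;
# here we fabricate a plausible two-camera geometry (~10 deg apart).
K = np.array([[2000., 0., 640.],
              [0., 2000., 512.],
              [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
R2, _ = cv2.Rodrigues(np.array([0., np.deg2rad(10.), 0.]))
t2 = np.array([[60.], [0.], [0.]])            # ~60 m baseline, illustrative
P2 = K @ np.hstack([R2, t2])

# synchronized pixel positions of one bomb in both views (one frame shown)
pt1 = np.array([[700.], [300.]])
pt2 = np.array([[420.], [298.]])
X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)  # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()                 # 3D position, camera-1 frame
print("triangulated point (m):", X)
```

Differencing successive 3D points at the 500 Hz frame rate then gives the velocity vector, from which ejection speed and launch angle follow.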
NASA Astrophysics Data System (ADS)
Yamamoto, Seiichi; Suzuki, Mayumi; Kato, Katsuhiko; Watabe, Tadashi; Ikeda, Hayato; Kanai, Yasukazu; Ogata, Yoshimune; Hatazawa, Jun
2016-09-01
Although iodine-131 (I-131) is used for radionuclide therapy, high-resolution images are difficult to obtain with conventional gamma cameras because of the high energy of I-131 gamma photons (364 keV). Cerenkov-light imaging is a possible method for beta-emitting radionuclides, and I-131 (606 keV maximum beta energy) is a candidate for obtaining high-resolution images. We developed a high-energy gamma camera system for I-131 and combined it with a Cerenkov-light imaging system to form a gamma-photon/Cerenkov-light hybrid imaging system, allowing comparison of the simultaneously measured images of these two modalities. The high-energy gamma imaging detector used 0.85 mm × 0.85 mm × 10 mm thick GAGG scintillator pixels arranged in a 44×44 matrix with a 0.1 mm thick reflector, optically coupled to a Hamamatsu 2-inch-square position-sensitive photomultiplier tube (PSPMT: H12700 MOD). The gamma imaging detector was encased in a 2 cm thick tungsten shield, and a pinhole collimator was mounted on its top to form a gamma camera system. The Cerenkov-light imaging system was made of a high-sensitivity cooled CCD camera. The Cerenkov-light imaging system was combined with the gamma camera using optical mirrors to image the same area of the subject. With this configuration, we simultaneously imaged the gamma photons and the Cerenkov light from I-131 in the subjects. The spatial resolution and sensitivity of the gamma camera system for I-131 were, respectively, 3 mm FWHM and 10 cps/MBq for the high-sensitivity collimator at 10 cm from the collimator surface. The spatial resolution of the Cerenkov-light imaging system was 0.64 mm FWHM at 10 cm from the system surface. Thyroid phantom and rat images were successfully obtained with the developed gamma-photon/Cerenkov-light hybrid imaging system, allowing direct comparison of these two modalities. Our developed gamma-photon/Cerenkov-light hybrid imaging system will be useful for evaluating the advantages and disadvantages of these two modalities.
A computational approach to real-time image processing for serial time-encoded amplified microscopy
NASA Astrophysics Data System (ADS)
Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi
2016-03-01
High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable us to capture images with a frame rate 1,000 times faster than conventional methods such as CCD (charge-coupled device) cameras. The application of this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system including a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 Gsamples/s and 8-bit resolution to capture the output voltage signal that carries grayscale images from the STEAM camera. The direct data output from the STEAM camera therefore generates 7.0 GB/s continuously. We provided a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices to accelerate the identification of the reconstructed images. We built our prototype system, which included a STEAM camera, an FPGA device and a GPU device, and evaluated its performance in real-time identification of small particles (beads), as virtual biological cells, flowing through a microfluidic channel.
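The quoted ingest rate follows directly from the ADC parameters; a quick back-of-envelope check:

```python
# data-rate check for the STEAM front end, using the figures in the abstract
fs = 7.0e9            # ADC samples per second
bits_per_sample = 8
rate_bytes = fs * bits_per_sample / 8
print(rate_bytes / 1e9, "GB/s")   # 7.0 GB/s continuous -> FPGA pre-processing
```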
High-speed optical 3D sensing and its applications
NASA Astrophysics Data System (ADS)
Watanabe, Yoshihiro
2016-12-01
This paper reviews high-speed optical 3D sensing technologies for obtaining the 3D shape of a target using a camera. The speeds in focus range from 100 to 1000 fps, exceeding normal camera frame rates, which are typically 30 fps. In particular, contactless, active, and real-time systems are introduced. Also, three example applications of this type of sensing technology are introduced: surface reconstruction from time-sequential depth images, high-speed 3D user interaction, and high-speed digital archiving.
Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S
2008-01-01
A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charge-coupled device (CCD) camera which provides real-time 12-bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography (DSA) acquisition, flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface, along with the high frame-rate acquisition and display for this unique high-resolution detector, should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents, and hence enable more accurate diagnoses and image-guided interventions. (Support: NIH Grants R01NS43924, R01EB002873) PMID:18836570
Exploring the Universe with the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
1990-01-01
A general overview is given of the operations, engineering challenges, and components of the Hubble Space Telescope. Deployment, checkout and servicing in space are discussed. The optical telescope assembly, focal plane scientific instruments, wide field/planetary camera, faint object spectrograph, faint object camera, Goddard high resolution spectrograph, high speed photometer, fine guidance sensors, second generation technology, and support systems and services are reviewed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conder, A.; Mummolo, F. J.
The goal of the project was to develop a compact, large active area, high spatial resolution, high dynamic range, charge-coupled device (CCD) camera to replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating X-rays. The camera head and controller needed to be capable of operation within a vacuum environment and small enough to be fielded within the small vacuum target chambers at LLNL.
NASA Astrophysics Data System (ADS)
Sun, Q. M.; Melnikov, A.; Mandelis, A.
2015-06-01
Carrierographic (spectrally gated photoluminescence) imaging of a crystalline silicon wafer using an InGaAs camera and two spread super-bandgap illumination laser beams is introduced in both low-frequency lock-in and high-frequency heterodyne modes. Lock-in carrierographic images of the wafer up to 400 Hz modulation frequency are presented. To overcome the frame rate and exposure time limitations of the camera, a heterodyne method is employed for high-frequency carrierographic imaging, which yields high-resolution near-subsurface information. The feasibility of the method is guaranteed by the typically superlinear behavior of photoluminescence, which allows one to construct a slow enough beat frequency component from nonlinear mixing of two high frequencies. Intensity-scan measurements were carried out with a conventional single-element InGaAs detector photocarrier radiometry system, and the nonlinearity exponent of the wafer was found to be around 1.7. Heterodyne images of the wafer up to 4 kHz have been obtained and qualitatively analyzed. With the help of the complementary lock-in and heterodyne modes, camera-based carrierographic imaging in a wide frequency range has been realized for fundamental research and industrial applications toward in-line nondestructive testing of semiconductor materials and devices.
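The heterodyne trick can be demonstrated numerically: a superlinear response raised to the measured exponent mixes two pump frequencies and produces a difference-frequency component slow enough for a camera to sample. A minimal sketch with illustrative frequencies and modulation depths:

```python
import numpy as np

# PL responds superlinearly to excitation (PL ~ I**gamma, gamma ~ 1.7 as
# measured for the wafer), so two pumps at f1 and f2 generate a slow beat
# at |f1 - f2| that a low-frame-rate camera can follow.
gamma = 1.7
f1, f2 = 4000.0, 3990.0                       # Hz -> 10 Hz beat (illustrative)
fs = 200_000                                  # simulation sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
excitation = (1 + 0.5 * np.sin(2 * np.pi * f1 * t)
              + 1 + 0.5 * np.sin(2 * np.pi * f2 * t))
pl = excitation ** gamma                      # the nonlinear mixing step
spec = np.abs(np.fft.rfft(pl - pl.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 1) & (freqs < 100)            # below a camera-friendly Nyquist
print("strongest slow component:", freqs[band][spec[band].argmax()], "Hz")  # ~10
```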
The iQID Camera: An Ionizing-Radiation Quantum Imaging Detector
Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; ...
2014-06-11
We have developed and tested a novel ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The detector's response to a broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. Individual particles are identified and their spatial position (to sub-pixel accuracy) and energy are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, high sensitivity, and high spatial resolution (tens of microns). Although modest, the iQID's energy resolution is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is single-particle, real-time digital autoradiography. In conclusion, we present the latest results and discuss potential applications.
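Sub-pixel position estimation of each scintillation flash is commonly a weighted center-of-mass computation over the event's region of interest. A minimal sketch; the ROI size, background handling, and test flash are illustrative rather than the iQID's actual pipeline:

```python
import numpy as np

def event_centroid(roi):
    """Center-of-mass estimate of a single flash, to sub-pixel accuracy."""
    roi = roi - roi.min()                     # crude background removal
    total = roi.sum()
    ys, xs = np.indices(roi.shape)
    return (xs * roi).sum() / total, (ys * roi).sum() / total

# toy flash centered near (x=4.3, y=2.7) in a 7x9 ROI
yy, xx = np.indices((7, 9))
flash = np.exp(-((xx - 4.3) ** 2 + (yy - 2.7) ** 2) / 2.0)
print(event_centroid(flash))                  # ~ (4.3, 2.7)
```

The summed ROI intensity (before normalization) serves as the per-event energy estimate used for particle discrimination.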
External Mask Based Depth and Light Field Camera
2013-12-08
[Only fragments of this report's body survive extraction. Recoverable content: the design builds on previous light field cameras; an overview of plenoptic-function sampling is credited to survey work by Wetzstein et al.; Section 5 ("Applications") notes that high-spatial-resolution depth and light fields are a rich source of information about the plenoptic function; cited references include Pelican Imaging (http://www.pelicanimaging.com/) and E. Adelson and J. Wang, "Single lens stereo with a plenoptic camera," IEEE Trans. Pattern Analysis and Machine Intelligence.]
Helmet-mounted uncooled FPA camera for use in firefighting applications
NASA Astrophysics Data System (ADS)
Wu, Cheng; Feng, Shengrong; Li, Kai; Pan, Shunchen; Su, Junhong; Jin, Weiqi
2000-05-01
Starting from the conceptual background and the needs of firefighters with respect to thermal imaging, we discuss how a helmet-mounted camera can be applied in the harsh environment of a conflagration, especially at high temperatures, and how the thermal imager can best be matched to the helmet in terms of weight, size, etc. Finally, we present a practical helmet-mounted IR camera based on an uncooled focal plane array detector for use in firefighting.
HDR video synthesis for vision systems in dynamic scenes
NASA Astrophysics Data System (ADS)
Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried
2016-09-01
High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose a HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness-adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
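A minimal sketch of the weighted radiance-map averaging such methods build on; the hat-shaped weight, linear-response assumption, and toy exposure times are illustrative, and the paper's alignment and stabilization steps are assumed done upstream:

```python
import numpy as np

def fuse_radiance(frames, exposures, eps=1e-6):
    """Weighted-average HDR fusion of aligned frames (linear sensor assumed).
    frames: list of float images in [0, 1]; exposures: matching times in s."""
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for img, t in zip(frames, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)     # hat weight: trust mid-tones,
        num += w * img / t                     # ignore clipped/noisy extremes
        den += w                               # img/t is per-frame radiance
    return num / (den + eps)

# three circularly alternating exposures, already motion-aligned upstream
rng = np.random.default_rng(0)
scene = rng.uniform(0.01, 50.0, (4, 6))       # "true" radiance, toy values
exposures = [0.001, 0.004, 0.016]
frames = [np.clip(scene * t, 0.0, 1.0) for t in exposures]
print(fuse_radiance(frames, exposures))       # approximates `scene`
```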
Applying UV cameras for SO2 detection to distant or optically thick volcanic plumes
Kern, Christoph; Werner, Cynthia; Elias, Tamar; Sutton, A. Jeff; Lübcke, Peter
2013-01-01
Ultraviolet (UV) camera systems represent an exciting new technology for measuring two-dimensional sulfur dioxide (SO2) distributions in volcanic plumes. The high frame rate of the cameras allows the retrieval of SO2 emission rates at rates of 1 Hz or higher, thus allowing the investigation of high-frequency signals and making integrated and comparative studies with other high-data-rate volcano monitoring techniques possible. One drawback of the technique, however, is the limited spectral information recorded by the imaging systems. Here, a framework for simulating the sensitivity of UV cameras to various SO2 distributions is introduced. Both the wavelength-dependent transmittance of the optical imaging system and the radiative transfer in the atmosphere are modeled. The framework is then applied to study the behavior of different optical setups and used to simulate the response of these instruments to volcanic plumes containing varying SO2 and aerosol abundances located at various distances from the sensor. Results show that UV radiative transfer in and around distant and/or optically thick plumes typically leads to a lower sensitivity to SO2 than expected when assuming a standard Beer–Lambert absorption model. Furthermore, camera response is often non-linear in SO2 and dependent on distance to the plume and plume aerosol optical thickness and single scatter albedo. The model results are compared with camera measurements made at Kilauea Volcano (Hawaii) and a method for integrating moderate resolution differential optical absorption spectroscopy data with UV imagery to retrieve improved SO2 column densities is discussed.
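For context, the baseline Beer–Lambert retrieval whose limitations the study quantifies is a two-band apparent-absorbance calculation. A minimal sketch; the band choices, effective cross section, and intensity values are illustrative:

```python
import numpy as np

def apparent_column(i_a, i0_a, i_b, i0_b, sigma_eff=1.3e-18):
    """Two-filter UV-camera retrieval under a simple Beer-Lambert model.
    Band A: SO2-absorbing (~310 nm); band B: reference (~330 nm).
    sigma_eff: assumed effective SO2 cross section, cm^2/molecule."""
    tau = -np.log(i_a / i0_a) + np.log(i_b / i0_b)   # aerosol-corrected AA
    return tau / sigma_eff                            # molecules / cm^2

# toy numbers: 8% dimming in band A, 1% broadband dimming from aerosol
print(f"{apparent_column(0.92, 1.0, 0.99, 1.0):.3e} molec/cm^2")  # ~5.6e16
```

The paper's point is precisely that for distant or optically thick plumes this linear model under-reports the true column, so the modeled radiative transfer correction is needed.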
Compton camera study for high efficiency SPECT and benchmark with Anger system
NASA Astrophysics Data System (ADS)
Fontana, M.; Dauvergne, D.; Létang, J. M.; Ley, J.-L.; Testa, É.
2017-12-01
Single photon emission computed tomography (SPECT) is at present one of the major techniques for non-invasive diagnostics in nuclear medicine. The clinical routine is mostly based on collimated cameras, originally proposed by Hal Anger. Due to the presence of mechanical collimation, detection efficiency and energy acceptance are limited and fixed by the system's geometrical features. In order to overcome these limitations, the application of Compton cameras to SPECT has been investigated for several years. In this study we compare a commercial SPECT-Anger device, the General Electric HealthCare Infinia system with a High Energy General Purpose (HEGP) collimator, and the Compton camera prototype under development by the French collaboration CLaRyS, through Monte Carlo simulations (GATE—GEANT4 Application for Tomographic Emission—version 7.1 and GEANT4 version 9.6, respectively). Given the possible introduction of new radio-emitters at higher energies, intrinsically allowed by the Compton camera detection principle, the two detectors are exposed to point-like sources at increasing primary gamma energies, from actual isotopes already suggested for nuclear medicine applications. The Compton camera prototype is first characterized for SPECT application by studying the main parameters affecting its imaging performance: detector energy resolution and random coincidence rate. The performance of the two detectors is then compared in terms of radial event distribution, detection efficiency and final image, obtained by gamma transmission analysis for the Anger system, and with an iterative List-Mode Maximum Likelihood Expectation Maximization (LM-MLEM) algorithm for the Compton reconstruction. The results show for the Compton camera a detection efficiency larger by more than an order of magnitude than that of the Anger camera, together with enhanced spatial resolution for energies beyond 500 keV. We discuss the advantages of Compton camera application for SPECT compared to present commercial Anger systems, with particular focus on dose delivered to the patient, examination time, and spatial uncertainties.
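The Compton detection principle referenced here constrains each event to a cone whose half-angle follows from Compton kinematics, cos θ = 1 − mec²(1/E′ − 1/E), with E the incident energy and E′ the scattered-photon energy. A minimal sketch; the example energies are illustrative:

```python
import math

ME_C2 = 511.0   # electron rest energy, keV

def compton_cone_angle(e_incident, e_deposited):
    """Cone half-angle (degrees) from the energy deposited in the scatterer
    and the known incident gamma energy."""
    e_scattered = e_incident - e_deposited
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_incident)
    return math.degrees(math.acos(cos_theta))

# e.g., a 1157 keV gamma depositing 300 keV in the scatterer (illustrative)
print(compton_cone_angle(1157.0, 300.0))   # ~32 degrees
```

LM-MLEM reconstruction then intersects many such cones to localize the source, which is why detector energy resolution directly drives the achievable spatial resolution.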
SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darne, C; Robertson, D; Alsanea, F
2016-06-15
Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately in near real-time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm³) filled with a liquid scintillator that, when irradiated with protons, generates scintillation light. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific-complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect optical signal from three orthogonal projections. To reduce system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras for capturing top and right views. Selection of fixed focal length objective lenses for these cameras was based on their ability to provide large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for image corrections arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using 5.6 ms integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. The master-slave camera configuration enables the slaves to initiate image acquisition instantly (within 2 µs) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.
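The quoted 2-minute spooling limit is consistent with a simple memory budget; a quick check using the numbers in the abstract:

```python
# memory budget for spooling one sCMOS stream to RAM
w, h, bytes_px = 1100, 1100, 2          # truncated FoV, 16-bit pixels
fps, seconds = 75, 120                  # 75 fps for 2 minutes
per_frame = w * h * bytes_px            # ~2.42 MB per frame
total = per_frame * fps * seconds
print(total / 2**30, "GiB per camera")  # ~20 GiB; three cameras fit in 128 GB
```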
Atmospheric Science Data Center
2014-05-15
... camera. Such a display causes water bodies and inundated soil to appear in blue and purple hues, and highly vegetated areas to appear ... MISR's oblique cameras, indicating the presence of inundated soil throughout the floodplain. Note that clouds appear in a different spot for ...
Smile, Vandals--You're on Candid Camera.
ERIC Educational Resources Information Center
Lebowitz, Michelle
1997-01-01
Describes the Huntsville, Alabama, school district's use of surveillance cameras and other high-tech equipment to ward off arson, theft, and vandalism. Discusses how these efforts reduced repair and replacement costs and helped the district retain its insurance coverage. (GR)
Rapid orthophoto development system.
DOT National Transportation Integrated Search
2013-06-01
The DMC system procured in the project represented a state-of-the-art, large-format digital aerial camera system at the start of the project. DMC is based on the frame camera model, and to achieve large ground coverage with high spatial resolution, the ...