Simulations of Convection Zone Flows and Measurements from Multiple Viewing Angles
NASA Technical Reports Server (NTRS)
Duvall, Thomas L.; Hanasoge, Shravan
2011-01-01
A deep-focusing time-distance measurement technique has been applied to linear acoustic simulations of a solar interior perturbed by convective flows. The simulations cover the full sphere for r/R greater than 0.2. From these it is straightforward to simulate the observations from different viewing angles and to test how multiple viewing angles enhance detectability. Some initial results will be presented.
Multi-viewer tracking integral imaging system and its viewing zone analysis.
Park, Gilbae; Jung, Jae-Hyun; Hong, Keehoon; Kim, Yunhee; Kim, Young-Hoon; Min, Sung-Wook; Lee, Byoungho
2009-09-28
We propose a multi-viewer tracking integral imaging system for viewing-angle and viewing-zone improvement. In the tracking integral imaging system, the pickup angles of each elemental lens in the lens array are decided by the positions of the viewers, which means that the elemental images can be generated for each viewer to provide a wider viewing angle and a larger viewing zone. Our tracking integral imaging system is implemented with an infrared camera and infrared light-emitting diodes, which can track the viewers' exact positions robustly. For multiple viewers to watch integrated three-dimensional images in the tracking integral imaging system, it is necessary to formulate the relationship between the multiple viewers' positions and the elemental images. We analyzed this relationship and the conditions for multiple viewers, and verified them by implementing a two-viewer tracking integral imaging system.
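The mapping from a tracked viewer position to the elemental images is not spelled out in the abstract; a minimal geometric sketch under simple pinhole-lens assumptions (1-D lens array with pitch p, lens-to-display gap g, viewer at (x_v, z_v); all names and values are hypothetical) is:

```python
import numpy as np

def elemental_image_centers(num_lenses, pitch, gap, viewer_x, viewer_z):
    """Project a tracked viewer position through each lens center onto the
    display plane to find where each elemental image should be centered.
    Lenses are modeled as pinholes, a simplification of the system in the
    abstract; units are millimeters."""
    lens_x = (np.arange(num_lenses) - (num_lenses - 1) / 2.0) * pitch
    # Similar triangles: the ray from the viewer through a lens center hits
    # the display plane at lens_x + (lens_x - viewer_x) * gap / viewer_z.
    return lens_x + (lens_x - viewer_x) * gap / viewer_z

# Example: 50 lenses, 1 mm pitch, 3 mm gap, viewer 400 mm away, 30 mm off-axis.
centers = elemental_image_centers(50, 1.0, 3.0, viewer_x=30.0, viewer_z=400.0)
print(centers[:5])
```

For multiple tracked viewers, the same projection would be evaluated once per viewer and the resulting elemental-image regions checked for overlap, which is the condition the paper formulates.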
Okamura, Jun-ya; Yamaguchi, Reona; Honda, Kazunari; Tanaka, Keiji
2014-01-01
One fails to recognize an unfamiliar object across changes in viewing angle when it must be discriminated from similar distractor objects. View-invariant recognition gradually develops as the viewer repeatedly sees the objects in rotation. It is assumed that different views of each object are associated with one another while their successive appearance is experienced in rotation. However, natural experience of objects also contains ample opportunities to discriminate among objects at each of the multiple viewing angles. Our previous behavioral experiments showed that after experiencing a new set of object stimuli during a task that required only discrimination at each of four viewing angles at 30° intervals, monkeys could recognize the objects across changes in viewing angle up to 60°. By recording activities of neurons from the inferotemporal cortex after various types of preparatory experience, we here found a possible neural substrate for the monkeys' performance. For object sets that the monkeys had experienced during the task that required only discrimination at each of four viewing angles, many inferotemporal neurons showed object selectivity covering multiple views. The degree of view generalization found for these object sets was similar to that found for stimulus sets with which the monkeys had been trained to conduct view-invariant recognition. These results suggest that the experience of discriminating new objects in each of several viewing angles develops the partially view-generalized object selectivity distributed over many neurons in the inferotemporal cortex, which in turn bases the monkeys' emergent capability to discriminate the objects across changes in viewing angle. PMID:25378169
Modeling contact angle hysteresis of a liquid droplet sitting on a cosine wave-like pattern surface.
Promraksa, Arwut; Chen, Li-Jen
2012-10-15
A liquid droplet sitting on a hydrophobic surface with a cosine wave-like square-array pattern in the Wenzel state is simulated using the Surface Evolver to determine the contact angle. For a fixed drop volume, multiple metastable states are obtained at two different surface roughnesses. An unusual, non-circular shape of the three-phase contact line of a liquid droplet sitting on the model surface is observed, due to corrugation and distortion of the contact line by the roughness structure. The contact angle varies along the contact line for each metastable state. The maximum and minimum contact angles among the multiple metastable states at a fixed viewing angle correspond to the advancing and the receding contact angles, respectively. It is interesting to observe that the advancing/receding contact angles (and the contact angle hysteresis) are a function of viewing angle. In addition, the receding (or advancing) contact angles at different viewing angles are determined at different metastable states. The contact angle of minimum energy among the multiple metastable states is defined as the most stable (equilibrium) contact angle. The Wenzel model is not able to describe the contact angle along the three-phase contact line. The contact angle hysteresis at different drop volumes is determined. The number of metastable states increases with increasing drop volume. The drop volume effect on the contact angles is also discussed. Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.
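For reference, the Wenzel relation that the abstract says cannot capture the variation of contact angle along the contact line links the apparent angle on a rough surface to the intrinsic Young angle through the roughness ratio r (actual wetted area divided by projected area); the worked numbers below are an illustrative check of my own, not values from the paper:

```latex
\cos\theta_{W} = r\,\cos\theta_{Y}, \qquad r \ge 1 .
```

For an intrinsic angle θ_Y = 110° and r = 1.3, cos θ_W = 1.3 cos 110° ≈ -0.44, so θ_W ≈ 116°: roughness amplifies the hydrophobicity but the model still predicts a single angle for the entire droplet, which is exactly what the simulations above show to be insufficient.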
Okamura, Jun-Ya; Yamaguchi, Reona; Honda, Kazunari; Wang, Gang; Tanaka, Keiji
2014-11-05
One fails to recognize an unfamiliar object across changes in viewing angle when it must be discriminated from similar distractor objects. View-invariant recognition gradually develops as the viewer repeatedly sees the objects in rotation. It is assumed that different views of each object are associated with one another while their successive appearance is experienced in rotation. However, natural experience of objects also contains ample opportunities to discriminate among objects at each of the multiple viewing angles. Our previous behavioral experiments showed that after experiencing a new set of object stimuli during a task that required only discrimination at each of four viewing angles at 30° intervals, monkeys could recognize the objects across changes in viewing angle up to 60°. By recording activities of neurons from the inferotemporal cortex after various types of preparatory experience, we here found a possible neural substrate for the monkeys' performance. For object sets that the monkeys had experienced during the task that required only discrimination at each of four viewing angles, many inferotemporal neurons showed object selectivity covering multiple views. The degree of view generalization found for these object sets was similar to that found for stimulus sets with which the monkeys had been trained to conduct view-invariant recognition. These results suggest that the experience of discriminating new objects in each of several viewing angles develops the partially view-generalized object selectivity distributed over many neurons in the inferotemporal cortex, which in turn bases the monkeys' emergent capability to discriminate the objects across changes in viewing angle. Copyright © 2014 the authors 0270-6474/14/3415047-13$15.00/0.
Array Of Sensors Measures Broadband Radiation
NASA Technical Reports Server (NTRS)
Hoffman, James W.; Grush, Ronald G.
1994-01-01
Multiple broadband radiation sensors aimed at various portions of total field of view. All sensors mounted in supporting frame, serving as common heat sink and temperature reference. Each sensor includes heater winding and differential-temperature-sensing bridge circuit. Power in heater winding adjusted repeatedly in effort to balance bridge circuit. Intended to be used aboard satellite in orbit around Earth to measure total radiation emitted, at various viewing angles, by mosaic of "footprint" areas (each defined by its viewing angle) on surface of Earth. Modified versions of array useful for angle-resolved measurements of broadband radiation in laboratory and field settings on Earth.
On Local Ionization Equilibrium and Disk Winds in QSOs
NASA Astrophysics Data System (ADS)
Pereyra, Nicolas A.
2014-11-01
We present theoretical C IV λλ1548,1550 absorption line profiles for QSOs calculated assuming the accretion disk wind (ADW) scenario. The results suggest that the multiple absorption troughs seen in many QSOs may be due to discontinuities in the ion balance of the wind (caused by X-rays), rather than discontinuities in the density/velocity structure. The profiles are calculated from a 2.5-dimensional time-dependent hydrodynamic simulation of a line-driven disk wind for a typical QSO black hole mass, a typical QSO luminosity, and a standard Shakura-Sunyaev disk. We include the effects of ionizing X-rays originating from within the inner disk radius by assuming that the wind is shielded from the X-rays from a certain viewing angle up to 90° ("edge on"). In the shielded region, we assume constant ionization equilibrium, and thus constant line-force parameters. In the non-shielded region, we assume that both the line force and the C IV populations are nonexistent. The model can account for P-Cygni absorption troughs (produced at edge-on viewing angles), multiple absorption troughs (produced at viewing angles close to the angle that separates the shielded and non-shielded regions), and detached absorption troughs (produced at an angle in between the first two absorption line types); that is, the model can account for the general types of broad absorption lines seen in QSOs as a viewing angle effect. The steady nature of ADWs, in turn, may account for the steady nature of the absorption structure observed in multiple-trough broad absorption line QSOs. The model parameters are M_bh = 10^9 M_⊙ and L_disk = 10^47 erg s^-1.
Shuttle imaging radar views the Earth from Challenger: The SIR-B experiment
NASA Technical Reports Server (NTRS)
Ford, J. P.; Cimino, J. B.; Holt, B.; Ruzek, M. R.
1986-01-01
In October 1984, SIR-B obtained digital image data of about 6.5 million km2 of the Earth's surface. The coverage is mostly of selected experimental test sites located between latitudes 60 deg north and 60 deg south. Programmed adjustments made to the look angle of the steerable radar antenna and to the flight attitude of the shuttle during the mission permitted collection of multiple-incidence-angle coverage or extended mapping coverage as required for the experiments. The SIR-B images included here are representative of the coverage obtained for scientific studies in geology, cartography, hydrology, vegetation cover, and oceanography. The relations between radar backscatter and incidence angle for discriminating various types of surfaces, and the use of multiple-incidence-angle SIR-B images for stereo measurement and viewing, are illustrated with examples. Interpretation of the images is facilitated by corresponding images or photographs obtained by different sensors or by sketch maps or diagrams.
Apparatus and method for high dose rate brachytherapy radiation treatment
Macey, Daniel J.; Majewski, Stanislaw; Weisenberger, Andrew G.; Smith, Mark Frederick; Kross, Brian James
2005-01-25
A method and apparatus for the in vivo location and tracking of a radioactive seed source during and after brachytherapy treatment. The method comprises obtaining multiple views of the seed source in a living organism using: 1) a single PSPMT detector that is exposed through a multiplicity of pinholes thereby obtaining a plurality of images from a single angle; 2) a single PSPMT detector that may obtain an image through a single pinhole or a plurality of pinholes from a plurality of angles through movement of the detector; or 3) a plurality of PSPMT detectors that obtain a plurality of views from different angles simultaneously or virtually simultaneously. The plurality of images obtained from these various techniques, through angular displacement of the various acquired images, provide the information required to generate the three dimensional images needed to define the location of the radioactive seed source within the body of the living organism.
Scalable screen-size enlargement by multi-channel viewing-zone scanning holography.
Takaki, Yasuhiro; Nakaoka, Mitsuki
2016-08-08
Viewing-zone scanning holographic displays can enlarge both the screen size and the viewing zone. However, limitations exist in the screen size enlargement process even if the viewing zone is effectively enlarged. This study proposes a multi-channel viewing-zone scanning holographic display comprising multiple projection systems and a planar scanner to enable the scalable enlargement of the screen size. Each projection system produces an enlarged image of the screen of a MEMS spatial light modulator. The multiple enlarged images produced by the multiple projection systems are seamlessly tiled on the planar scanner. This screen size enlargement process reduces the viewing zones of the projection systems, which are horizontally scanned by the planar scanner comprising a rotating off-axis lens and a vertical diffuser to enlarge the viewing zone. A screen size of 7.4 in. and a viewing-zone angle of 43.0° are demonstrated.
Forward multiple scattering corrections as function of detector field of view
NASA Astrophysics Data System (ADS)
Zardecki, A.; Deepak, A.
1983-06-01
The theoretical formulations are given for an approximate method based on the solution of the radiative transfer equation in the small-angle approximation. The method is approximate in the sense that an additional approximation is made beyond the small-angle approximation. Numerical results were obtained for multiple scattering effects as functions of the detector field of view, as well as of the size of the detector's aperture, for three different values of the optical depth tau (1.0, 4.0, and 10.0). Three cases of aperture size were considered: equal to, smaller than, or larger than the laser beam diameter. The contrast between the on-axis intensity and the received power for these three aperture cases is clearly evident.
NASA Astrophysics Data System (ADS)
Makino, T.; Okamoto, H.; Sato, K.; Tanaka, K.; Nishizawa, T.; Sugimoto, N.; Matsui, I.; Jin, Y.; Uchiyama, A.; Kudo, R.
2014-12-01
We have developed a new type of ground-based lidar, the Multi-Field-of-view Multiple-Scattering Polarization Lidar (MFMSPL), to analyze the multiple-scattering contribution from low-level clouds. One issue with ground-based lidar is that strong attenuation of the lidar signal limits observations to an optical thickness of about 3, so that only the cloud-bottom part can be observed. To overcome this problem, the MFMSPL was designed to observe a degree of multiple-scattering contribution similar to that expected from the space-borne lidar CALIOP on the CALIPSO satellite. The system consists of eight detectors: four telescopes for parallel channels and four for perpendicular channels. The four pairs of telescopes are mounted at four different off-beam angles, ranging from -5 to 35 mrad, where the angle is defined between the direction of the laser beam and the direction of the telescope. Consequently, a footprint similar in size to that of CALIOP (about 100 m) can be achieved in MFMSPL observations when clouds are located at an altitude of about 1 km. The use of multiple fields of view enables measurement of the depolarization ratio from optically thick clouds. The outer receivers, mounted at larger angles, generally detect backscattered signals from clouds at higher altitudes owing to enhanced multiple scattering, compared with the inner receiver, which detects signals only from cloud-bottom portions. Therefore, information on cloud microphysics from optically thicker regions is expected from MFMSPL observations compared with a conventional lidar with a small FOV. The MFMSPL has been operated continuously in Tsukuba, Japan since June 2014. Initial analyses have indicated the performance expected from theoretical estimates by backward Monte Carlo simulations. The depolarization ratio from deeper parts of the clouds detected by the receiver with a large off-beam angle showed much larger values than those from the receiver with a small angle. The calibration procedures and a summary of the initial observations will be presented. The observed data obtained by the MFMSPL will be used to develop and evaluate retrieval algorithms for cloud microphysics applied to CALIOP data.
NASA Technical Reports Server (NTRS)
Buckey, J. C.; Beattie, J. M.; Gaffney, F. A.; Nixon, J. V.; Blomqvist, C. G.
1984-01-01
Accurate, reproducible, and non-invasive means for ventricular volume determination are needed for evaluating cardiovascular function in zero gravity. Current echocardiographic methods, particularly for the right ventricle, suffer from a large standard error. A new mathematical approach, recently described by Watanabe et al., was tested on 1 normal formalin-fixed human hearts suspended in a mineral oil bath. Volumes are estimated from multiple two-dimensional echocardiographic views recorded from a single point at sequential angles. The product of sectional cavity area and center of mass for each view, summed over the range of angles (using a trapezoidal rule), gives the volume. Multiple (8-14) short-axis right-ventricle and left-ventricle views at 5.0 deg intervals were videotaped. The images were digitized by two independent observers (leading-edge to leading-edge technique) and analyzed using a graphics tablet and microcomputer. Actual volumes were determined by filling the chambers with water. These data were compared to the mean of the two echo measurements.
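A minimal numerical sketch of the volume estimate as described (sectional cavity area times center-of-mass distance, summed over view angles with a trapezoidal rule); the function name, the units, and the reading of "center of mass" as its distance from the rotation axis are my assumptions, not details taken from the paper:

```python
import numpy as np

def volume_from_rotational_views(areas_cm2, centroid_dist_cm, angles_deg):
    """Estimate chamber volume from 2-D echo views recorded from a single
    point at sequential rotation angles: integrate (cavity area x centroid
    distance from the rotation axis) over angle with a trapezoidal rule,
    the rotational summation described in the abstract."""
    theta = np.deg2rad(np.asarray(angles_deg, dtype=float))
    f = np.asarray(areas_cm2, dtype=float) * np.asarray(centroid_dist_cm, dtype=float)
    return float(np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(theta)))  # cm^3

# Synthetic example: 10 views at 5-degree increments, constant area and centroid.
angles = np.arange(0, 50, 5)               # degrees
areas = np.full(angles.size, 12.0)         # cm^2
centroids = np.full(angles.size, 3.0)      # cm, distance of center of mass from axis
print(volume_from_rotational_views(areas, centroids, angles))
```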
Expansion of the visual angle of a car rear-view image via an image mosaic algorithm
NASA Astrophysics Data System (ADS)
Wu, Zhuangwen; Zhu, Liangrong; Sun, Xincheng
2015-05-01
The rear-view image system is one of the active safety devices in cars and is widely applied in all types of vehicles and in traffic safety. However, previous studies, both domestic and foreign, were based on a single image-capture device used while reversing, so a blind area still remained for drivers. Even when multiple cameras were used to expand the visual angle of the car's rear-view image, the blind area remained because the different source images were not mosaicked together. To acquire an expanded visual angle of a car rear-view image, two charge-coupled device cameras with optical axes angled at 30 deg were mounted below the left and right fenders of a car, and rear-view heterologous images were captured in three light conditions: sunny outdoors, cloudy outdoors, and an underground garage. These rear-view heterologous images were rapidly registered using the scale-invariant feature transform (SIFT) algorithm. Combined with the random sample consensus (RANSAC) algorithm, the two heterologous images were finally mosaicked using a linear weighted gradated in-and-out fusion algorithm, and a seamless, visual-angle-expanded rear-view image was acquired. The four-index test results showed that the algorithms can mosaic rear-view images well in the underground garage condition, where the average rate of correct matching was the lowest among the three conditions. The rear-view image mosaic algorithm presented had the best information preservation, the shortest computation time, and the most complete preservation of image detail features compared to the mean value method (MVM) and the segmental fusion method (SFM); it also performed better in real time, provided more comprehensive image details than MVM and SFM, and had the most complete image preservation from the source images among the three algorithms. The method introduced in this paper provides a basis for research on expanding the visual angle of a car rear-view image in all-weather conditions.
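A minimal sketch of the registration-and-blend pipeline named above (SIFT matching, RANSAC homography, then a linear weighted fade across the overlap), written with OpenCV; the parameter values, canvas size, and the simple horizontal blend ramp are illustrative assumptions, not the paper's implementation:

```python
import cv2
import numpy as np

def mosaic_pair(img_left, img_right, ratio=0.75, ransac_thresh=5.0):
    """Mosaic two overlapping rear-view images: SIFT features, ratio-test
    matching, RANSAC homography, then a linear weighted blend in the overlap."""
    g1 = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY) if img_left.ndim == 3 else img_left
    g2 = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY) if img_right.ndim == 3 else img_right
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(g1, None)
    k2, d2 = sift.detectAndCompute(g2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d2, d1, k=2) if m.distance < ratio * n.distance]
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)

    h, w = img_left.shape[:2]
    canvas_w = w * 2                                   # assumed canvas width
    warped = cv2.warpPerspective(img_right, H, (canvas_w, h)).astype(np.float32)
    base = np.zeros_like(warped)
    base[:, :w] = img_left

    # Linear "gradated in-and-out" blend: weights ramp across the columns where
    # both images contribute (a simplification of the paper's fusion rule).
    overlap = (base > 0) & (warped > 0)
    alpha = np.tile(np.linspace(1.0, 0.0, canvas_w), (h, 1))
    if base.ndim == 3:
        alpha = alpha[..., None]
    out = np.where(overlap, alpha * base + (1 - alpha) * warped, base + warped)
    return np.clip(out, 0, 255).astype(np.uint8)
```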
Interactive Visualization of DGA Data Based on Multiple Views
NASA Astrophysics Data System (ADS)
Geng, Yujie; Lin, Ying; Ma, Yan; Guo, Zhihong; Gu, Chao; Wang, Mingtao
2017-01-01
The commissioning and operation of dissolved gas analysis (DGA) online monitoring make up for the weaknesses of the traditional DGA method. However, the volume and high dimensionality of DGA data bring a huge challenge for monitoring and analysis. In this paper, we present a novel interactive visualization model of DGA data based on multiple views. The model imitates multi-angle analysis by combining parallel coordinates, a scatter-plot matrix, and a data table. By offering brushing, collaborative filtering, and focus+context techniques, the model provides a convenient and flexible interactive way to analyze and understand DGA data.
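A minimal sketch of the coordinated multiple-view idea (parallel coordinates plus a scatter-plot matrix over the same dissolved-gas table), using pandas and matplotlib; the column names are hypothetical gas concentrations and the alert rule is invented for illustration, not the paper's data schema:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates, scatter_matrix

# Hypothetical DGA records: gas concentrations in ppm plus a status label.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "H2":   rng.gamma(2.0, 30.0, 200),
    "CH4":  rng.gamma(2.0, 20.0, 200),
    "C2H2": rng.gamma(1.5, 5.0, 200),
    "C2H4": rng.gamma(2.0, 15.0, 200),
    "CO":   rng.gamma(2.0, 100.0, 200),
})
df["status"] = np.where(df["C2H2"] > 10, "alert", "normal")  # toy labeling rule

# View 1: parallel coordinates over all gases, colored by status.
fig, ax = plt.subplots(figsize=(8, 4))
parallel_coordinates(df, class_column="status", ax=ax, alpha=0.4)

# View 2: scatter-plot matrix of the same table.
scatter_matrix(df.drop(columns="status"), figsize=(8, 8), diagonal="hist")
plt.show()
```

Brushing and filtering would link selections made in one of these views to the others; that interaction layer is what the model adds on top of the static plots sketched here.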
NASA Astrophysics Data System (ADS)
Miller, I.; Forster, B. C.; Laffan, S. W.
2012-07-01
Spectral reflectance characteristics of substrates in a coral reef environment are often measured in the field by viewing a substrate at nadir. However, viewing a substrate from multiple angles would likely result in different spectral characteristics for most coral reef substrates and provide valuable information on structural properties. To understand the relationship between the morphology of a substrate and its spectral response, it is necessary to correct the observed above-water radiance for the effects of atmospheric and water attenuation at a number of view and azimuth angles. In this way the actual surface reflectance can be determined. This research examines the air-water surface interaction for two hypothetical atmospheric conditions (clear Rayleigh scattering and totally cloud-covered) and the global irradiance reaching the benthic surface. It accounts for both water scattering and absorption, with simplifications for shallow-water conditions, as well as the additive effect of background reflectance being reflected at the water-air surface at angles greater than the critical refraction angle (~48°). A model was developed to correct the measured above-water radiance along the refracted view angle for its decrease due to path attenuation and the "n-squared law of radiance", and for the additive surface reflectance. This allows bidirectional benthic surface reflectance and nadir-normalised reflectance to be determined. These theoretical models were adapted to incorporate above-water measurements relative to a standard, diffuse, white reference panel. The derived spectral signatures of a number of coral and non-coral benthic surfaces compared well with other published results, and the signatures and nadir-normalised reflectance of the corals and other benthic surface classes indicate good class separation.
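Two quantities invoked above can be made concrete; the arithmetic below is mine and assumes a typical water refractive index of about 1.34 rather than a value quoted by the authors:

```latex
\theta_c = \arcsin\!\left(\frac{n_{\text{air}}}{n_{\text{water}}}\right)
         = \arcsin\!\left(\frac{1}{1.34}\right) \approx 48^{\circ},
\qquad
L_{\text{above}} \approx \frac{t}{n_{\text{water}}^{2}}\, L_{\text{below}},
```

where t is the Fresnel transmittance of the surface. Upwelling radiance that reaches the surface from below at angles beyond θ_c is totally internally reflected back onto the benthos, which is the additive background-reflectance term the model accounts for, and the n² factor is the "n-squared law of radiance" used to relate in-water to above-water radiance.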
NASA Astrophysics Data System (ADS)
Beyer, Ross A.; Archinal, B.; Li, R.; Mattson, S.; Moratto, Z.; McEwen, A.; Oberst, J.; Robinson, M.
2009-09-01
The Lunar Reconnaissance Orbiter Camera (LROC) will obtain two types of multiple overlapping coverage to derive terrain models of the lunar surface. LROC has two Narrow Angle Cameras (NACs), working jointly to provide a wider (in the cross-track direction) field of view, as well as a Wide Angle Camera (WAC). LRO's orbit precesses, and the same target can be viewed at different solar azimuth and incidence angles, providing the opportunity to acquire 'photometric stereo' in addition to traditional 'geometric stereo' data. Geometric stereo refers to images acquired by LROC with two observations at different times. They must have different emission angles to provide a stereo convergence angle such that the resultant images have enough parallax for a reasonable stereo solution. The lighting at the target must not be radically different: if shadows move substantially between observations, it is very difficult to correlate the images. The majority of NAC geometric stereo will be acquired with one nadir and one off-pointed image (20-degree roll). Alternatively, pairs can be obtained with two spacecraft rolls (one to the left and one to the right), providing a stereo convergence angle of up to 40 degrees. Overlapping WAC images from adjacent orbits can be used to generate topography with near-global coverage at kilometer-scale effective spatial resolution. Photometric stereo refers to multiple-look observations of the same target under different lighting conditions. LROC will acquire at least three (ideally five) observations of a target. These observations should have near-identical emission angles, but varying solar azimuth and incidence angles. These types of images can be processed via various methods to derive single-pixel-resolution topography and surface albedo. The LROC team will produce some topographic models, but stereo data collection is focused on acquiring the highest quality data so that such models can be generated later.
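For a rough sense of the geometric-stereo configurations described, the base-to-height ratio of a convergent pair follows from the two emission angles (a standard photogrammetric relation; the figures are my own arithmetic, not LROC team numbers):

```latex
\frac{B}{H} \approx \tan e_1 + \tan e_2:
\qquad \text{nadir} + 20^{\circ}\ \text{roll} \Rightarrow \frac{B}{H} \approx \tan 20^{\circ} \approx 0.36,
\qquad \pm 20^{\circ}\ \text{rolls} \Rightarrow \frac{B}{H} \approx 2\tan 20^{\circ} \approx 0.73,
```

so the two-roll geometry roughly doubles the parallax available for a given topographic relief, at the cost of scheduling two off-nadir observations of the same target.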
Complete 360° circumferential SSOCT gonioscopy of the iridocorneal angle
NASA Astrophysics Data System (ADS)
McNabb, Ryan P.; Kuo, Anthony N.; Izatt, Joseph A.
2014-02-01
The ocular iridocorneal angle is generally an optically inaccessible area when viewed directly through the cornea due to the high angle of incidence required and the large index-of-refraction difference between air and cornea (n_air = 1.000 and n_cornea = 1.376), resulting in total internal reflection. Gonioscopy allows viewing of the angle by removing the air-cornea interface through the use of a special contact lens on the eye. Gonioscopy is used clinically to visualize the angle directly, but only en face. Optical coherence tomography (OCT) has been used to image the angle and deeper structures via an external approach. Typically, this imaging technique is performed by utilizing a conventional anterior segment OCT scanning system. However, instead of imaging the apex of the cornea, either the scanner or the subject is tilted such that the corneoscleral limbus is orthogonal to the optical axis of the scanner, requiring multiple volumes to obtain complete circumferential coverage of the ocular angle. We developed a novel gonioscopic OCT (GOCT) system that images the entire ocular angle within a single volume via an "internal" approach through the use of a custom radially symmetric gonioscopic contact lens. We present, to our knowledge, the first complete 360° circumferential volumes of the iridocorneal angle from a direct, internal approach.
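The total-internal-reflection condition follows directly from the two indices quoted; the arithmetic below is mine:

```latex
\theta_c = \arcsin\!\left(\frac{n_{\text{air}}}{n_{\text{cornea}}}\right)
         = \arcsin\!\left(\frac{1.000}{1.376}\right) \approx 46.6^{\circ},
```

so rays from the iridocorneal angle, which meet the inner corneal surface at incidence well beyond this angle, never exit into air. A gonioscopic contact lens with an index close to that of the cornea removes the air-cornea interface and lets these rays reach the imaging system.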
Limited Angle Dual Modality Breast Imaging
NASA Astrophysics Data System (ADS)
More, Mitali J.; Li, Heng; Goodale, Patricia J.; Zheng, Yibin; Majewski, Stan; Popov, Vladimir; Welch, Benjamin; Williams, Mark B.
2007-06-01
We are developing a dual modality breast scanner that can obtain x-ray transmission and gamma ray emission images in succession at multiple viewing angles with the breast held under mild compression. These views are reconstructed and fused to obtain three-dimensional images that combine structural and functional information. Here, we describe the dual modality system and present results of phantom experiments designed to test the system's ability to obtain fused volumetric dual modality data sets from a limited number of projections, acquired over a limited (less than 180 degrees) angular range. We also present initial results from phantom experiments conducted to optimize the acquisition geometry for gamma imaging. The optimization parameters include the total number of views and the angular range over which these views should be spread, while keeping the total number of detected counts fixed. We have found that in general, for a fixed number of views centered around the direction perpendicular to the direction of compression, in-plane contrast and SNR are improved as the angular range of the views is decreased. The improvement in contrast and SNR with decreasing angular range is much greater for deeper lesions and for a smaller number of views. However, the z-resolution of the lesion is significantly reduced with decreasing angular range. Finally, we present results from limited angle tomography scans using a system with dual, opposing heads.
Astronomy in Denver: Polarization of bow shock nebulae around massive stars
NASA Astrophysics Data System (ADS)
Shrestha, Manisha; Hoffman, Jennifer L.; Ignace, Richard; Neilson, Hilding
2018-06-01
Stellar wind bow shocks are structures created when stellar winds with supersonic relative velocities interact with the local interstellar medium (ISM). They can be studied to understand the properties of stars as well as of the ISM. Since bow shocks are asymmetric, light becomes polarized by scattering in the regions of enhanced density they create. We use a Monte Carlo radiative transfer code called SLIP to simulate the polarization signatures produced by both resolved and unresolved bow shocks with analytically derived shapes and density structures. When electron scattering is the polarizing mechanism, we find that optical depth plays an important role in the polarization signatures. While results for low optical depths reproduce theoretical predictions, higher optical depths produce higher polarization and position-angle rotations at specific viewing angles. This is due to the geometrical properties of the bow shock along with multiple scattering effects. For dust scattering, we find that the polarization signature is strongly affected by wavelength, dust size, dust composition, and viewing angle. Depending on the viewing angle, the polarization magnitude may increase or decrease as a function of wavelength. We will present results from these simulations and preliminary comparisons with observational data.
Study of the retardance of a birefringent waveplate at tilt incidence by Mueller matrix ellipsometer
NASA Astrophysics Data System (ADS)
Gu, Honggang; Chen, Xiuguo; Zhang, Chuanwei; Jiang, Hao; Liu, Shiyuan
2018-01-01
Birefringent waveplates are indispensable optical elements for polarization state modification in various optical systems. The retardance of a birefringent waveplate will change significantly when the incident angle of the light varies. Therefore, it is of great importance to study such field-of-view errors on the polarization properties, especially the retardance of a birefringent waveplate, for the performance improvement of the system. In this paper, we propose a generalized retardance formula at arbitrary incidence and azimuth for a general plane-parallel composite waveplate consisting of multiple aligned single waveplates. An efficient method and corresponding experimental set-up have been developed to characterize the retardance versus the field-of-view angle based on a constructed spectroscopic Mueller matrix ellipsometer. Both simulations and experiments on an MgF2 biplate over an incident angle of 0°-8° and an azimuthal angle of 0°-360° are presented as an example, and the dominant experimental errors are discussed and corrected. The experimental results strongly agree with the simulations with a maximum difference of 0.15° over the entire field of view, which indicates the validity and great potential of the presented method for birefringent waveplate characterization at tilt incidence.
Development of Human Posture Simulation Method for Assessing Posture Angles and Spinal Loads
Lu, Ming-Lun; Waters, Thomas; Werren, Dwight
2015-01-01
Video-based posture analysis employing a biomechanical model is gaining popularity for ergonomic assessments. A human posture simulation method for estimating multiple body postural angles and spinal loads from a video record was developed to expedite ergonomic assessments. The method was evaluated with a repeated-measures study design with three trunk flexion levels, two lift asymmetry levels, three viewing angles, and three trial repetitions as experimental factors. The study comprised two phases evaluating the accuracy of simulating one's own and other people's lifting postures via a computer-generated humanoid as a proxy. The mean accuracies of simulating self and humanoid postures were 12° and 15°, respectively. The repeatability of the method for the same lifting condition was excellent (~2°). The smallest simulation error was associated with the side viewing angle. The estimated back compressive force and moment, calculated by a three-dimensional biomechanical model, exhibited underestimation within a 5% range. The posture simulation method enables researchers to simultaneously quantify body posture angles and spinal loading variables with accuracy and precision comparable to on-screen posture-matching methods. PMID:26361435
Coronary artery stenosis detection with holographic display of 3D angiograms
NASA Astrophysics Data System (ADS)
Ritman, Erik L.; Schwanke, Todd D.; Simari, Robert D.; Schwartz, Robert S.; Thomas, Paul J.
1995-05-01
The objective of this study was to establish the accuracy of a holographic display approach for the detection of stenoses in coronary arteries. The rationale for using a holographic display is that multiple angles of view of the coronary arteriogram are provided by a single 'x-ray'-like film, backlit by a special light box. This should be more convenient in that the viewer does not have to page back and forth through a cine angiogram to obtain the multiple angles of view. The method used to test this technique involved viewing 100 3D coronary angiograms. These images were generated from the 3D angiographic images of nine normal coronary arterial trees obtained with the Dynamic Spatial Reconstructor (DSR) fast CT scanner. Using our image processing programs, the image of the coronary artery lumen was locally 'narrowed' by an amount and length and at a location determined by a random look-up table. Two independent, blinded, experienced angiographers viewed the holographic displays of these angiograms and recorded their confidence about the presence, location, and severity of the stenoses. This procedure evaluates the sensitivity and specificity of the detection of coronary artery stenoses as a function of severity, size, and location along the arteries.
NASA Technical Reports Server (NTRS)
Valdez, P. F.; Donohoe, G. W.
1997-01-01
Statistical classification of remotely sensed images attempts to discriminate between surface cover types on the basis of the spectral response recorded by a sensor. It is well known that surfaces reflect incident radiation as a function of wavelength, producing a spectral signature specific to the material under investigation. Multispectral and hyperspectral sensors sample the spectral response over tens and even hundreds of wavelength bands to capture the variation of spectral response with wavelength. Classification algorithms then exploit these differences in spectral response to distinguish between materials of interest. Sensors of this type, however, collect detailed spectral information from one direction (usually nadir) and consequently do not consider the directional nature of reflectance potentially detectable at different sensor view angles. Improvements in sensor technology have resulted in remote sensing platforms capable of detecting reflected energy across wavelengths (spectral signatures) and from multiple view angles (angular signatures) in the fore and aft directions. Sensors of this type include the moderate resolution imaging spectroradiometer (MODIS), the multiangle imaging spectroradiometer (MISR), and the airborne solid-state array spectroradiometer (ASAS). A goal of this paper, then, is to explore the utility of Bidirectional Reflectance Distribution Function (BRDF) models in the selection of optimal view angles for the classification of remotely sensed images by employing a strategy of searching for the maximum difference between surface BRDFs. After a brief discussion of directional reflectance in Section 2, attention is directed to the Beard-Maxwell BRDF model and its use in predicting the bidirectional reflectance of a surface. The selection of optimal viewing angles is addressed in Section 3, followed by conclusions and future work in Section 4.
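A minimal sketch of the selection strategy described (search view angles for the maximum difference between two surface BRDFs); the simple Lambertian-plus-specular-lobe BRDF below stands in for the Beard-Maxwell model, which is considerably more involved, and every parameter is illustrative rather than drawn from the paper:

```python
import numpy as np

def toy_brdf(theta_v, theta_s, rho_d, rho_s, lobe_width):
    """Placeholder in-plane BRDF: Lambertian term plus a Gaussian specular lobe
    centered on the mirror direction (a stand-in for Beard-Maxwell). Radians."""
    specular = rho_s * np.exp(-((theta_v + theta_s) ** 2) / (2 * lobe_width ** 2))
    return rho_d / np.pi + specular

def best_view_angle(theta_s, surf_a, surf_b, n=181):
    """Return the view zenith angle (deg) that maximizes |BRDF_A - BRDF_B|."""
    theta_v = np.deg2rad(np.linspace(-60, 60, n))
    diff = np.abs(toy_brdf(theta_v, theta_s, *surf_a) - toy_brdf(theta_v, theta_s, *surf_b))
    return np.rad2deg(theta_v[np.argmax(diff)]), float(diff.max())

# Two hypothetical cover types: a dull vegetated surface and a glossier soil.
vegetation = (0.30, 0.05, np.deg2rad(15))   # (rho_d, rho_s, lobe width)
soil       = (0.25, 0.30, np.deg2rad(8))
angle, separation = best_view_angle(np.deg2rad(30), vegetation, soil)
print(f"most discriminative view angle: {angle:.1f} deg (max BRDF difference {separation:.3f})")
```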
Pixel-level tunable liquid crystal lenses for auto-stereoscopic display
NASA Astrophysics Data System (ADS)
Li, Kun; Robertson, Brian; Pivnenko, Mike; Chu, Daping; Zhou, Jiong; Yao, Jun
2014-02-01
Mobile video and gaming are now widely used, and delivery of a glass-free 3D experience is of both research and development interest. The key drawbacks of a conventional 3D display based on a static lenticular lenslet array and parallax barriers are low resolution, limited viewing angle, and reduced brightness, mainly because multiple pixels are needed for each object point. This study describes the concept and performance of pixel-level cylindrical liquid crystal (LC) lenses, which are designed to steer light to the left and right eye sequentially to form stereo parallax. The width of the LC lenses can be as small as 20-30 μm, so that the associated auto-stereoscopic display has the same resolution as the 2D display panel in use. Such a thin sheet of tunable LC lens array can be applied directly to existing mobile displays and can deliver a 3D viewing experience while maintaining 2D viewing capability. Transparent electrodes were laser-patterned to achieve single-pixel lens resolution, and a highly birefringent LC material was used to realise a large diffraction angle for a wide field of view. Simulation was carried out to model the intensity profile at the viewing plane and optimise the lens array based on the measured LC phase profile. The measured viewing angle and intensity profile were compared with the simulation results.
View angle effect in LANDSAT imagery
NASA Technical Reports Server (NTRS)
Kaneko, T.; Engvall, J. L.
1977-01-01
The view angle effect in LANDSAT 2 imagery was investigated. The LANDSAT multispectral scanner scans over a range of view angles of -5.78 to 5.78 degrees. The view angle effect, which is caused by differing view angles, could be studied by comparing data collected at different view angles over a fixed location at a fixed time. Since such LANDSAT data is not available, consecutive day acquisition data were used as a substitute: they were collected over the same geographical location, acquired 24 hours apart, with a view angle change of 7 to 8 degrees at a latitude of 35 to 45 degrees. It is shown that there is approximately a 5% reduction in the average sensor response on the second-day acquisitions as compared with the first-day acquisitions, and that the view angle effect differs field to field and crop to crop. On false infrared color pictures the view angle effect causes changes primarily in brightness and to a lesser degree in color (hue and saturation). An implication is that caution must be taken when images with different view angles are combined for classification and a signature extension technique needs to take the view angle effect into account.
Sun-view angle effects on reflectance factors of corn canopies
NASA Technical Reports Server (NTRS)
Ranson, K. J.; Daughtry, C. S. T.; Biehl, L. L.; Bauer, M. E.
1985-01-01
The effects of sun and view angles on reflectance factors of corn (Zea mays L.) canopies ranging from the six-leaf stage to harvest maturity were studied at the Purdue University Agronomy Farm with a multiband radiometer. The two methods of acquiring spectral data, the truck system and the tower system, are described. The analysis of the spectral data is presented in three parts: solar angle effects on reflectance factors viewed at nadir; solar angle effects on reflectance factors viewed at a fixed sun angle; and the effects of both sun and view angles on reflectance factors. The analysis revealed that for nadir-viewed reflectance factors there is a strong solar angle dependence in all spectral bands for canopies with low leaf area index. Reflectance factors observed from the sun angle at different view azimuth angles showed that the position of the sensor relative to the sun is important in determining angular reflectance characteristics. For both sun and view angles, reflectance factors are maximized when the sensor view direction is towards the sun.
Mitigation of tropospheric InSAR phase artifacts through differential multisquint processing
NASA Technical Reports Server (NTRS)
Chen, Curtis W.
2004-01-01
We propose a technique for mitigating tropospheric phase errors in repeat-pass interferometric synthetic aperture radar (InSAR). The mitigation technique is based upon the acquisition of multisquint InSAR data. On each satellite pass over a target area, the radar instrument will acquire images from multiple squint (azimuth) angles, from which multiple interferograms can be formed. The diversity of viewing angles associated with the multisquint acquisition can be used to solve for two components of the 3-D surface displacement vector as well as for the differential tropospheric phase. We describe a model for the performance of the multisquint technique, and we present an assessment of the performance expected.
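A minimal sketch of the inversion that the abstract implies (several squint-angle interferograms jointly constrain two displacement components plus a common differential tropospheric phase); the line-of-sight vectors, wavelength, noise level, and the strictly linear model are my simplifications for illustration, not the paper's formulation:

```python
import numpy as np

wavelength = 0.236                     # m, roughly L-band (assumed)
los = np.array([                       # unit line-of-sight vectors (east, north, up)
    [0.38, -0.10, -0.92],              # three squint angles -- illustrative values
    [0.38,  0.00, -0.92],
    [0.38,  0.10, -0.92],
])
true_d = np.array([0.02, 0.01, 0.0])   # m, surface displacement (up component zero here)
true_tropo = 0.8                       # rad, differential tropospheric phase

# Simulated unwrapped phases: (4*pi/lambda) * (los . d) + tropo + noise.
phase = (4 * np.pi / wavelength) * los @ true_d + true_tropo
phase += np.random.default_rng(1).normal(0, 0.05, phase.shape)

# Least-squares solve for east/north displacement and the common tropospheric term.
A = np.column_stack([(4 * np.pi / wavelength) * los[:, 0],
                     (4 * np.pi / wavelength) * los[:, 1],
                     np.ones(len(los))])
est, *_ = np.linalg.lstsq(A, phase, rcond=None)
print("east, north displacement (m):", est[:2], " tropo (rad):", est[2])
```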
Partially-overlapped viewing zone based integral imaging system with super wide viewing angle.
Xiong, Zhao-Long; Wang, Qiong-Hua; Li, Shu-Li; Deng, Huan; Ji, Chao-Chao
2014-09-22
In this paper, we analyze the relationship between the viewer and the viewing zones of an integral imaging (II) system and present a partially-overlapped viewing zone (POVZ) based integral imaging system with a super-wide viewing angle. In the proposed system, the viewing angle can be wider than that of the conventional tracking-based II system. In addition, the POVZ eliminates the flipping and time delay of the 3D scene as well. The proposed II system has a super-wide viewing angle of 120° without the flipping effect, about twice as wide as that of the conventional system.
Preferred viewing distance and screen angle of electronic paper displays.
Shieh, Kong-King; Lee, Der-Song
2007-09-01
This study explored the viewing distance and screen angle for electronic paper (E-Paper) displays under various light sources, ambient illuminations, and character sizes. Data analysis showed that the mean viewing distance and screen angle were 495 mm and 123.7 degrees. The mean viewing distance for the Kolin cholesteric liquid crystal display was 500 mm, significantly longer than that for the Sony electronic ink display, 491 mm. The screen angle for Kolin was 127.4 degrees, significantly greater than that for Sony, 120.0 degrees. The various light sources revealed no significant effect on viewing distance; nevertheless, they showed a significant effect on screen angle. The screen angle for the sunlight lamp (D65) was similar to that for the fluorescent lamp (TL84), but greater than that for the tungsten lamp (F). Ambient illumination and E-Paper type had significant effects on viewing distance and screen angle. The higher the ambient illumination, the longer the viewing distance and the smaller the screen angle. Character size had a significant effect on viewing distance: the larger the character size, the longer the viewing distance. The results of this study indicated that the viewing distance for E-Paper was similar to that for a visual display terminal (VDT), at around 500 mm, but greater than that for normal paper, at about 360 mm. The mean screen angle was around 123.7 degrees, which in terms of viewing angle is 29.5 degrees below horizontal eye level. This result is similar to the generally suggested viewing angle of between 20 degrees and 50 degrees below the horizontal line of sight.
Multi-Objective Optimization of Spacecraft Trajectories for Small-Body Coverage Missions
NASA Technical Reports Server (NTRS)
Hinckley, David, Jr.; Englander, Jacob; Hitt, Darren
2017-01-01
Visual coverage of the surface elements of a small-body object requires multiple images that meet many requirements on their viewing angles, illumination angles, times of day, and combinations thereof. Designing trajectories that maximize total possible coverage may not be useful, since doing so takes no account of the image target sequence or of the feasibility of that sequence given the rotation-rate limitations of the spacecraft. This work presents a means of optimizing, in a multi-objective manner, surface target sequences that account for such limitations.
NASA Technical Reports Server (NTRS)
Fymat, A. L.
1975-01-01
Instrument is based on inverse solution of equations for light scattered by a transparent medium. Measurements are taken over several angles of incidence rather than over several frequencies. Measurements can be used to simultaneously determine chemical and physical properties of particles in mixed gas or liquid.
Bidirectional measurements of surface reflectance for view angle corrections of oblique imagery
NASA Technical Reports Server (NTRS)
Jackson, R. D.; Teillet, P. M.; Slater, P. N.; Fedosejevs, G.; Jasinski, Michael F.
1990-01-01
An apparatus for acquiring bidirectional reflectance-factor data was constructed and used over four surface types. Data sets were obtained over a headed wheat canopy, bare soil having several different roughness conditions, playa (dry lake bed), and gypsum sand. Results are presented in terms of relative bidirectional reflectance factors (BRFs) as a function of view angle at a number of solar zenith angles, nadir BRFs as a function of solar zenith angles, and, for wheat, vegetation indices as related to view and solar zenith angles. The wheat canopy exhibited the largest BRF changes with view angle. BRFs for the red and the NIR bands measured over wheat did not have the same relationship with view angle. NIR/Red ratios calculated from nadir BRFs changed by nearly a factor of 2 when the solar zenith angle changed from 20 to 50 degs. BRF versus view angle relationships were similar for soils having smooth and intermediate rough surfaces but were considerably different for the roughest surface. Nadir BRF versus solar-zenith angle relationships were distinctly different for the three soil roughness levels. Of the various surfaces, BRFs for gypsum sand changed the least with view angle (10 percent at 30 degs).
NASA Astrophysics Data System (ADS)
Markiet, Vincent; Perheentupa, Viljami; Mõttus, Matti; Hernández-Clemente, Rocío
2016-04-01
Imaging spectroscopy is a remote sensing technology which records continuous spectral data at a very high (better than 10 nm) resolution. Such spectral images can be used to monitor, for example, the photosynthetic activity of vegetation. Photosynthetic activity depends on varying light conditions and varies within the canopy. To measure this variation we need very high spatial resolution data, with resolution better than the dominant canopy element size (e.g., the tree crown in a forest canopy). This is useful, e.g., for detecting photosynthetic downregulation and thus plant stress. Canopy illumination conditions are often quantified using the shadow fraction: the fraction of visible foliage which is not sunlit. The shadow fraction is known to depend on view angle (e.g., hot spot images have a very low shadow fraction). Hence, multiple observation angles potentially increase the range of shadow fraction in high spatial resolution imaging spectroscopy data. To investigate the potential of multi-angle imaging spectroscopy for studying canopy processes which vary with shadow fraction, we obtained a unique multiangular airborne imaging spectroscopy data set for the Hyytiälä forest research station located in Finland (61° 50'N, 24° 17'E) in July 2015. The main tree species are Norway spruce (Picea abies L. Karst.), Scots pine (Pinus sylvestris L.) and birch (Betula pubescens Ehrh., Betula pendula Roth). We used an airborne hyperspectral sensor, AISA Eagle II (Specim - Spectral Imaging Ltd., Finland), mounted on a tilting platform. The tilting platform allowed us to measure at nadir and approximately 35 degrees off-nadir. The hyperspectral sensor has a 37.5-degree field of view (FOV), 0.6 m pixel size, and 128 spectral bands with an average spectral bandwidth of 4.6 nm, and is sensitive in the 400-1000 nm spectral region. The airborne data were radiometrically, atmospherically and geometrically processed using the PARGE and ATCOR software (ReSe Applications Schläpfer, Switzerland). However, even after meticulous geolocation, the canopy elements (needles) seen from the three view angles were different: at each overpass, different parts of the same crowns were observed. To overcome this, we used a 200 m x 200 m test site covered with pure pine stands. We assumed that the sunlit, shaded and understory spectral signatures are independent of viewing direction to the accuracy of a constant BRDF factor. Thus, we compared the spectral signatures for sunlit and shaded canopy and understory obtained for each view direction. We visually selected six hundred of the brightest and darkest canopy pixels. Next, we performed a minimum noise fraction (MNF) transformation, created a pixel purity index (PPI), and used ENVI's n-D scatter plot to determine pure spectral signatures for the two classes. The pure endmembers for different view angles were compared to determine the BRDF factor and to analyze its spectral invariance. We demonstrate the compatibility of multi-angle data with high spatial resolution data. In principle, both carry similar information on structured (non-flat) targets such as a vegetation canopy. Nevertheless, multiple view angles helped us to extend the range of shadow fraction in the images. Also, correct separation of shaded crown and shaded understory pixels remains a challenge.
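A minimal sketch of the endmember comparison described (per-band ratio of a pure-class spectrum extracted from the off-nadir flight line to the same class at nadir, giving the constant BRDF factor whose spectral flatness can then be inspected); the array names, shapes, and the synthetic spectra are assumptions for illustration:

```python
import numpy as np

def brdf_factor(endmember_offnadir, endmember_nadir):
    """Per-band ratio of a pure-class spectrum (e.g., sunlit crown) seen
    off-nadir to the same class seen at nadir. If the constant-BRDF-factor
    assumption holds, the ratio should be nearly flat across wavelength."""
    ratio = np.asarray(endmember_offnadir, float) / np.asarray(endmember_nadir, float)
    flatness = np.std(ratio) / np.mean(ratio)   # coefficient of variation across bands
    return ratio, flatness

# Hypothetical 128-band endmember spectra (reflectance) for the sunlit-crown class.
bands = 128
nadir = 0.05 + 0.4 / (1 + np.exp(-(np.arange(bands) - 80) / 6.0))   # red-edge-like shape
offnadir = 1.15 * nadir                                             # ideal constant factor
ratio, cv = brdf_factor(offnadir, nadir)
print(f"mean BRDF factor: {ratio.mean():.3f}, spectral variation (CV): {cv:.4f}")
```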
NASA Technical Reports Server (NTRS)
Hoffer, R. M. (Principal Investigator)
1979-01-01
The spatial characteristics of the data were evaluated. A program was developed to reduce the spatial distortions resulting from variable viewing distance, and geometrically adjusted data sets were generated. The potential need for some level of radiometric adjustment was evidenced by an along-track band of high reflectance across different cover types in the Varian imagery. A multiple regression analysis was employed to explore the viewing angle effect on measured reflectance. Areas in the data set which appeared to have no across-track stratification of cover type were identified. A program was developed which computed the average reflectance by column for each channel over all of the scan lines in the designated areas. A regression analysis was then run using first-, second-, and third-degree polynomials for each channel. An atmospheric effect as a component of the viewing-angle source of variance is discussed. Cover type maps were completed and training and test field selection was initiated.
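A minimal sketch of the column-average regression described (mean reflectance per scan column, fitted with first- to third-degree polynomials to characterize the across-track view-angle trend); the array shape, the synthetic data, and the use of numpy.polynomial are my choices, not details from the report:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def view_angle_trend(image_band, max_degree=3):
    """image_band: 2-D array (scan lines x columns) of reflectance for one channel.
    Returns the per-column mean and polynomial fits of that mean vs. column index."""
    col_mean = image_band.mean(axis=0)            # average over all scan lines
    cols = np.arange(image_band.shape[1], dtype=float)
    fits = {deg: P.polyfit(cols, col_mean, deg) for deg in range(1, max_degree + 1)}
    return col_mean, fits

# Synthetic example: a gentle across-track brightness gradient plus noise.
rng = np.random.default_rng(0)
band = 0.20 + 0.0004 * np.arange(512)[None, :] + rng.normal(0, 0.01, (300, 512))
col_mean, fits = view_angle_trend(band)
print("cubic fit coefficients (lowest degree first):", fits[3])
```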
Yamashita, Wakayo; Wang, Gang; Tanaka, Keiji
2010-01-01
One usually fails to recognize an unfamiliar object across changes in viewing angle when it has to be discriminated from similar distractor objects. Previous work has demonstrated that after long-term experience in discriminating among a set of objects seen from the same viewing angle, immediate recognition of the objects across 30-60 degree changes in viewing angle becomes possible. The capability for view-invariant object recognition should develop during the within-viewing-angle discrimination, which includes two kinds of experience: seeing individual views and discriminating among the objects. The aim of the present study was to determine the relative contribution of each factor to the development of view-invariant object recognition capability. Monkeys were first extensively trained in a task that required view-invariant object recognition (Object task) with several sets of objects. The animals were then exposed to a new set of objects over 26 days in one of two preparatory tasks: one in which each object view was seen individually, and a second that required discrimination among the objects at each of four viewing angles. After the preparatory period, we measured the monkeys' ability to recognize the objects across changes in viewing angle, by introducing the object set to the Object task. Results indicated significant view-invariant recognition after the second but not the first preparatory task. These results suggest that discrimination of objects from distractors at each of several viewing angles is required for the development of view-invariant recognition of the objects when the distractors are similar to the objects.
Wada, Keizo; Hamada, Daisuke; Tamaki, Shunsuke; Higashino, Kosaku; Fukui, Yoshihiro; Sairyo, Koichi
2017-01-01
Previous studies have suggested that changes in kinematics after total knee arthroplasty (TKA) affect satisfaction level. The aim of this cadaveric study was to evaluate the effect of medial collateral ligament (MCL) release by multiple needle puncture on knee rotational kinematics in posterior-stabilized TKA. Six fresh-frozen cadaveric knees were included in this study. All TKA procedures were performed with an image-free navigation system using a 10-mm polyethylene insert. Tibial internal rotation was assessed to evaluate intraoperative knee kinematics. Multiple needle puncturing was performed 5, 10, and 15 times on the hard portion of the MCL at 90° knee flexion. Kinematic analysis was performed after every 5 punctures. After 15 punctures, a 14-mm polyethylene insert was inserted, and kinematic analysis was performed. The tibial internal rotation angle at maximum knee flexion without multiple needle puncturing was significantly larger (9.42°) than that after 15 punctures (3°). A negative correlation (Pearson r = -0.715, P < .001) between the tibial internal rotation angle at maximum knee flexion and the number of punctures was observed. The tibial internal rotation angle with a 14-mm insert was significantly larger (7.25°) compared with the angle after 15 punctures. Tibial internal rotation during knee flexion was reduced by extensive MCL release using multiple needle puncturing and was recovered by increasing medial tightness. From the point of view of knee kinematics, some medial tightness should be allowed in order to maintain the internal rotation angle of the tibia during knee flexion, which might lead to patient satisfaction. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Baek, Jong-In; Kim, Ki-Han; Kim, Jae Chang; Yoon, Tae-Hoon
2010-01-01
This paper proposes a method of omni-directional viewing-angle switching by controlling the beam diverging angle (BDA) in a liquid crystal (LC) panel. The LCs, aligned randomly by in-cell polymer structures, diffuse the collimated backlight for the bright state of the wide viewing-angle mode. We align the LCs homogeneously by applying an in-plane field for the narrow viewing-angle mode. In this configuration the scattering is significantly reduced, so the small BDA is maintained as the light passes through the LC layer. The dark state can be obtained by aligning the LCs homeotropically with a vertical electric field. We experimentally demonstrated omni-directional switching of the viewing angle without an additional panel or backlight system.
Limited-angle tomography for analyzer-based phase-contrast X-ray imaging
Majidi, Keivan; Wernick, Miles N; Li, Jun; Muehleman, Carol; Brankov, Jovan G
2014-01-01
Multiple-image radiography (MIR) is an analyzer-based phase-contrast X-ray imaging (ABI) method, which is emerging as a potential alternative to conventional radiography. MIR simultaneously generates three planar parametric images containing information about the scattering, refraction, and attenuation properties of the object. The MIR planar images are linear tomographic projections of the corresponding object properties, which allows reconstruction of volumetric images using computed tomography (CT) methods. However, when acquiring a full range of linear projections around the tissue of interest is not feasible or the scanning time is limited, limited-angle tomography techniques can be used to reconstruct these volumetric images near the central plane, which is the plane that contains the pivot point of the tomographic movement. In this work, we use computer simulations to explore the applicability of limited-angle tomography to MIR. We also investigate the accuracy of reconstructions as a function of the number of tomographic angles for a fixed total radiation exposure. We use this function to find an optimal range of angles over which data should be acquired for limited-angle tomography MIR (LAT-MIR). Next, we apply the LAT-MIR technique to experimentally acquired MIR projections obtained in a cadaveric human thumb study. We compare the reconstructed slices near the central plane to the same slices reconstructed by CT-MIR using the full angular view around the object. Finally, we perform a task-based evaluation of LAT-MIR performance for different numbers of angular views, and use template matching to detect cartilage in the refraction image near the central plane. We use the signal-to-noise ratio of this test as the detectability metric to investigate an optimum range of tomographic angles for detecting soft tissues in LAT-MIR. Both results show that there is an optimum range of angular views for data acquisition where LAT-MIR yields the best performance, comparable to CT-MIR only if one considers volumetric images near the central plane and not the whole volume. PMID:24898008
Jiao, Leizi; Dong, Daming; Zhao, Xiande; Han, Pengcheng
2016-12-01
In this study, we proposed an animal surface temperature measurement method based on a Kinect sensor and an infrared thermal imager to facilitate the screening of animals with febrile diseases. Because of the random motion and small surface temperature variation of animals, the influence of the angle of view on temperature measurement is significant. The method proposed in the present study can compensate for the temperature measurement error caused by the angle of view. First, we analyzed the relationship between measured temperature and angle of view and established a mathematical model for compensating the influence of the angle of view, with a correlation coefficient above 0.99. Second, a fusion method for depth and infrared thermal images was established for synchronous image capture with the Kinect sensor and infrared thermal imager, and the angle of view of each pixel was calculated. According to the experimental results, without compensation, the temperature image measured at an angle of view of 74° to 76° differed by more than 2°C from that measured at an angle of view of 0°. After compensation, the temperature difference was only 0.03-1.2°C. This method is applicable for real-time compensation of errors caused by the angle of view during temperature measurement with an infrared thermal imager. Copyright © 2016 Elsevier Ltd. All rights reserved.
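The compensation step described above can be illustrated with a short sketch. The paper's exact model form is not reproduced here, so the code below simply fits a polynomial ratio of measured to reference temperature against view angle from calibration data and applies it per pixel; the calibration values and the cosine-like fall-off are made-up placeholders.

    # Hypothetical per-pixel view-angle compensation for an IR temperature image,
    # assuming a polynomial correction model fitted to calibration data.
    import numpy as np

    def fit_compensation_model(angles_deg, measured_t, reference_t, degree=3):
        # Fit the measured/reference temperature ratio as a polynomial in view angle.
        return np.polyfit(angles_deg, measured_t / reference_t, degree)

    def compensate(temperature_img, angle_img_deg, coeffs):
        # Divide each pixel by the fitted ratio for its view angle (from the depth map).
        return temperature_img / np.polyval(coeffs, angle_img_deg)

    # Synthetic calibration data (angles 0-80 deg) with a mock fall-off toward large angles.
    angles = np.linspace(0.0, 80.0, 17)
    reference = np.full_like(angles, 38.0)                      # true surface temperature, deg C
    measured = reference * (1.0 - 0.2 * (1.0 - np.cos(np.radians(angles))))
    coeffs = fit_compensation_model(angles, measured, reference)
    corrected = compensate(measured, angles, coeffs)            # close to 38.0 at every angle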
Gaze and viewing angle influence visual stabilization of upright posture
Ustinova, KI; Perkins, J
2011-01-01
Focusing gaze on a target helps stabilize upright posture. We investigated how this visual stabilization can be affected by observing a target presented under different gaze and viewing angles. In a series of 10-second trials, participants (N = 20, 29.3 ± 9 years of age) stood on a force plate and fixed their gaze on a figure presented on a screen at a distance of 1 m. The figure changed position (gaze angle: eye level (0°), 25° up or down), vertical body orientation (viewing angle: at eye level but rotated 25° as if leaning toward or away from the participant), or both (gaze and viewing angle: 25° up or down with the rotation equivalent of a natural visual perspective). Amplitude of participants’ sagittal displacement, surface area, and angular position of the center of gravity (COG) were compared. Results showed decreased COG velocity and amplitude for up and down gaze angles. Changes in viewing angles resulted in altered body alignment and increased amplitude of COG displacement. No significant changes in postural stability were observed when both gaze and viewing angles were altered. Results suggest that both the gaze angle and viewing perspective may be essential variables of the visuomotor system modulating postural responses. PMID:22398978
Dual-mode switching of a liquid crystal panel for viewing angle control
NASA Astrophysics Data System (ADS)
Baek, Jong-In; Kwon, Yong-Hoan; Kim, Jae Chang; Yoon, Tae-Hoon
2007-03-01
The authors propose a method to control the viewing angle of a liquid crystal (LC) panel using dual-mode switching. To realize both wide viewing angle (WVA) characteristics and narrow viewing angle (NVA) characteristics with a single LC panel, the authors use two different dark states. The LC layer can be aligned homogeneously parallel to the transmission axis of the bottom polarizer for WVA dark state operation, while it can be aligned vertically for NVA dark state operation. The authors demonstrated that viewing angle control can be achieved with a single panel without any loss of contrast at the front.
NASA Technical Reports Server (NTRS)
vanLeeuwen, W. J. D.; Huete, A. R.; Duncan, J.; Franklin, J.
1994-01-01
A shrub savannah landscape in Niger was optically characterized utilizing blue, green, red, and near-infrared wavelengths. Selected vegetation indices were evaluated for their performance and sensitivity in describing the complex Sahelian soil/vegetation canopies. Bidirectional reflectance factors (BRF) of plants and soils were measured at several view angles and used as input to various vegetation indices. Both soil and vegetation targets had strong anisotropic reflectance properties, rendering all vegetation index (VI) responses a direct function of sun and view geometry. Soil background influences were shown to alter the response of most vegetation indices. N-space greenness had the smallest dynamic range in VI response, but the n-space brightness index provided additional useful information. The global environmental monitoring index (GEMI) showed a large VI dynamic range for bare soils, which is undesirable for a vegetation index. The view angle responses of the normalized difference vegetation index (NDVI), atmospherically resistant vegetation index (ARVI), and soil-adjusted and atmospherically resistant vegetation index (SARVI) were asymmetric about nadir for multiple view angles and were, except for the SARVI, seriously altered by soil moisture and/or soil brightness effects. The soil-adjusted vegetation index (SAVI) was least affected by surface soil moisture and was symmetric about nadir for grass vegetation covers. Overall, the SAVI, SARVI, and the n-space vegetation index performed best under all adverse conditions and were recommended for monitoring vegetation growth in the sparsely vegetated Sahelian zone.
Effect of image scaling on stereoscopic movie experience
NASA Astrophysics Data System (ADS)
Häkkinen, Jukka P.; Hakala, Jussi; Hannuksela, Miska; Oittinen, Pirkko
2011-03-01
Camera separation affects the perceived depth in stereoscopic movies. Through control of the separation and thereby the depth magnitudes, the movie can be kept comfortable but interesting. In addition, the viewing context has a significant effect on the perceived depth, as a larger display and longer viewing distances also contribute to an increase in depth. Thus, if the content is to be viewed in multiple viewing contexts, the depth magnitudes should be carefully planned so that the content always looks acceptable. Alternatively, the content can be modified for each viewing situation. To identify the significance of changes due to the viewing context, we studied the effect of stereoscopic camera base distance on the viewer experience in three different situations: 1) small sized video and a viewing distance of 38 cm, 2) television and a viewing distance of 158 cm, and 3) cinema and a viewing distance of 6-19 meters. We examined three different animations with positive parallax. The results showed that the camera distance had a significant effect on the viewing experience in small display/short viewing distance situations, in which the experience ratings increased until the maximum disparity in the scene was 0.34 - 0.45 degrees of visual angle. After 0.45 degrees, increasing the depth magnitude did not affect the experienced quality ratings. Interestingly, changes in the camera distance did not affect the experience ratings in the case of television or cinema if the depth magnitudes were below one degree of visual angle. When the depth was greater than one degree, the experience ratings began to drop significantly. These results indicate that depth magnitudes have a larger effect on the viewing experience with a small display. When a stereoscopic movie is viewed from a larger display, other experiences might override the effect of depth magnitudes.
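Since the thresholds above are expressed in degrees of visual angle, a small helper that converts an on-screen parallax to visual angle at a given viewing distance makes the comparison between viewing contexts concrete. This is a generic geometric conversion, not a formula quoted from the paper, and the parallax and distance values are illustrative.

    # Convert an on-screen horizontal parallax to degrees of visual angle.
    import math

    def parallax_to_visual_angle_deg(parallax_m, viewing_distance_m):
        # Angle subtended at the eye by a separation of parallax_m seen at viewing_distance_m.
        return math.degrees(2.0 * math.atan(parallax_m / (2.0 * viewing_distance_m)))

    # The same 3 mm screen parallax seen handheld (0.38 m) versus on a TV (1.58 m):
    print(parallax_to_visual_angle_deg(0.003, 0.38))   # about 0.45 degrees
    print(parallax_to_visual_angle_deg(0.003, 1.58))   # about 0.11 degrees

In practice the on-screen parallax itself scales with display size, which is why the same content can land on different sides of these comfort thresholds in different viewing contexts.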
Polarimetric Imaging for the Detection of Disturbed Surfaces
2009-06-01
List of figures (excerpt): Figure 4. Rayleigh Roughness Criterion as a Function of Incident Angle; Figure 5. Definition of Geometrical Terms (after Egan & Hallock, 1966); Figure 6. Haleakala Ash Depolarization for (a) 0° Viewing Angle and (b) 60° Viewing Angle (from Egan et al., 1968); Figure 7. Basalt Depolarization at (a) 0° Viewing Angle and ...
Kim, Hwi; Hahn, Joonku; Choi, Hee-Jin
2011-04-10
We investigate the viewing angle enhancement of a lenticular three-dimensional (3D) display with a triplet lens array. The theoretical limitations of the viewing angle and view number of the lenticular 3D display with the triplet lens array are analyzed numerically. For this, a genetic-algorithm-based design method for the triplet lens is developed. We show that a lenticular 3D display with a viewing angle of 120° and 144 views without inter-view crosstalk can be realized with the use of an optimally designed triplet lens array. © 2011 Optical Society of America
Multi-layer Clouds Over the South Indian Ocean
NASA Technical Reports Server (NTRS)
2003-01-01
The complex structure and beauty of polar clouds are highlighted by these images acquired by the Multi-angle Imaging SpectroRadiometer (MISR) on April 23, 2003. These clouds occur at multiple altitudes and exhibit a noticeable cyclonic circulation over the Southern Indian Ocean, to the north of Enderby Land, East Antarctica. The image at left was created by overlaying a natural-color view from MISR's downward-pointing (nadir) camera with a color-coded stereo height field. MISR retrieves heights by a pattern recognition algorithm that utilizes multiple view angles to derive cloud height and motion. The opacity of the height field was then reduced until the field appears as a translucent wash over the natural-color image. The resulting purple, cyan, and green hues of this aesthetic display indicate low, medium, or high altitudes, respectively, with heights ranging from less than 2 kilometers (purple) to about 8 kilometers (green). In the lower right corner, the edge of the Antarctic coastline and some sea ice can be seen through some thin, high cirrus clouds. The right-hand panel is a natural-color image from MISR's 70-degree backward-viewing camera. This camera looks backwards along the path of Terra's flight, and in the southern hemisphere the Sun is in front of this camera. This perspective causes the cloud tops to be brightly outlined by the Sun behind them, and enhances the shadows cast by clouds with significant vertical structure. An oblique observation angle also enhances the reflection of light by atmospheric particles and accentuates the appearance of polar clouds. The dark ocean and sea ice that were apparent through the cirrus clouds at the bottom right corner of the nadir image are overwhelmed by the brightness of these clouds at the oblique view. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17794. The panels cover an area of 335 kilometers x 605 kilometers, and utilize data from blocks 142 to 145 within World Reference System-2 path 155. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Directional infrared temperature and emissivity of vegetation: Measurements and models
NASA Technical Reports Server (NTRS)
Norman, J. M.; Castello, S.; Balick, L. K.
1994-01-01
Directional thermal radiance from vegetation depends on many factors, including the architecture of the plant canopy, thermal irradiance, emissivity of the foliage and soil, view angle, slope, and the kinetic temperature distribution within the vegetation-soil system. A one-dimensional model, which includes the influence of topography, indicates that thermal emissivity of vegetation canopies may remain constant with view angle, or emissivity may increase or decrease as view angle from nadir increases. Typically, variations of emissivity with view angle are less than 0.01. As view angle increases away from nadir, directional infrared canopy temperature usually decreases but may remain nearly constant or even increase. Variations in directional temperature with view angle may be 5°C or more. Model predictions of directional emissivity are compared with field measurements in corn canopies and over a bare soil using a method that requires two infrared thermometers, one sensitive to the 8 to 14 micrometer wavelength band and a second to the 14 to 22 micrometer band. After correction for CO2 absorption by the atmosphere, a directional canopy emissivity can be obtained as a function of view angle in the 8 to 14 micrometer band to an accuracy of about 0.005. Modeled and measured canopy emissivities for corn varied slightly with view angle (0.990 at nadir and 0.982 at 75 deg view zenith angle) and did not appear to vary significantly with view angle for the bare soil. Canopy emissivity is generally nearer to unity than leaf emissivity. At high spectral resolution, canopy thermal emissivity may vary by 0.02 with wavelength even though leaf emissivity may vary by 0.07. The one-dimensional model provides reasonably accurate predictions of infrared temperature and can be used to study the dependence of infrared temperature on various plant, soil, and environmental factors.
2007-03-01
front of a large area blackbody as background. The viewing angle, defined as the angle between surface normal and camera line of sight, was varied by ... and polarization angle were derived from the Stokes parameters. The dependence of these polarization characteristics on viewing angle was investigated
Digital sun sensor multi-spot operation.
Rufino, Giancarlo; Grassi, Michele
2012-11-28
The operation and test of a multi-spot digital sun sensor for precise sun-line determination are described. The image-forming system consists of an opaque mask with multiple pinhole apertures producing multiple, simultaneous, spot-like images of the sun on the focal plane. The sun-line precision can be improved by averaging multiple simultaneous measures. Nevertheless, operating the sensor over a wide field of view requires acquiring and processing images in which the number of sun spots and the related intensity level vary widely. To this end, a reliable and robust image acquisition procedure based on a variable shutter time has been adopted, together with a calibration function that also exploits knowledge of the sun-spot array size. The main focus of the present paper is the experimental validation of the wide-field-of-view operation of the sensor using a sensor prototype and a laboratory test facility. Results demonstrate that it is possible to maintain high measurement precision even for large off-boresight angles.
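A minimal sketch of the averaging idea is given below, assuming the sun spots have already been segmented into boolean masks and that each pinhole's nominal offset and the mask focal length are known; the names and interfaces are illustrative, not the sensor's actual software.

    # Average per-spot sun-line estimates from a multi-aperture image.
    import numpy as np

    def spot_centroid(image, mask):
        # Intensity-weighted centroid of one sun spot selected by a boolean mask.
        ys, xs = np.nonzero(mask)
        w = image[ys, xs].astype(float)
        return np.array([np.sum(xs * w), np.sum(ys * w)]) / np.sum(w)

    def averaged_sun_direction(image, spot_masks, aperture_offsets, focal_length):
        # Each spot is referenced to its own pinhole; averaging N independent
        # estimates reduces random centroiding noise roughly as 1/sqrt(N).
        estimates = []
        for mask, offset in zip(spot_masks, aperture_offsets):
            shift = spot_centroid(image, mask) - np.asarray(offset, dtype=float)
            estimates.append(np.arctan2(shift, focal_length))   # two small-angle components
        return np.mean(estimates, axis=0)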
A Robust Mechanical Sensing System for Unmanned Sea Surface Vehicles
NASA Technical Reports Server (NTRS)
Kulczycki, Eric A.; Magnone, Lee J.; Huntsberger, Terrance; Aghazarian, Hrand; Padgett, Curtis W.; Trotz, David C.; Garrett, Michael S.
2009-01-01
The need for autonomous navigation and intelligent control of unmanned sea surface vehicles requires a mechanically robust sensing architecture that is watertight, durable, and insensitive to vibration and shock loading. The sensing system developed here comprises four black and white cameras and a single color camera. The cameras are rigidly mounted to a camera bar that can be reconfigured to mount multiple vehicles, and act as both navigational cameras and application cameras. The cameras are housed in watertight casings to protect them and their electronics from moisture and wave splashes. Two of the black and white cameras are positioned to provide lateral vision. They are angled away from the front of the vehicle at horizontal angles to provide ideal fields of view for mapping and autonomous navigation. The other two black and white cameras are positioned at an angle into the color camera's field of view to support vehicle applications. These two cameras provide an overlap, as well as a backup to the front camera. The color camera is positioned directly in the middle of the bar, aimed straight ahead. This system is applicable to any sea-going vehicle, both on Earth and in space.
Estimating big bluestem albedo from directional reflectance measurements
NASA Technical Reports Server (NTRS)
Irons, J. R.; Ranson, K. J.; Daughtry, C. S. T.
1988-01-01
Multidirectional reflectance factor measurements acquired in the summer of 1986 are used to make estimates of big bluestem grass albedo, evaluating the variation of albedo with changes in solar zenith angle and phenology. On any given day, the albedo was observed to increase by at least 19 percent as solar zenith angle increased. Changes in albedo were found to correspond to changes in the green leaf area index of the grass canopy. Estimates of albedo made using reflectance data acquired within only one or two azimuthal planes and at a restricted range of view zenith angles were evaluated and compared to 'true' albedos derived from all available reflectance factor data. It was found that even a limited amount of multiple-direction reflectance data was preferable to a single nadir reflectance factor for the estimation of prairie grass albedo.
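For reference, the "true" albedo used as the benchmark above is essentially a hemispherical integral of the bidirectional reflectance factors. A minimal numerical version of that integral (not the paper's exact estimator, and assuming BRFs sampled on a regular zenith/azimuth grid) looks like this:

    # Approximate directional-hemispherical albedo from gridded reflectance factors.
    import numpy as np

    def albedo_from_brf(brf, view_zenith_deg, view_azimuth_deg):
        # brf[i, j] is the reflectance factor at zenith i and azimuth j.
        theta = np.radians(view_zenith_deg)
        phi = np.radians(view_azimuth_deg)
        dtheta = theta[1] - theta[0]
        dphi = phi[1] - phi[0]
        integrand = brf * np.cos(theta)[:, None] * np.sin(theta)[:, None]
        return np.sum(integrand) * dtheta * dphi / np.pi

    # A Lambertian surface with BRF = 0.3 everywhere should return about 0.3:
    tz = np.linspace(0.0, 90.0, 19)
    az = np.linspace(0.0, 360.0, 36, endpoint=False)
    print(albedo_from_brf(np.full((19, 36), 0.3), tz, az))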
Gravitational Wakes Sizes from Multiple Cassini Radio Occultations of Saturn's Rings
NASA Astrophysics Data System (ADS)
Marouf, E. A.; Wong, K. K.; French, R. G.; Rappaport, N. J.; McGhee, C. A.; Anabtawi, A.
2016-12-01
Voyager and Cassini radio occultation extinction and forward scattering observations of Saturn's C-Ring and Cassini Division imply power-law particle size distributions extending from a few millimeters to several meters, with power-law index in the 2.8 to 3.2 range, depending on the specific ring feature. We extend size determination to the elongated and canted particle clusters (gravitational wakes) known to permeate Saturn's A- and B-Rings. We use multiple Cassini radio occultation observations over a range of ring opening angle B and wake viewing angle α to constrain the mean wake width W and thickness/height H, and the average ring area coverage fraction. The rings are modeled as a randomly blocked diffraction screen in the plane normal to the incidence direction. Collective particle shadows define the blocked area. The screen's transmittance is binary: blocked or unblocked. Wakes are modeled as a thin layer of elliptical cylinders populated by random but uniformly distributed spherical particles. The cylinders can be immersed in a "classical" layer of spatially uniformly distributed particles. Numerical simulations of model diffraction patterns reveal two distinct components: cylindrical and spherical. The first dominates at small scattering angles and originates from specific locations within the footprint of the spacecraft antenna on the rings. The second dominates at large scattering angles and originates from the full footprint. We interpret Cassini extinction and scattering observations in the light of the simulation results. We compute and remove the contribution of the spherical component to the observed scattered signal spectra assuming a known particle size distribution. A large residual spectral component is interpreted as the contribution of cylindrical (wake) diffraction. Its angular width determines a cylindrical shadow width that depends on the wake parameters (W, H) and the viewing geometry (α, B). Its strength constrains the mean fractional area covered (optical depth), and hence the mean wake spacing. Self-consistent (W, H) values are estimated using a least-squares fit to results from multiple occultations. Example results for observed scattering by several inner A-Ring features suggest particle clusters (wakes) that are a few tens of meters wide and several meters thick.
NASA Technical Reports Server (NTRS)
Donovan, Sheila
1985-01-01
A full evaluation of the bidirectional reflectance properties of different vegetated surfaces was limited in past studies by instrumental inadequacies. With the development of the PARABOLA, it is now possible to sample reflectances from a large number of view angles in a short period of time, maintaining an almost constant solar zenith angle. PARABOLA data collected over five different canopies in Texas are analyzed. The objective of this investigation was to evaluate the intercanopy and intracanopy differences in bidirectional reflectance patterns. Particular attention was given to the separability of canopy types using different view angles for the red and the near infrared (NIR) spectral bands. Comparisons were repeated for different solar zenith angles. Statistical and other quantitative techniques were used to assess these differences. For the canopies investigated, the greatest reflectances were found in the backscatter direction for both bands. Canopy discrimination was found to vary with both view angle and the spectral reflectance band considered, the forward scatter view angles being most suited to observations in the NIR and backscatter view angles giving better results in the red band. Because of different leaf angle distribution characteristics, discrimination was found to be better at small solar zenith angles in both spectral bands.
Changes in reflectance anisotropy of wheat crop during different phenophases
NASA Astrophysics Data System (ADS)
Lunagaria, Manoj M.; Patel, Haridas R.
2017-04-01
The canopy structure of wheat changes significantly with growth stage, leading to changes in reflectance anisotropy. The bidirectional reflectance distribution function characterises the reflectance anisotropy of a target and can be approximated from measured data. Spectrodirectional reflectance measurements of the wheat crop were acquired using a field goniometer system. The bidirectional reflectance spectra were acquired at 54 view angles to cover the hemispheric span up to a 60° view zenith angle. The observations were made from early growth stages until maturity of the crop. The anisotropy was not constant across wavelengths, and the anisotropy factors clearly revealed a spectral dependence, which was more pronounced near the principal plane. In the near infrared, the wheat canopy expressed less reflectance anisotropy because of higher multiple scattering. A broad hotspot signature was noticeable in the canopy reflectance whenever the view and solar angles were close. Distinct changes in the bidirectional reflectance distribution function were observed from booting to flowering stages as the canopy gains uniformity and height and the heads emerge. The function clearly reveals a bowl shape during the heading to early milking growth stages of the crop. Late growth stages show less prominent gap and shadow effects. The anisotropy index revealed that wheat exhibits changes in reflectance anisotropy with phenological development and across spectral bands.
BOREAS RSS-2 Level-1B ASAS Image Data: At-Sensor Radiance in BSQ Format
NASA Technical Reports Server (NTRS)
Russell, C.; Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Dabney, P. W.; Kovalick, W.; Graham, D.; Bur, Michael; Irons, James R.; Tierney, M.
2000-01-01
The BOREAS RSS-2 team used the ASAS instrument, mounted on the NASA C-130 aircraft, to create at-sensor radiance images of various sites as a function of spectral wavelength, view geometry (combinations of view zenith angle, view azimuth angle, solar zenith angle, and solar azimuth angle), and altitude. The level-1b ASAS images of the BOREAS study areas were collected from April to September 1994 and March to July 1996.
MISR Global Images See the Light of Day
NASA Technical Reports Server (NTRS)
2002-01-01
As of July 31, 2002, global multi-angle, multi-spectral radiance products are available from the MISR instrument aboard the Terra satellite. Measuring the radiative properties of different types of surfaces, clouds, and atmospheric particulates is an important step toward understanding the Earth's climate system. These images are among the first planet-wide summary views to be publicly released from the Multi-angle Imaging SpectroRadiometer experiment. Data for these images were collected during the month of March 2002, and each pixel represents monthly-averaged daylight radiances from an area measuring 1/2 degree in latitude by 1/2 degree in longitude. The top panel is from MISR's nadir (vertical-viewing) camera and combines data from the red, green, and blue spectral bands to create a natural color image. The central view combines near-infrared, red, and green spectral data to create a false-color rendition that enhances highly vegetated terrain. It takes 9 days for MISR to view the entire globe, and only areas within 8 degrees of latitude of the north and south poles are not observed due to the Terra orbit inclination. Because a single pole-to-pole swath of MISR data is just 400 kilometers wide, multiple swaths must be mosaicked to create these global views. Discontinuities appear in some cloud patterns as a consequence of changes in cloud cover from one day to another. The lower panel is a composite in which red, green, and blue radiances from MISR's 70-degree forward-viewing camera are displayed in the northern hemisphere, and radiances from the 70-degree backward-viewing camera are displayed in the southern hemisphere. At the March equinox (spring in the northern hemisphere, autumn in the southern hemisphere), the Sun is near the equator. Therefore, both oblique angles are observing the Earth in 'forward scattering', particularly at high latitudes. Forward scattering occurs when you (or MISR) observe an object with the Sun at a point in the sky that is in front of you. Relative to the nadir view, this geometry accentuates the appearance of polar clouds, and can even reveal clouds that are invisible in the nadir direction. In relatively clear ocean areas, the oblique-angle composite is generally brighter than its nadir counterpart due to enhanced reflection of light by atmospheric particulates. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Esthetic smile preferences and the orientation of the maxillary occlusal plane.
Kattadiyil, Mathew T; Goodacre, Charles J; Naylor, W Patrick; Maveli, Thomas C
2012-12-01
The anteroposterior orientation of the maxillary occlusal plane has an important role in the creation, assessment, and perception of an esthetic smile. However, the effect of the angle at which this plane is visualized (the viewing angle) in a broad smile has not been quantified. The purpose of this study was to assess the esthetic preferences of dental professionals and nondentists by using 3 viewing angles of the anteroposterior orientation of the maxillary occlusal plane. After Institutional Review Board approval, standardized digital photographic images of the smiles of 100 participants were recorded by simultaneously triggering 3 cameras set at different viewing angles. The top camera was positioned 10 degrees above the occlusal plane (camera #1, Top view); the center camera was positioned at the level of the occlusal plane (camera #2, Center view); and the bottom camera was located 10 degrees below the occlusal plane (camera #3, Bottom view). Forty-two dental professionals and 31 nondentists (persons from the general population) independently evaluated digital images of each participant's smile captured from the Top view, Center view, and Bottom view. The 73 evaluators were asked individually through a questionnaire to rank the 3 photographic images of each patient as 'most pleasing,' 'somewhat pleasing,' or 'least pleasing,' with most pleasing being the most esthetic view and the preferred orientation of the occlusal plane. The resulting esthetic preferences were statistically analyzed by using the Friedman test. In addition, the participants were asked to rank their own images from the 3 viewing angles as 'most pleasing,' 'somewhat pleasing,' and 'least pleasing.' The 73 evaluators found statistically significant differences in the esthetic preferences between the Top and Bottom views and between the Center and Bottom views (P<.001). No significant differences were found between the Top and Center views. The Top position was marginally preferred over the Center, and both were significantly preferred over the Bottom position. When the participants evaluated their own smiles, a significantly greater number (P< .001) preferred the Top view over the Center or the Bottom views. No significant differences were found in preferences based on the demographics of the evaluators when comparing age, education, gender, profession, and race. The esthetic preference for the maxillary occlusal plane was influenced by the viewing angle with the higher (Top) and center views preferred by both dental and nondental evaluators. The participants themselves preferred the higher view of their smile significantly more often than the center or lower angle views (P<.001). Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
CCD Camera Lens Interface for Real-Time Theodolite Alignment
NASA Technical Reports Server (NTRS)
Wake, Shane; Scott, V. Stanley, III
2012-01-01
Theodolites are a common instrument in the testing, alignment, and building of various systems ranging from a single optical component to an entire instrument. They provide a precise way to measure horizontal and vertical angles. They can be used to align multiple objects in a desired way at specific angles. They can also be used to reference a specific location or orientation of an object that has moved. Some systems may require a small margin of error in position of components. A theodolite can assist with accurately measuring and/or minimizing that error. The technology is an adapter for a CCD camera with lens to attach to a Leica Wild T3000 Theodolite eyepiece that enables viewing on a connected monitor, and thus can be utilized with multiple theodolites simultaneously. This technology removes a substantial part of human error by relying on the CCD camera and monitors. It also allows image recording of the alignment, and therefore provides a quantitative means to measure such error.
Large-viewing-angle electroholography by space projection
NASA Astrophysics Data System (ADS)
Sato, Koki; Obana, Kazuki; Okumura, Toshimichi; Kanaoka, Takumi; Nishikawa, Satoko; Takano, Kunihiko
2004-06-01
The hologram provides a full-parallax 3D image. In this case, we can obtain a more natural 3D image because focusing and convergence coincide with each other. We are trying to develop a practical electro-holography system, because in conventional electro-holography the image viewing angle is very small. This is due to the limited display pixel size. We are now developing a new method to achieve a large viewing angle by space projection. A white laser is irradiated onto a single DMD panel (a time-shared CGH of the three RGB colors). A 3D space screen constructed from very small water particles is used to reconstruct the 3D image with a large viewing angle through scattering by the water particles.
Takaki, Yasuhiro; Hayashi, Yuki
2008-07-01
The narrow viewing zone angle is one of the problems associated with electronic holography. We propose a technique that enables the ratio of horizontal and vertical resolutions of a spatial light modulator (SLM) to be altered. This technique increases the horizontal resolution of a SLM several times, so that the horizontal viewing zone angle is also increased several times. A SLM illuminated by a slanted point light source array is imaged by a 4f imaging system in which a horizontal slit is located on the Fourier plane. We show that the horizontal resolution was increased four times and that the horizontal viewing zone angle was increased approximately four times.
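The factor-of-four improvement follows from the standard grating relation between SLM pixel pitch and the maximum diffraction angle, theta = 2*arcsin(lambda/(2p)), which is nearly linear at small angles. The relation and the numbers below are generic illustrations, not values quoted from the paper.

    # Viewing-zone angle implied by an SLM pixel pitch.
    import math

    def viewing_zone_angle_deg(wavelength_m, pixel_pitch_m):
        return math.degrees(2.0 * math.asin(wavelength_m / (2.0 * pixel_pitch_m)))

    wavelength = 633e-9            # red laser line
    pitch = 8e-6                   # a typical SLM pixel pitch (assumed)
    print(viewing_zone_angle_deg(wavelength, pitch))        # about 4.5 degrees
    print(viewing_zone_angle_deg(wavelength, pitch / 4.0))  # about 18 degrees at 4x resolution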
C-band backscattering from corn canopies
NASA Technical Reports Server (NTRS)
Daughtry, C. S. T.; Ranson, K. J.; Biehl, L. L.
1991-01-01
A frequency-modulated continuous-wave C-band (4.8 GHz) scatterometer was mounted on an aerial lift truck, and backscatter coefficients of corn (Zea mays L.) were acquired as functions of polarization, view angle, and row direction. As phytomass and green-leaf area index increased, the backscatter also increased. Near anthesis, when the canopies were fully developed, the major scattering elements were located in the upper 1 m of the 2.8-m-tall canopy, and little backscatter was measured below that level for view angles of 30 deg or greater. C-band backscatter data could provide information to monitor tillage operations at small view zenith angles and vegetation at large view zenith angles.
The effect of viewing angle on the spectral behavior of a Gd plasma source near 6.7 nm
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Gorman, Colm; Li Bowen; Cummins, Thomas
2012-04-02
We have demonstrated the effect of viewing angle on the extreme ultraviolet (EUV) emission spectra of gadolinium (Gd) near 6.7 nm. The spectra are shown to have a strong dependence on viewing angle when produced with a laser pulse duration of 10 ns, which may be attributed to absorption by low ion stages of Gd and an angular variation in the ion distribution. Absorption effects are less pronounced at a 150-ps pulse duration due to reduced opacity resulting from plasma expansion. Thus for evaluating source intensity, it is necessary to allow for variation with both viewing angle and target orientation.
NASA Astrophysics Data System (ADS)
Gong, K.; Fritsch, D.
2018-05-01
Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized, public multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivates us to explore methods that can generate accurate digital surface models from a large number of high-resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a pre-processing step, we filter all possible image pairs according to the incidence angle and capture date. With the selected image pairs, the relative bias-compensated model is applied for relative orientation. After epipolar image pair generation, dense image matching, and triangulation, the 3D point clouds and DSMs are obtained. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model. We apply a median filter to generate the fused point cloud and DSM. By comparing with the reference LiDAR DSM, the accuracy, completeness, and robustness are evaluated. The results show that the point cloud reconstructs the surface including small structures and that the fused DSM generated by our pipeline is accurate and robust.
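The pair pre-selection step can be sketched as follows; the angle and date thresholds are hypothetical placeholders, since the paper's exact limits are not reproduced here.

    # Filter candidate stereo pairs by incidence-angle difference and capture-date gap.
    from datetime import date
    from itertools import combinations

    def select_pairs(images, max_angle_diff_deg=25.0, max_day_gap=30):
        # images: list of dicts with 'id', 'incidence_deg' and 'date' (datetime.date).
        pairs = []
        for a, b in combinations(images, 2):
            angle_ok = abs(a["incidence_deg"] - b["incidence_deg"]) <= max_angle_diff_deg
            date_ok = abs((a["date"] - b["date"]).days) <= max_day_gap
            if angle_ok and date_ok:
                pairs.append((a["id"], b["id"]))
        return pairs

    catalog = [
        {"id": "img_01", "incidence_deg": 12.0, "date": date(2015, 1, 10)},
        {"id": "img_02", "incidence_deg": 30.0, "date": date(2015, 1, 20)},
        {"id": "img_03", "incidence_deg": 14.0, "date": date(2015, 6, 1)},
    ]
    print(select_pairs(catalog))   # only ('img_01', 'img_02') passes both tests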
Automated comprehensive Adolescent Idiopathic Scoliosis assessment using MVC-Net.
Wu, Hongbo; Bailey, Chris; Rasoulinejad, Parham; Li, Shuo
2018-05-18
Automated quantitative estimation of spinal curvature is an important task for the ongoing evaluation and treatment planning of Adolescent Idiopathic Scoliosis (AIS). It addresses the widely accepted disadvantage of manual Cobb angle measurement (time-consuming and unreliable), which is currently the gold standard for AIS assessment. Attempts have been made to improve the reliability of automated Cobb angle estimation. However, it is very challenging to achieve accurate and robust estimation of Cobb angles due to the need to correctly identify all the required vertebrae in both anterior-posterior (AP) and lateral (LAT) view x-rays. The challenge is especially evident in LAT x-rays, where occlusion of vertebrae by the ribcage occurs. We therefore propose a novel Multi-View Correlation Network (MVC-Net) architecture that provides a fully automated end-to-end framework for spinal curvature estimation in multi-view (both AP and LAT) x-rays. The proposed MVC-Net uses our newly designed multi-view convolution layers to incorporate joint features of multi-view x-rays, which allows the network to mitigate the occlusion problem by utilizing the structural dependencies of the two views. The MVC-Net consists of three closely linked components: (1) a series of X-modules for joint representation of the spinal structure, (2) a Spinal Landmark Estimator network for robust spinal landmark estimation, and (3) a Cobb Angle Estimator network for accurate Cobb angle estimation. By utilizing an iterative multi-task training algorithm to train the Spinal Landmark Estimator and Cobb Angle Estimator in tandem, the MVC-Net leverages the multi-task relationship between landmark and angle estimation to reliably detect all the required vertebrae for accurate Cobb angle estimation. Experimental results on 526 x-ray images from 154 patients show an impressive 4.04° Circular Mean Absolute Error (CMAE) in AP Cobb angle and 4.07° CMAE in LAT Cobb angle estimation, which demonstrates the MVC-Net's capability of robust and accurate estimation of Cobb angles in multi-view x-rays. Our method therefore provides clinicians with a framework for efficient, accurate, and reliable estimation of spinal curvature for comprehensive AIS assessment. Copyright © 2018. Published by Elsevier B.V.
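To make the idea of a multi-view convolution layer more tangible, the sketch below fuses AP and LAT feature maps through concatenation and 1x1 convolutions. The layer sizes, fusion scheme, and names are illustrative assumptions, not the published MVC-Net definition.

    # Toy multi-view feature-fusion block in PyTorch (illustrative only).
    import torch
    import torch.nn as nn

    class MultiViewConvBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv_ap = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.conv_lat = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            # 1x1 convolutions mix the concatenated AP+LAT features back into each view.
            self.fuse_ap = nn.Conv2d(2 * channels, channels, kernel_size=1)
            self.fuse_lat = nn.Conv2d(2 * channels, channels, kernel_size=1)
            self.act = nn.ReLU(inplace=True)

        def forward(self, ap, lat):
            ap, lat = self.act(self.conv_ap(ap)), self.act(self.conv_lat(lat))
            joint = torch.cat([ap, lat], dim=1)      # joint representation of both views
            return self.act(self.fuse_ap(joint)), self.act(self.fuse_lat(joint))

    block = MultiViewConvBlock(channels=16)
    ap_feat, lat_feat = torch.randn(1, 16, 64, 32), torch.randn(1, 16, 64, 32)
    out_ap, out_lat = block(ap_feat, lat_feat)
    print(out_ap.shape, out_lat.shape)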
Qin, Zong; Wang, Kai; Chen, Fei; Luo, Xiaobing; Liu, Sheng
2010-08-02
In this research, the condition for uniform lighting generated by an array of LEDs with a large view angle was studied. The luminous intensity distribution of such an LED is not monotonically decreasing with view angle. An LED with a freeform lens was designed as an example for analysis. In a system based on LEDs designed in house, with a thickness of 20 mm and a rectangular arrangement, the condition for uniform lighting was derived, and the analytical results demonstrated that the uniformity does not decrease monotonically with increasing LED-to-LED spacing. The illuminance uniformities were calculated with Monte Carlo ray tracing simulations, and the uniformity was found to increase anomalously with the increase of certain LED-to-LED spacings. Another type of large-view-angle LED and different arrangements were discussed in addition. Both analysis and simulation results showed that the method is suitable for designing LED array lighting systems on the basis of large-view-angle LEDs.
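The kind of calculation involved can be sketched with a direct illuminance summation over the array, adding I(theta)*cos(theta)/d^2 per LED at each target point. The batwing-like intensity profile below is a made-up stand-in for the freeform-lens LED of the paper, and the geometry is simplified to a square array under a 20 mm gap.

    # Illuminance uniformity on a plane above a square LED array (illustrative model).
    import numpy as np

    def intensity(theta):
        # Made-up large-view-angle profile: peak shifted away from the optical axis.
        return np.cos(theta) * (1.0 + 2.0 * np.sin(theta) ** 2)

    def uniformity(pitch_mm, height_mm=20.0, n=8, grid=101):
        xs = (np.arange(n) - (n - 1) / 2.0) * pitch_mm           # LED x positions
        ys = xs.copy()                                           # square arrangement
        gx, gy = np.meshgrid(np.linspace(xs[0], xs[-1], grid),
                             np.linspace(ys[0], ys[-1], grid))
        E = np.zeros_like(gx)
        for lx in xs:
            for ly in ys:
                r2 = (gx - lx) ** 2 + (gy - ly) ** 2 + height_mm ** 2
                theta = np.arccos(height_mm / np.sqrt(r2))
                E += intensity(theta) * height_mm / r2 ** 1.5    # I(theta)*cos(theta)/d^2
        return E.min() / E.max()                                 # simple min/max uniformity

    for pitch in (10.0, 20.0, 30.0, 40.0):
        print(pitch, round(uniformity(pitch), 3))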
View angle dependence of cloud optical thicknesses retrieved by MODIS
NASA Technical Reports Server (NTRS)
Marshak, Alexander; Varnai, Tamas
2005-01-01
This study examines whether cloud inhomogeneity influences the view angle dependence of MODIS cloud optical thickness (tau) retrieval results. The degree of cloud inhomogeneity is characterized through the local gradient in 11-micron brightness temperature. The analysis of liquid phase clouds in a one-year-long global dataset of Collection 4 MODIS data reveals that while optical thickness retrievals give remarkably consistent results for all view directions if clouds are homogeneous, they give much higher tau-values for oblique views than for overhead views if clouds are inhomogeneous and the sun is fairly oblique. For solar zenith angles larger than 55 deg, the mean optical thickness retrieved for the most inhomogeneous third of cloudy pixels is more than 30% higher for oblique views than for overhead views. After considering a variety of possible scenarios, the paper concludes that the most likely reason for the increase lies in three-dimensional radiative interactions that are not considered in current, one-dimensional retrieval algorithms. Namely, the radiative effect of cloud sides viewed at oblique angles seems to contribute most to the enhanced tau-values. The results presented here will help understand cloud retrieval uncertainties related to cloud inhomogeneity. They complement the uncertainty estimates that will start accompanying MODIS cloud products in Collection 5 and may eventually help correct for the observed view-angle-dependent biases.
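The stratified comparison described above amounts to splitting pixels by an inhomogeneity index and comparing mean retrieved tau across view-zenith bins. The sketch below uses synthetic arrays; the field names, bin edges, and tercile threshold are assumptions, not MODIS product specifics.

    # Compare mean retrieved optical thickness at oblique vs near-nadir views for
    # the most inhomogeneous third of pixels (largest 11-um BT gradient).
    import numpy as np

    def oblique_vs_nadir_ratio(tau, view_zenith_deg, bt_gradient):
        inhomog = bt_gradient >= np.percentile(bt_gradient, 66.7)
        nadir = inhomog & (view_zenith_deg < 20.0)
        oblique = inhomog & (view_zenith_deg > 50.0)
        return np.mean(tau[oblique]) / np.mean(tau[nadir])

    # Synthetic pixels with angle-independent tau, so the ratio stays near 1.0:
    rng = np.random.default_rng(0)
    vza = rng.uniform(0.0, 65.0, 10000)
    grad = rng.exponential(1.0, 10000)
    tau = rng.gamma(4.0, 3.0, 10000)
    print(oblique_vs_nadir_ratio(tau, vza, grad))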
NASA Astrophysics Data System (ADS)
Castro, José J.; Pozo, Antonio M.; Rubiño, Manuel
2013-11-01
In this work, we studied the color dependence on the horizontal viewing angle and the colorimetric characterization of two liquid-crystal displays (LCDs) using two different backlights: cold cathode fluorescent lamps (CCFLs) and light-emitting diodes (LEDs). The LCDs studied had identical resolution, size, and technology (TFT, thin film transistor). The colorimetric measurements were made with the SpectraScan PR-650 spectroradiometer following the procedure recommended in the European guideline EN 61747-6. For each display, we measured, at the centre of the screen, the chromaticity coordinates at horizontal viewing angles of 0, 20, 40, 60, and 80 degrees for the achromatic (A), red (R), green (G), and blue (B) channels. Results showed a greater color-gamut area for the display with the LED backlight compared with the CCFL backlight, indicating a greater range of colors perceptible by human vision. This color-gamut area diminished with viewing angle for both displays. Larger differences between trends across viewing angles were observed for the LED backlight, especially for the R and G channels, demonstrating a higher variability of the chromaticity coordinates with viewing angle. The best additivity was reached by the LED-backlight display (a lower error percentage). Overall, the LED-backlight display provided better color performance.
Colors Of Liquid Crystals Used To Measure Surface Shear Stresses
NASA Technical Reports Server (NTRS)
Reda, D. C.; Muratore, J. J., Jr.
1996-01-01
Developmental method of mapping shear stresses on aerodynamic surfaces involves observation, at multiple viewing angles, of colors of liquid-crystal surface coats illuminated by white light. Report describing method referenced in "Liquid Crystals Indicate Directions Of Surface Shear Stresses" (ARC-13379). Resulting maps of surface shear stresses contain valuable data on magnitudes and directions of skin friction forces associated with surface flows; data used to refine mathematical models of aerodynamics for research and design purposes.
Virtual viewpoint generation for three-dimensional display based on the compressive light field
NASA Astrophysics Data System (ADS)
Meng, Qiao; Sang, Xinzhu; Chen, Duo; Guo, Nan; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan
2016-10-01
Virtual viewpoint generation is one of the key technologies for three-dimensional (3D) display; it renders new perspective images of a scene from existing viewpoints. The three-dimensional scene information can be effectively recovered at different viewing angles to allow users to switch between different views. However, in the process of matching multiple viewpoints, when N free viewpoints are received, we need to match the N viewpoints with each other, namely C(N,2) = N(N-1)/2 matchings, and errors can occur when matching across different baselines. To address the high complexity of the traditional virtual viewpoint generation process, a novel and rapid virtual viewpoint generation algorithm is presented in this paper, and the actual light field information is used rather than geometric information. Moreover, so that the data retain a physical meaning, we mainly use nonnegative tensor factorization (NTF). A tensor representation is introduced for virtual multilayer displays. The light field emitted by an N-layer, M-frame display is represented by a sparse set of non-zero elements restricted to a plane within an Nth-order, rank-M tensor. The tensor representation allows for optimal decomposition of a light field into time-multiplexed, light-attenuating layers using NTF. Finally, the compressive light field information of the multilayer display is synthesized to obtain virtual viewpoints by multiple multiplications. Experimental results show that the approach not only restores the original light field with high image quality (a PSNR of 25.6 dB) but also overcomes the deficiency of traditional matching, so that any viewpoint can be obtained from the N free viewpoints.
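The numerical core of the layer decomposition is a nonnegative factorization. As a greatly simplified stand-in, the sketch below runs classical multiplicative-update NMF on a matrix; the paper factorizes a higher-order light-field tensor, but the update structure is analogous, and the data here are random placeholders.

    # Multiplicative-update nonnegative matrix factorization (simplified stand-in for NTF).
    import numpy as np

    def nmf(V, rank, iters=200, eps=1e-9):
        rng = np.random.default_rng(1)
        W = rng.random((V.shape[0], rank))
        H = rng.random((rank, V.shape[1]))
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)    # updates keep both factors nonnegative
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    V = np.random.default_rng(2).random((64, 48))    # mock nonnegative light-field slice
    W, H = nmf(V, rank=6)
    print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative reconstruction error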
Wide-Field-of-View, High-Resolution, Stereoscopic Imager
NASA Technical Reports Server (NTRS)
Prechtl, Eric F.; Sedwick, Raymond J.
2010-01-01
A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head-tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. The use of head-mounted displays is one likely implementation; the use of 3D projection technologies is another option under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
What convention is used for the illumination and view angles?
Atmospheric Science Data Center
2014-12-08
... Azimuth angles are measured clockwise from the direction of travel to local north. For both the Sun and cameras, azimuth describes the ... to the equator, because of its morning equator crossing time. Additionally, the difference in view and solar azimuth angle will be near ...
Digital 3D holographic display using scattering layers for enhanced viewing angle and image size
NASA Astrophysics Data System (ADS)
Yu, Hyeonseung; Lee, KyeoReh; Park, Jongchan; Park, YongKeun
2017-05-01
In digital 3D holographic displays, the generation of realistic 3D images has been hindered by limited viewing angle and image size. Here we demonstrate a digital 3D holographic display using volume speckle fields produced by scattering layers in which both the viewing angle and the image size are greatly enhanced. Although volume speckle fields exhibit random distributions, the transmitted speckle fields have a linear and deterministic relationship with the input field. By modulating the incident wavefront with a digital micro-mirror device, volume speckle patterns are controlled to generate 3D images of micrometer-size optical foci with 35° viewing angle in a volume of 2 cm × 2 cm × 2 cm.
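Conceptually, once the linear relation between the DMD input and the output speckle field is known as a transmission matrix, conjugating one row of that matrix focuses light at the corresponding output point. The sketch below simulates this with a random complex matrix; it is a generic wavefront-shaping illustration, not the paper's calibration procedure.

    # Focusing through a scattering layer via a (simulated) transmission matrix.
    import numpy as np

    rng = np.random.default_rng(3)
    n_in, n_out = 256, 1024
    T = (rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))) / np.sqrt(2 * n_in)

    target = 123                                        # output speckle grain to focus on
    e_focus = np.conj(T[target]) / np.abs(T[target])    # phase-conjugate, unit-amplitude input
    e_random = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_in))

    I_focus = np.abs(T @ e_focus) ** 2
    I_random = np.abs(T @ e_random) ** 2
    print(I_focus[target] / I_random.mean())            # enhancement grows with n_in
    print(I_random[target] / I_random.mean())           # about 1 for an uncontrolled input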
Wide-angle vision for road views
NASA Astrophysics Data System (ADS)
Huang, F.; Fehrs, K.-K.; Hartmann, G.; Klette, R.
2013-03-01
The field-of-view of a wide-angle image is greater than (say) 90 degrees, and so contains more information than available in a standard image. A wide field-of-view is more advantageous than standard input for understanding the geometry of 3D scenes, and for estimating the poses of panoramic sensors within such scenes. Thus, wide-angle imaging sensors and methodologies are commonly used in various road-safety, street surveillance, street virtual touring, or street 3D modelling applications. The paper reviews related wide-angle vision technologies by focusing on mathematical issues rather than on hardware.
The impact of acquisition angle differences on three-dimensional quantitative coronary angiography.
Tu, Shengxian; Holm, Niels R; Koning, Gerhard; Maeng, Michael; Reiber, Johan H C
2011-08-01
Three-dimensional (3D) quantitative coronary angiography (QCA) requires two angiographic views to restore vessel dimensions. This study investigated the impact of acquisition angle differences (AADs) of the two angiographic views on the assessed dimensions by 3D QCA. X-ray angiograms of an assembled brass phantom with different types of straight lesions were recorded at multiple angiographic projections. The projections were randomly matched as pairs and 3D QCA was performed in those pairs with AAD larger than 25°. The lesion length and diameter stenosis in three different lesions, a circular concentric severe lesion (A), a circular concentric moderate lesion (B), and a circular eccentric moderate lesion (C), were measured by 3D QCA. The acquisition protocol was repeated for a silicone bifurcation phantom, and the bifurcation angles and bifurcation core volume were measured by 3D QCA. The measurements were compared with the true dimensions if applicable and their correlation with AAD was studied. 50 matched pairs of angiographic views were analyzed for the brass phantom. The average value of AAD was 48.0 ± 14.1°. The percent diameter stenosis was slightly overestimated by 3D QCA for all lesions: A (error 1.2 ± 0.9%, P < 0.001); B (error 0.6 ± 0.5%, P < 0.001); C (error 1.1 ± 0.6%, P < 0.001). The correlation of the measurements with AAD was only significant for lesion A (R(2) = 0.151, P = 0.005). The lesion length was slightly overestimated by 3D QCA for lesion A (error 0.06 ± 0.18 mm, P = 0.026), but well assessed for lesion B (error -0.00 ± 0.16 mm, P = 0.950) and lesion C (error -0.01 ± 0.18 mm, P = 0.585). The correlation of the measurements with AAD was not significant for any lesion. Forty matched pairs of angiographic views were analyzed for the bifurcation phantom. The average value of AAD was 49.1 ± 15.4°. 3D QCA slightly overestimated the proximal angle (error 0.4 ± 1.1°, P = 0.046) and the distal angle (error 1.5 ± 1.3°, P < 0.001). The correlation with AAD was only significant for the distal angle (R(2) = 0.256, P = 0.001). The correlation of bifurcation core volume measurements with AAD was not significant (P = 0.750). Of the two aforementioned measurements with significant correlation with AAD, the errors tended to increase as AAD became larger. 3D QCA can be used to reliably assess vessel dimensions and bifurcation angles. Increasing the AAD of the two angiographic views does not increase accuracy and precision of 3D QCA for circular lesions or bifurcation dimensions. Copyright © 2011 Wiley-Liss, Inc.
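The statistical check reported above, a Pearson correlation between AAD and the assessed measurement error, can be reproduced in a few lines; the data below are synthetic placeholders, not values from the phantom study.

    # Correlate acquisition angle difference (AAD) with 3D QCA measurement error.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    aad_deg = rng.uniform(25.0, 80.0, 50)                   # matched pairs with AAD > 25 deg
    error_pct = 1.2 + 0.01 * (aad_deg - aad_deg.mean()) + rng.normal(0.0, 0.9, 50)

    r, p = stats.pearsonr(aad_deg, error_pct)
    print(f"Pearson r = {r:.3f}, P = {p:.3f}")              # significant only if P < 0.05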
NASA MISR Studies Smoke Plumes from California Sand Fire
2016-08-02
39,000 acres (60 square miles, or 160 square kilometers). Thousands of residents were evacuated, and the fire claimed the life of one person. The Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra satellite passed over the region on July 23 around 11:50 a.m. PDT. At left is an image acquired by MISR's 60-degree forward-viewing camera. The oblique view angle makes the smoke more apparent than it would be in a more conventional vertical view. This cropped image is about 185 miles (300 kilometers) wide. Smoke from the Sand Fire is visible on the right-hand side of the image. Stereoscopic analysis of MISR's multiple camera angles is used to compute the height of the smoke plume from the Sand Fire. In the right-hand image, these heights are superimposed on the underlying image. The color scale shows that the plume extends up to about 4 miles (6 kilometers) above its source in Santa Clarita, but rapidly diminishes in height as winds push it to the southwest. The data compare well with a pilot report issued at Los Angeles International Airport on the evening of July 22, which reported smoke at 15,000-18,000 feet altitude (4.5 to 5.5 kilometers). Air quality warnings were issued for the San Fernando Valley and the western portion of Los Angeles due to this low-hanging smoke. However, data from air quality monitoring instruments seem to indicate that the smoke did not actually reach the ground. These data were captured during Terra orbit 88284. http://photojournal.jpl.nasa.gov/catalog/PIA20724
10. 22'X34' original blueprint, Variable-Angle Launcher, 'SIDE VIEW CAMERA CAR-STEEL ...
10. 22'X34' original blueprint, Variable-Angle Launcher, 'SIDE VIEW CAMERA CAR-STEEL FRAME AND AXLES' drawn at 1/2'=1'-0'. (BOURD Sketch # 209124). - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Q; Snyder, K; Liu, C
Purpose: To develop an optimization algorithm to reduce normal brain dose by optimizing couch and collimator angles for single-isocenter multiple-target treatment of stereotactic radiosurgery. Methods: Three metastatic brain lesions were retrospectively planned using single-isocenter volumetric modulated arc therapy (VMAT). Three matrices were developed to calculate the projection of each lesion on the Beam's Eye View (BEV) for rotation of the couch, collimator, and gantry, respectively. The island blocking problem was addressed by computing the total area of open space between any two lesions with shared MLC leaf pairs. The couch and collimator angles resulting in the smallest open areas were the optimized angles for each treatment arc. Two treatment plans, with and without couch and collimator angle optimization, were developed using the same objective functions and to achieve 99% of each target volume receiving the full prescription dose of 18 Gy. Plan quality was evaluated by calculating each target's Conformity Index (CI), Gradient Index (GI), and Homogeneity Index (HI), and the absolute volume of normal brain V8Gy, V10Gy, V12Gy, and V14Gy. Results: Using the new couch/collimator optimization strategy, dose to normal brain tissue was reduced substantially. V8, V10, V12, and V14 decreased by 2.3%, 3.6%, 3.5%, and 6%, respectively. There were no significant differences in the conformity index, gradient index, and homogeneity index between the two treatment plans with and without the new optimization algorithm. Conclusion: We have developed a solution to the island blocking problem in delivering radiation to multiple brain metastases with a shared isocenter. Significant reduction in dose to normal brain was achieved by using optimal couch and collimator angles that minimize the total area of open space between any two lesions with shared MLC leaf pairs. This technique has been integrated into the Eclipse treatment system using the scripting API.
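A two-dimensional toy version of the island-blocking cost gives the flavor of the search: for a candidate collimator angle, lesions that share MLC leaf rows contribute an open area equal to the shared leaf span times the gap between them along the leaf-travel direction. Lesion shapes, the couch dimension, and the cost itself are simplified assumptions here, not the planning-system implementation.

    # Grid-search a collimator angle that minimizes shared-leaf open area between lesions.
    import numpy as np
    from itertools import combinations

    def open_area(lesions, collimator_deg):
        # lesions: list of (x_mm, y_mm, radius_mm) circles in the beam's-eye view.
        a = np.radians(collimator_deg)
        rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
        total = 0.0
        for (x1, y1, r1), (x2, y2, r2) in combinations(lesions, 2):
            p1, p2 = rot @ np.array([x1, y1]), rot @ np.array([x2, y2])
            shared = min(p1[1] + r1, p2[1] + r2) - max(p1[1] - r1, p2[1] - r2)
            gap = abs(p1[0] - p2[0]) - (r1 + r2)     # separation along the leaf-travel axis
            if shared > 0 and gap > 0:               # lesions share leaf pairs with a gap between
                total += shared * gap
        return total

    lesions = [(-30.0, 10.0, 8.0), (25.0, 12.0, 6.0), (0.0, -35.0, 7.0)]
    best = min(range(0, 180, 2), key=lambda c: open_area(lesions, c))
    print(best, open_area(lesions, best))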
Two Perspectives on Forest Fire
NASA Technical Reports Server (NTRS)
2002-01-01
Multi-angle Imaging Spectroradiometer (MISR) images of smoke plumes from wildfires in western Montana acquired on August 14, 2000. A portion of Flathead Lake is visible at the top, and the Bitterroot Range traverses the images. The left view is from MISR's vertical-viewing (nadir) camera. The right view is from the camera that looks forward at a steep angle (60 degrees). The smoke location and extent are far more visible when seen at this highly oblique angle. However, vegetation is much darker in the forward view. A brown burn scar is located nearly in the exact center of the nadir image, while in the high-angle view it is shrouded in smoke. Also visible in the center and upper right of the images, and more obvious in the clearer nadir view, are checkerboard patterns on the surface associated with land ownership boundaries and logging. Compare these images with the high resolution infrared imagery captured nearby by Landsat 7 half an hour earlier. Images by NASA/GSFC/JPL, MISR Science Team.
NASA Technical Reports Server (NTRS)
Davies, Roger
1994-01-01
The spatial autocorrelation functions of broad-band longwave and shortwave radiances measured by the Earth Radiation Budget Experiment (ERBE) are analyzed as a function of view angle in an investigation of the general effects of scene inhomogeneity on radiation. For nadir views, the correlation distance of the autocorrelation function is about 900 km for longwave radiance and about 500 km for shortwave radiance, consistent with higher degrees of freedom in shortwave reflection. Both functions rise monotonically with view angle, but there is a substantial difference in the relative angular dependence of the shortwave and longwave functions, especially for view angles less than 50 deg. In this range, the increase with angle of the longwave functions is found to depend only on the expansion of pixel area with angle, whereas the shortwave functions show an additional dependence on angle that is attributed to the occlusion of inhomogeneities by cloud height variations. Beyond a view angle of about 50 deg, both longwave and shortwave functions appear to be affected by cloud sides. The shortwave autocorrelation functions do not satisfy the principle of directional reciprocity, thereby proving that the average scene is horizontally inhomogeneous over the scale of an ERBE pixel (1500 sq km). Coarse stratification of the measurements by cloud amount, however, indicates that the average cloud-free scene does satisfy directional reciprocity on this scale.
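As an illustration of the kind of analysis described, the sketch below estimates an empirical spatial autocorrelation function from along-track radiance samples and extracts a correlation distance. The 1/e definition of correlation distance and the along-track pairing scheme are assumptions for the example, not necessarily the paper's exact estimator.

```python
import numpy as np

def spatial_autocorrelation(radiance, positions, lags):
    """Empirical spatial autocorrelation of radiance samples.

    radiance : 1-D array of (longwave or shortwave) radiances along a track.
    positions: matching 1-D array of along-track distances in km, assumed sorted.
    lags     : array of separation distances (km) at which to evaluate R(lag).
    """
    anom = radiance - radiance.mean()
    var = anom.var()
    corr = []
    for lag in lags:
        pairs = []
        for i in range(len(positions)):
            j = np.searchsorted(positions, positions[i] + lag)
            if j < len(positions):
                pairs.append(anom[i] * anom[j])
        corr.append(np.mean(pairs) / var if pairs else np.nan)
    return np.asarray(corr)

def correlation_distance(lags, corr):
    """Distance at which the autocorrelation first drops below 1/e (assumed definition)."""
    below = np.where(corr < np.exp(-1.0))[0]
    return lags[below[0]] if below.size else np.nan
```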
Application of AI techniques to infer vegetation characteristics from directional reflectance(s)
NASA Technical Reports Server (NTRS)
Kimes, D. S.; Smith, J. A.; Harrison, P. A.; Harrison, P. R.
1994-01-01
Traditionally, the remote sensing community has relied totally on spectral knowledge to extract vegetation characteristics. However, there are other knowledge bases (KB's) that can be used to significantly improve the accuracy and robustness of inference techniques. Using AI (artificial intelligence) techniques, a KB system (VEG) was developed that integrates input spectral measurements with diverse KB's. These KB's consist of data sets of directional reflectance measurements, knowledge from literature, and knowledge from experts, which are combined into an intelligent and efficient system for making vegetation inferences. VEG accepts spectral data of an unknown target as input, determines the best techniques for inferring the desired vegetation characteristic(s), applies the techniques to the target data, and provides a rigorous estimate of the accuracy of the inference. VEG was developed to: infer spectral hemispherical reflectance from any combination of nadir and/or off-nadir view angles; infer percent ground cover from any combination of nadir and/or off-nadir view angles; infer unknown view angle(s) from known view angle(s) (known as view angle extension); and discriminate between user-defined vegetation classes using spectral and directional reflectance relationships developed from an automated learning algorithm. The errors for these techniques were generally low, ranging from 2% to 15% (proportional root mean square). The system is designed to aid scientists in developing, testing, and applying new inference techniques using directional reflectance data.
1999-08-24
One wide-angle and eight narrow-angle camera images of Miranda, taken by NASA Voyager 2, were combined in this view. The controlled mosaic was transformed to an orthographic view centered on the south pole.
An explicit canopy BRDF model and inversion. [Bidirectional Reflectance Distribution Function
NASA Technical Reports Server (NTRS)
Liang, Shunlin; Strahler, Alan H.
1992-01-01
Based on a rigorous canopy radiative transfer equation, the multiple scattering radiance is approximated by the asymptotic theory, and the single scattering radiance calculation, which requires a numerical integration because of the hotspot effect, is simplified. A new formulation is presented to obtain a more exact angular dependence of the sky radiance distribution. The unscattered solar radiance and single scattering radiance are calculated exactly, and the multiple scattering is approximated by the delta two-stream atmospheric radiative transfer model. Numerical results show that the parametric canopy model is very accurate, especially when the viewing angles are smaller than 55 deg. The Powell algorithm is used to retrieve biospheric parameters from ground-measured multiangle observations.
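The inversion step can be sketched as follows, assuming a generic forward canopy-reflectance model and SciPy's Powell minimizer. The forward_model shown here is a toy stand-in for the analytic parametric model described in the abstract; only the fitting structure is meant to carry over.

```python
import numpy as np
from scipy.optimize import minimize

def forward_model(params, view_zeniths_deg):
    """Hypothetical stand-in for the parametric canopy BRDF model:
    reflectance as a function of biophysical parameters and view zenith angle."""
    lai, leaf_refl, soil_refl = params
    mu = np.cos(np.radians(view_zeniths_deg))
    gap = np.exp(-0.5 * lai / np.clip(mu, 0.1, 1.0))   # toy gap fraction
    return gap * soil_refl + (1.0 - gap) * leaf_refl

def invert(measured_refl, view_zeniths_deg, first_guess=(2.0, 0.45, 0.15)):
    """Retrieve canopy parameters from multiangle reflectance via Powell's method."""
    def cost(p):
        model = forward_model(p, view_zeniths_deg)
        return np.sum((model - measured_refl) ** 2)
    result = minimize(cost, x0=np.asarray(first_guess), method="Powell")
    return result.x
```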
NASA Astrophysics Data System (ADS)
Sadat, Mojtaba T.; Viti, Francesco
2015-02-01
Machine vision is rapidly gaining popularity in the field of Intelligent Transportation Systems. In particular, advantages are foreseen in the exploitation of Aerial Vehicles (AVs) to deliver a superior view of traffic phenomena. However, vibration on AVs makes it difficult to extract moving objects on the ground. To partly overcome this issue, image stabilization/registration procedures are adopted to correct and stitch multiple frames taken of the same scene but from different positions, angles, or sensors. In this study, we examine the impact of multiple feature-based techniques for stabilization, and we show that the SURF detector outperforms the others in terms of time efficiency and output similarity.
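A minimal version of such feature-based frame registration is sketched below with OpenCV. Because SURF sits in the non-free contrib module, ORB is used here as a freely available stand-in detector, so this illustrates the general stabilization flow rather than the exact SURF pipeline evaluated in the study.

```python
import cv2
import numpy as np

def stabilize(reference_gray, frame_gray):
    """Register frame_gray to reference_gray with feature matching + homography.

    ORB stands in for SURF (which requires the non-free OpenCV contrib build);
    the flow is the same: detect -> describe -> match -> robust homography -> warp.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(reference_gray, None)
    kp2, des2 = orb.detectAndCompute(frame_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # maps frame -> reference
    h, w = reference_gray.shape
    return cv2.warpPerspective(frame_gray, H, (w, h))
```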
Measuring the Viewing Angle of GW170817 with Electromagnetic and Gravitational Waves
NASA Astrophysics Data System (ADS)
Finstad, Daniel; De, Soumi; Brown, Duncan A.; Berger, Edo; Biwer, Christopher M.
2018-06-01
The joint detection of gravitational waves (GWs) and electromagnetic (EM) radiation from the binary neutron star merger GW170817 ushered in a new era of multi-messenger astronomy. Joint GW–EM observations can be used to measure the parameters of the binary with better precision than either observation alone. Here, we use joint GW–EM observations to measure the viewing angle of GW170817, the angle between the binary’s angular momentum and the line of sight. We combine a direct measurement of the distance to the host galaxy of GW170817 (NGC 4993) of 40.7 ± 2.36 Mpc with the Laser Interferometer Gravitational-wave Observatory (LIGO)/Virgo GW data and find that the viewing angle is 32 (+10/−13) ± 1.7 degrees (90% confidence, statistical and systematic errors). We place a conservative lower limit on the viewing angle of ≥13°, which is robust to the choice of prior. This measurement provides a constraint on models of the prompt γ-ray and radio/X-ray afterglow emission associated with the merger; for example, it is consistent with the off-axis viewing angle inferred for a structured jet model. We provide for the first time the full posterior samples from Bayesian parameter estimation of LIGO/Virgo data to enable further analysis by the community.
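Schematically, an independent EM distance measurement can be folded into a GW-only posterior by importance reweighting, which is only an approximation to the joint analysis performed in the paper. The sketch below assumes arrays of distance and inclination posterior samples are already available; the summary statistics it returns are for illustration only.

```python
import numpy as np

def reweight_by_distance(distance_samples, inclination_samples,
                         d_em=40.7, sigma_em=2.36):
    """Importance-reweight GW posterior samples with an EM distance measurement.

    distance_samples, inclination_samples : arrays drawn from the GW-only posterior.
    A Gaussian EM likelihood on distance (Mpc) supplies the weights; the viewing
    angle is then summarized from the weighted inclination samples (degrees).
    """
    w = np.exp(-0.5 * ((distance_samples - d_em) / sigma_em) ** 2)
    w /= w.sum()
    order = np.argsort(inclination_samples)
    cdf = np.cumsum(w[order])
    lo = inclination_samples[order][np.searchsorted(cdf, 0.05)]
    med = inclination_samples[order][np.searchsorted(cdf, 0.50)]
    hi = inclination_samples[order][np.searchsorted(cdf, 0.95)]
    return med, lo, hi   # median and 90% credible bounds
```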
3D medical thermography device
NASA Astrophysics Data System (ADS)
Moghadam, Peyman
2015-05-01
In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner holding the device smoothly moves it around the human body, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real time. The data are acquired in motion and thus provide multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement, which can vary with factors such as the angle of incidence, the distance between the device and the subject, and environmental conditions or other factors that influence the confidence of the thermal-infrared data at capture time. Finally, several case studies are presented to support the usability and performance of the proposed system.
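One plausible realization of the reliability-weighted combination is sketched below. The confidence model (cosine of the incidence angle times an inverse-range term with a cutoff angle) is an assumption made for the example, not the weighting actually used by the system.

```python
import numpy as np

def measurement_confidence(incidence_angle_deg, distance_m,
                           max_angle=75.0, ref_distance=1.0):
    """Heuristic confidence in a thermal reading (assumed form, for illustration):
    highest at normal incidence and short range, decaying with both."""
    angle_term = np.clip(np.cos(np.radians(incidence_angle_deg)), 0.0, 1.0)
    range_term = np.clip(ref_distance / distance_m, 0.0, 1.0)
    return angle_term * range_term * (incidence_angle_deg < max_angle)

def fuse_temperature(existing_temp, existing_conf, new_temp, new_conf):
    """Running confidence-weighted fusion of per-vertex temperatures on the 3D model."""
    total = existing_conf + new_conf
    fused = np.where(total > 0,
                     (existing_temp * existing_conf + new_temp * new_conf)
                     / np.where(total > 0, total, 1.0),
                     new_temp)
    return fused, total
```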
Design considerations for a backlight with switchable viewing angles
NASA Astrophysics Data System (ADS)
Fujieda, Ichiro; Takagi, Yoshihiko; Rahadian, Fanny
2006-08-01
Small-sized liquid crystal displays are widely used for mobile applications such as cell phones. Electronic control of a viewing angle range is desired in order to maintain privacy for viewing in public as well as to provide wide viewing angles for solitary viewing. Conventionally, a polymer-dispersed liquid crystal (PDLC) panel is inserted between a backlight and a liquid crystal panel. The PDLC layer either transmits or scatters the light from the backlight, thus providing an electronic control of viewing angles. However, such a display system is obviously thick and expensive. Here, we propose to place an electronically-controlled, light-deflecting device between an LED and a light-guide of a backlight. For example, a liquid crystal lens is investigated for other applications and its focal length is controlled electronically. A liquid crystal phase grating either transmits or diffracts an incoming light depending on whether or not a periodic phase distribution is formed inside its liquid crystal layer. A bias applied to such a device will control the angular distribution of the light propagating inside a light-guide. Output couplers built in the light-guide extract the propagating light to outside. They can be V-shaped grooves, pyramids, or any other structures that can refract, reflect or diffract light. When any of such interactions occur, the output couplers translate the changes in the propagation angles into the angular distribution of the output light. Hence the viewing-angle characteristic can be switched. The designs of the output couplers and the LC devices are important for such a backlight system.
A Design of Experiments Investigation of Offset Streams for Supersonic Jet Noise Reduction
NASA Technical Reports Server (NTRS)
Henderson, Brenda; Papamoschou, Dimitri
2014-01-01
An experimental investigation into the noise characteristics of a dual-stream jet with four airfoils inserted in the fan nozzle was conducted. The intent of the airfoils was to deflect the fan stream relative to the core stream and, therefore, impact the development of the secondary potential core and the noise radiated in the peak jet-noise direction. The experiments used a full-factorial Design of Experiments (DoE) approach to identify parameters and parameter interactions impacting noise radiation at two azimuthal microphone array locations, one of which represented a sideline viewing angle. The parameters studied included airfoil angle-of-attack, airfoil azimuthal location within the fan nozzle, and airfoil axial location relative to the fan-nozzle trailing edge. Jet conditions included subsonic and supersonic fan-stream Mach numbers. Heated-jet conditions were simulated with a mixture of helium and air to replicate the exhaust velocity and density of the hot jets. The introduction of the airfoils was shown to impact noise radiated at polar angles in the peak jet-noise direction, while having no impact on noise radiated at small and broadside polar angles or on broadband shock-associated noise. The DoE analysis showed that the main effects impacting noise radiation at sideline azimuthal viewing angles included the azimuthal angle of the airfoils on the lower side of the jet near the sideline array and the airfoil trailing-edge distance (airfoils located at the nozzle trailing edge produced the lowest sound pressure levels). For an array located directly beneath the jet (and on the side of the jet from which the fan stream was deflected), the main effects impacting noise radiation included airfoil angle-of-attack and the azimuthal angle of the airfoils located on the observation side of the jet, as well as trailing-edge distance. Interaction terms between multiple configuration parameters were shown to have a significant impact on the radiated noise. The models were shown to adequately describe the sound-pressure levels obtained for a configuration in the center of the design space, indicating that the models can be used to navigate the design space.
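The full-factorial structure translates directly into code. In the sketch below, the factor names echo the parameters listed in the abstract, but the levels, responses, and the simple level-mean contrast used as a "main effect" are placeholders rather than the study's actual design values or analysis.

```python
import itertools
import numpy as np

# Hypothetical factor levels patterned on the abstract's parameters.
factors = {
    "angle_of_attack_deg": [0.0, 3.0, 6.0],
    "azimuthal_location_deg": [0.0, 45.0, 90.0],
    "axial_location_mm": [0.0, 10.0],
}

# Full-factorial run matrix: every combination of levels appears once.
runs = [dict(zip(factors, levels))
        for levels in itertools.product(*factors.values())]

def main_effect(runs, responses, factor):
    """Main effect of one factor: spread of the mean response across its levels."""
    responses = np.asarray(responses)
    level_means = {}
    for level in factors[factor]:
        sel = [i for i, r in enumerate(runs) if r[factor] == level]
        level_means[level] = responses[sel].mean()
    return max(level_means.values()) - min(level_means.values())

# Example with synthetic sound-pressure-level responses (dB), one per run.
rng = np.random.default_rng(0)
spl = 120.0 + rng.normal(0.0, 1.0, size=len(runs))
effects = {f: main_effect(runs, spl, f) for f in factors}
print(effects)
```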
Andújar, Dionisio; Fernández-Quintanilla, César; Dorado, José
2015-06-04
In energy crops for biomass production, proper plant structure is important to optimize wood yields. A precise crop characterization in early stages may contribute to the choice of proper cropping techniques. This study assesses the potential of the Microsoft Kinect for Windows v.1 sensor to determine the best viewing angle of the sensor to estimate the plant biomass based on poplar seedling geometry. Kinect Fusion algorithms were used to generate a 3D point cloud from the depth video stream. The sensor was mounted in different positions facing the tree in order to obtain depth (RGB-D) images from different angles. Individuals of two different ages, i.e., one month and one year old, were scanned. Four different viewing angles were compared: top view (0°), 45° downwards view, front view (90°) and ground upwards view (-45°). The ground-truth used to validate the sensor readings consisted of a destructive sampling in which the height, leaf area and biomass (dry weight basis) were measured in each individual plant. The depth image models agreed well with 45°, 90° and -45° measurements in one-year poplar trees. Good correlations (0.88 to 0.92) between dry biomass and the area measured with the Kinect were found. In addition, plant height was accurately estimated with an error of a few centimeters. The comparison between different viewing angles revealed that top views showed poorer results because the top leaves occluded the rest of the tree. However, the other views led to good results. Conversely, small poplars showed better correlations with actual parameters from the top view (0°). Therefore, although the Microsoft Kinect for Windows v.1 sensor provides good opportunities for biomass estimation, the viewing angle must be chosen taking into account the developmental stage of the crop and the desired parameters. The results of this study indicate that Kinect is a promising tool for rapid canopy characterization, i.e., for estimating crop biomass production, with several important advantages: low cost, low power needs and a high frame rate (frames per second) when dynamic measurements are required.
Crew Earth Observations (CEO) by Expedition Five Crew
2002-06-18
ISS005-E-5416 (18 June 2002) --- This photograph, taken by the International Space Station's Expedition Five crew on June 18, 2002, shows the Hayman Fire burning in the foothills southwest of Denver. Astronauts use a variety of lenses and look angles as their orbits pass over wildfires to document the long-distance movements of smoke from the fires as well as details of the burning areas. In this detail view, you can see multiple smoke source points as the fire moves across the rough terrain. The image was provided by the Earth Sciences and Image Analysis Laboratory at Johnson Space Center. Additional images taken by astronauts and cosmonauts can be viewed at the NASA-JSC Gateway to Astronaut Photography of Earth.
GLRS-R 2-colour retroreflector target design and predicted performance
NASA Astrophysics Data System (ADS)
Lund, Glenn
The retroreflector ground target design for the GLRS-R spaceborne dual wavelength laser ranging system is described. The passive design flows down from the requirements of high station autonomy, high global field of view, few or no multiple-pulse returns, and adequate optical cross section for most ranging geometries. The solution makes use of five hollow cube corner retroreflectors, of which one points to the zenith and the remaining four are inclined from the vertical at uniform azimuthal spacings. The large retroreflectors required are expected to generate narrow diffraction lobes. A good compromise solution is found by spoiling just one of the retroreflector dihedral angles from 90 deg, thus generating two symmetrically oriented diffraction lobes in the return beam. The required spoil angles are found to have little dependence on ground target latitude. Various link budget analyses are presented. They show the influence of such factors as point-ahead optimization, turbulence, ranging angle, atmospheric visibility, and ground target thermal deformations.
A see-through holographic head-mounted display with the large viewing angle
NASA Astrophysics Data System (ADS)
Chen, Zhidong; sang, Xinzhu; Lin, Qiaojun; Li, Jin; Yu, Xunbo; Gao, Xin; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu; Xie, Songlin
2017-02-01
A novel solution for a large view angle holographic head-mounted display (HHMD) is presented. Divergent light is used for the hologram illumination to reconstruct a large three-dimensional object outside the display at a short distance. A custom projection lens with a large numerical aperture projects the object reconstructed by the hologram to its real location. The presented solution can realize a compact HHMD system with a large field of view. The basic principle and the structure of the system are described. An augmented reality (AR) prototype with a size of 50 mm × 40 mm and a view angle above 60° is demonstrated.
Toward a 3D video format for auto-stereoscopic displays
NASA Astrophysics Data System (ADS)
Vetro, Anthony; Yea, Sehoon; Smolic, Aljoscha
2008-08-01
There has been increased momentum recently in the production of 3D content for cinema applications; for the most part, this has been limited to stereo content. There are also a variety of display technologies on the market that support 3DTV, each offering a different viewing experience and having different input requirements. More specifically, stereoscopic displays support stereo content and require glasses, while auto-stereoscopic displays avoid the need for glasses by rendering view-dependent stereo pairs for a multitude of viewing angles. To realize high quality auto-stereoscopic displays, multiple views of the video must either be provided as input to the display, or these views must be created locally at the display. The former approach has difficulties in that the production environment is typically limited to stereo, and transmission bandwidth for a large number of views is not likely to be available. This paper discusses an emerging 3D data format that enables the latter approach to be realized. A new framework for efficiently representing a 3D scene and enabling the reconstruction of an arbitrarily large number of views prior to rendering is introduced. Several design challenges are also highlighted through experimental results.
Visual Costs of the Inhomogeneity of Luminance and Contrast by Viewing LCD-TFT Screens Off-Axis.
Ziefle, Martina; Groeger, Thomas; Sommer, Dietmar
2003-01-01
In this study the anisotropic characteristics of TFT-LCD (Thin-Film-Transistor-Liquid Crystal Display) screens were examined. Anisotropy occurs as the distribution of luminance and contrast changes over the screen surface due to different viewing angles. On the basis of detailed photometric measurements the detection performance in a visual reaction task was measured in different viewing conditions. Viewing angle (0 degrees, frontal view; 30 degrees, off-axis; 50 degrees, off-axis) as well as ambient lighting (a dark or illuminated room) were varied. Reaction times and accuracy of detection performance were recorded. Results showed TFT's anisotropy to be a crucial factor deteriorating performance. With an increasing viewing angle performance decreased. It is concluded that TFT's anisotropy is a limiting factor for overall suitability and usefulness of this new display technology.
Preferred viewing distance of liquid crystal high-definition television.
Lee, Der-Song
2012-01-01
This study explored the effect of TV size, illumination, and viewing angle on preferred viewing distance in high-definition liquid crystal display televisions (HDTV). Results showed that the mean preferred viewing distance was 2856 mm. TV size and illumination significantly affected preferred viewing distance. The larger the screen size, the greater the preferred viewing distance, at around 3-4 times the width of the screen (W). The greater the illumination, the greater the preferred viewing distance. Viewing angle also correlated significantly with preferred viewing distance. The more deflected from direct frontal view, the shorter the preferred viewing distance seemed to be. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Advanced Image Processing for NASA Applications
NASA Technical Reports Server (NTRS)
LeMoign, Jacqueline
2007-01-01
The future of space exploration will involve cooperating fleets of spacecraft or sensor webs geared towards coordinated and optimal observation of Earth Science phenomena. The main advantage of such systems is to utilize multiple viewing angles as well as multiple spatial and spectral resolutions of sensors carried on multiple spacecraft but acting collaboratively as a single system. Within this framework, our research focuses on all areas related to sensing in collaborative environments, that is, systems utilizing intracommunicating, spatially distributed sensor pods or craft deployed to monitor or explore different environments. This talk will describe the general concept of sensing in collaborative environments, will give a brief overview of several technologies developed at NASA Goddard Space Flight Center in this area, and then will concentrate on specific image processing research related to that domain, specifically image registration and image fusion.
Examining view angle effects on leaf N estimation in wheat using field reflectance spectroscopy
NASA Astrophysics Data System (ADS)
Song, Xiao; Feng, Wei; He, Li; Xu, Duanyang; Zhang, Hai-Yan; Li, Xiao; Wang, Zhi-Jie; Coburn, Craig A.; Wang, Chen-Yang; Guo, Tian-Cai
2016-12-01
Real-time, nondestructive monitoring of crop nitrogen (N) status is a critical factor for precision N management during wheat production. Over a 3-year period, we analyzed different wheat cultivars grown under different experimental conditions in China and Canada and studied the effects of viewing angle on the relationships between various vegetation indices (VIs) and leaf nitrogen concentration (LNC) using hyperspectral data from 11 field experiments. The objective was to improve prediction accuracy by minimizing the effects of viewing angle on LNC estimation and to construct a novel vegetation index (VI) usable under different experimental conditions. We examined the stability of previously reported optimum VIs obtained from 13 traditional indices for estimating LNC at 13 viewing zenith angles (VZAs) in the solar principal plane (SPP). The backscattering direction showed better index performance than the forward-scattering direction. Red-edge VIs including the modified normalized difference vegetation index (mND705), the ratio index within the red edge region (RI-1dB) and the normalized difference red edge index (NDRE) were highly correlated with LNC, as confirmed by high R2 coefficients of determination. However, these common VIs tended to saturate, and the relationships depended strongly on experimental conditions. To overcome the influence of VZA on VIs, the chlorophyll- and LNC-sensitive NDRE index was divided by the floating-position water band index (FWBI) to generate the integrated narrow-band vegetation index. The highest correlation between the novel NDRE/FWBI parameter and LNC (R2 = 0.852) occurred at -10°, while the lowest correlation (R2 = 0.745) occurred at 60°. NDRE/FWBI was more highly correlated with LNC than existing commonly used VIs at an identical viewing zenith angle. Upon further analysis of angle combinations, our novel VI exhibited the best performance, with the best prediction accuracy at 0° to -20° (R2 = 0.838, RMSE = 0.360) and relatively good accuracy at 0° to -30° (R2 = 0.835, RMSE = 0.366). As it is possible to monitor plant N status over a wide range of angles using portable spectrometers, viewing angles of as much as 0° to -30° are common. Consequently, we developed a unified model across angles of 0° to -30° to reduce the effects of viewing angle on LNC prediction in wheat. The proposed combined NDRE/FWBI parameter, designated the wide-angle-adaptability nitrogen index (WANI), is superior for estimating LNC in wheat on a regional scale in China and Canada.
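The index construction can be sketched from a single reflectance spectrum. The NDRE form below is the common red-edge definition; the FWBI implementation (R900 divided by the minimum reflectance in a floating 930-980 nm window) is an assumed definition for illustration and may differ from the formulation used in the study.

```python
import numpy as np

def band(wavelengths, reflectance, target_nm):
    """Reflectance at the wavelength closest to target_nm."""
    return reflectance[np.argmin(np.abs(wavelengths - target_nm))]

def ndre(wavelengths, reflectance):
    """Normalized difference red edge index, (R790 - R720) / (R790 + R720)."""
    r790 = band(wavelengths, reflectance, 790)
    r720 = band(wavelengths, reflectance, 720)
    return (r790 - r720) / (r790 + r720)

def fwbi(wavelengths, reflectance, lo=930, hi=980):
    """Floating-position water band index (assumed definition for illustration):
    R900 divided by the minimum reflectance found in a floating 930-980 nm window."""
    window = reflectance[(wavelengths >= lo) & (wavelengths <= hi)]
    return band(wavelengths, reflectance, 900) / window.min()

def ndre_over_fwbi(wavelengths, reflectance):
    """Combined angle-insensitive parameter proposed in the study: NDRE / FWBI."""
    return ndre(wavelengths, reflectance) / fwbi(wavelengths, reflectance)
```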
Observing System Simulations for Small Satellite Formations Estimating Bidirectional Reflectance
NASA Technical Reports Server (NTRS)
Nag, Sreeja; Gatebe, Charles K.; de Weck, Olivier
2015-01-01
The bidirectional reflectance distribution function (BRDF) gives the reflectance of a target as a function of illumination geometry and viewing geometry, and hence carries information about the anisotropy of the surface. BRDF is needed in remote sensing for the correction of view and illumination angle effects (for example in image standardization and mosaicing), for deriving albedo, for land cover classification, for cloud detection, for atmospheric correction, and other applications. However, current spaceborne instruments provide sparse angular sampling of BRDF, and airborne instruments are limited in spatial and temporal coverage. To fill the gaps in angular coverage within spatial, spectral and temporal requirements, we propose a new measurement technique: use of small satellites in formation flight, each satellite with a VNIR (visible and near infrared) imaging spectrometer, to make multi-spectral, near-simultaneous measurements of every ground spot in the swath at multiple angles. This paper describes an observing system simulation experiment (OSSE) to evaluate the proposed concept and select the optimal formation architecture that minimizes BRDF uncertainties. The variables of the OSSE are identified: number of satellites, measurement spread in the view zenith and relative azimuth with respect to the solar plane, solar zenith angle, BRDF models, and wavelength of reflection. Analyzing the sensitivity of BRDF estimation errors to these variables allows simplification of the OSSE, enabling its use to rapidly evaluate formation architectures. A 6-satellite formation is shown to produce lower BRDF estimation errors, purely in terms of angular sampling as evaluated by the OSSE, than a single spacecraft with 9 forward-aft sensors. We demonstrate the ability to use OSSEs to design small satellite formations as complements to flagship mission data. The formations can fill angular sampling gaps and enable better BRDF products than currently possible.
Observing system simulations for small satellite formations estimating bidirectional reflectance
NASA Astrophysics Data System (ADS)
Nag, Sreeja; Gatebe, Charles K.; Weck, Olivier de
2015-12-01
The bidirectional reflectance distribution function (BRDF) gives the reflectance of a target as a function of illumination geometry and viewing geometry, and hence carries information about the anisotropy of the surface. BRDF is needed in remote sensing for the correction of view and illumination angle effects (for example in image standardization and mosaicing), for deriving albedo, for land cover classification, for cloud detection, for atmospheric correction, and other applications. However, current spaceborne instruments provide sparse angular sampling of BRDF, and airborne instruments are limited in spatial and temporal coverage. To fill the gaps in angular coverage within spatial, spectral and temporal requirements, we propose a new measurement technique: use of small satellites in formation flight, each satellite with a VNIR (visible and near infrared) imaging spectrometer, to make multi-spectral, near-simultaneous measurements of every ground spot in the swath at multiple angles. This paper describes an observing system simulation experiment (OSSE) to evaluate the proposed concept and select the optimal formation architecture that minimizes BRDF uncertainties. The variables of the OSSE are identified: number of satellites, measurement spread in the view zenith and relative azimuth with respect to the solar plane, solar zenith angle, BRDF models, and wavelength of reflection. Analyzing the sensitivity of BRDF estimation errors to these variables allows simplification of the OSSE, enabling its use to rapidly evaluate formation architectures. A 6-satellite formation is shown to produce lower BRDF estimation errors, purely in terms of angular sampling as evaluated by the OSSE, than a single spacecraft with 9 forward-aft sensors. We demonstrate the ability to use OSSEs to design small satellite formations as complements to flagship mission data. The formations can fill angular sampling gaps and enable better BRDF products than currently possible.
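A toy version of the angular-sampling comparison is sketched below: reflectances are simulated from a linear kernel BRDF model, noise is added, the kernel coefficients are refit from a candidate set of view geometries, and the RMS error against the true BRDF is reported. The kernel forms, angle sets, and noise level are placeholders, not the OSSE's actual models (which would typically use RossThick-LiSparse kernels).

```python
import numpy as np

def kernels(vza_deg, raa_deg):
    """Simplified stand-in for BRDF model kernels (placeholder, not RossThick-LiSparse):
    an isotropic term plus two angular basis functions of view geometry."""
    vza = np.radians(np.asarray(vza_deg, dtype=float))
    raa = np.radians(np.asarray(raa_deg, dtype=float))
    return np.column_stack([np.ones_like(vza), np.cos(vza), np.sin(vza) * np.cos(raa)])

def brdf_retrieval_error(sample_vza, sample_raa, true_coeffs,
                         noise=0.01, seed=0, n_eval=361):
    """Fit kernel coefficients from a candidate angular sampling and report the
    RMS difference from the true BRDF over a dense principal-plane sweep."""
    rng = np.random.default_rng(seed)
    A = kernels(sample_vza, sample_raa)
    obs = A @ true_coeffs + rng.normal(0.0, noise, size=A.shape[0])
    fit, *_ = np.linalg.lstsq(A, obs, rcond=None)
    eval_vza = np.linspace(-70, 70, n_eval)
    eval_raa = np.where(eval_vza < 0, 180.0, 0.0)
    E = kernels(np.abs(eval_vza), eval_raa)
    return np.sqrt(np.mean((E @ fit - E @ true_coeffs) ** 2))

# Compare a 6-satellite formation's sampling with a 9-view along-track sensor (toy numbers).
true_c = np.array([0.2, 0.05, 0.03])
formation = brdf_retrieval_error([10, 25, 40, 55, 65, 70], [0, 30, 60, 120, 150, 180], true_c)
along_track = brdf_retrieval_error([70, 60, 45, 26, 0, 26, 45, 60, 70],
                                   [0, 0, 0, 0, 0, 180, 180, 180, 180], true_c)
print(formation, along_track)
```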
Lee, Ji-Hoon; Lee, Jung Jin; Lim, Young Jin; Kundu, Sudarshan; Kang, Shin-Woong; Lee, Seung Hee
2013-11-04
Long-standing electro-optic problems of polymer-dispersed liquid crystals (PDLCs), such as low contrast ratio and decreased transmittance at oblique viewing angles, have been addressed with a mixture of dual frequency liquid crystal (DFLC) and reactive mesogen (RM). The DFLC and RM molecules were vertically aligned and then photo-polymerized using UV light. In the scattering state, under a 50 kHz electric field, the DFLC was switched to the planar state, giving a greater extraordinary refractive index than in a normal PDLC cell. Consequently, the scattering intensity and the contrast ratio were increased compared to the conventional PDLC cell. In the transparent state, under a 1 kHz electric field, the extraordinary refractive index of the DFLC was simultaneously matched with the refractive index of the vertically aligned RM so that light scattering at oblique viewing angles was minimized, giving rise to high transmittance at all viewing angles.
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Vanderbilt, V. C.; Robinson, B. F.; Biehl, L. L.; Vanderbilt, A. S.
1981-01-01
The reflectance response of wheat with view angle was analyzed. The analysis, which assumes no atmospheric effects and otherwise simulates the response of a multispectral scanner, is based upon spectra taken continuously in wavelength from 0.45 to 2.4 micrometers at more than 1200 view/illumination directions using an Exotech model 20C spectroradiometer. Data were acquired six meters above four wheat canopies, each at a different growth stage. The analysis shows that the canopy reflective response is a pronounced function of illumination angle, scanner view angle, and wavelength. The variation is greater at low solar elevations than at high solar elevations.
Wu, Qixue; Snyder, Karen Chin; Liu, Chang; Huang, Yimei; Zhao, Bo; Chetty, Indrin J; Wen, Ning
2016-09-30
Treatment of patients with multiple brain metastases using single-isocenter volumetric modulated arc therapy (VMAT) has been shown to decrease treatment time with the tradeoff of a larger low-dose spill to normal brain tissue. We have developed an efficient Projection Summing Optimization Algorithm to optimize the treatment geometry in order to reduce dose to normal brain tissue for radiosurgery of multiple metastases with single-isocenter VMAT. The algorithm: (a) measures coordinates of the outer boundary points of each lesion to be treated using the Eclipse Scripting Application Programming Interface, (b) determines the rotations of couch, collimator, and gantry using three matrices about the cardinal axes, (c) projects the outer boundary points of each lesion onto the Beam's Eye View (BEV) projection plane, (d) optimizes couch and collimator angles by selecting the least total unblocked area for each specific treatment arc, and (e) generates a treatment plan with the optimized angles. The results showed a significant reduction in mean dose and low-dose volume to normal brain, while maintaining similar treatment plan quality, for the thirteen patients treated previously. The algorithm is flexible with regard to beam arrangements and can be integrated directly into the treatment planning system for clinical application.
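Steps (b) and (c) amount to composing rotations about the cardinal axes and projecting lesion boundary points onto the BEV plane. The numpy sketch below uses assumed axis assignments and sign conventions for illustration; the coordinate conventions of the Eclipse API may differ.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def project_to_bev(points, couch_deg, collimator_deg, gantry_deg):
    """Project lesion boundary points (N, 3), in isocenter coordinates, onto the
    Beam's Eye View plane for given couch, gantry, and collimator rotations.

    Axis assignments and rotation signs are illustrative assumptions: couch about
    the vertical (z) axis, gantry about the longitudinal (y) axis, collimator about
    the beam axis, with the beam axis taken as z after rotation.
    """
    couch, coll, gantry = np.radians([couch_deg, collimator_deg, gantry_deg])
    beam_frame = rot_z(coll) @ rot_y(gantry) @ rot_z(couch)   # compose the three rotations
    rotated = points @ beam_frame.T
    return rotated[:, :2]                                      # drop the beam-axis coordinate

# Example: projected footprint of one spherical lesion for a candidate geometry.
theta = np.linspace(0, 2 * np.pi, 60)
lesion = np.column_stack([10 * np.cos(theta), 10 * np.sin(theta), np.zeros_like(theta)])
bev = project_to_bev(lesion + np.array([0.0, 30.0, 10.0]),
                     couch_deg=15, collimator_deg=30, gantry_deg=40)
```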
Normalization of multidirectional red and NIR reflectances with the SAVI
NASA Technical Reports Server (NTRS)
Huete, A. R.; Hua, G.; Qi, J.; Chehbouni, A.; Van Leeuwen, W. J. D.
1992-01-01
Directional reflectance measurements were made over a semi-desert gramma grassland at various times of the growing season. View angle measurements from +40 to -40 degrees were made at various solar zenith angles and soil moisture conditions. The sensitivity of the Normalized Difference Vegetation Index (NDVI) and the Soil Adjusted Vegetation Index (SAVI) to bidirectional measurements was assessed for purposes of improving remote temporal monitoring of vegetation dynamics. The SAVI view angle response was found to be symmetric about nadir while the NDVI response was strongly anisotropic. This enabled the view angle behavior of the SAVI to be normalized with a cosine function. In contrast to the NDVI, the SAVI was able to minimize soil moisture and shadow influences for all measurement conditions.
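For reference, the two indices and one possible form of the cosine view-angle normalization are sketched below. NDVI and SAVI follow their standard definitions; the normalization function and its coefficient are illustrative assumptions about how a near-symmetric angular response could be reduced to a nadir-equivalent value, not the exact correction used in the study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index with the usual soil-adjustment factor L = 0.5."""
    return (nir - red) / (nir + red + L) * (1.0 + L)

def normalize_view_angle(savi_values, view_zenith_deg, a=0.2):
    """Illustrative cosine-based normalization of the near-symmetric SAVI view-angle
    response back to a nadir-equivalent value; the coefficient a is a placeholder
    that would be fitted to the observed angular response."""
    return savi_values / (1.0 + a * (1.0 - np.cos(np.radians(view_zenith_deg))))
```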
Effects of changing canopy directional reflectance on feature selection
NASA Technical Reports Server (NTRS)
Smith, J. A.; Oliver, R. E.; Kilpela, O. E.
1973-01-01
The use of a Monte Carlo model for generating sample directional reflectance data for two simplified target canopies at two different solar positions is reported. Successive iterations through the model permit the calculation of a mean vector and covariance matrix for canopy reflectance for varied sensor view angles. These data may then be used to calculate the divergence between the target distributions for various wavelength combinations and for these view angles. Results of a feature selection analysis indicate that different sets of wavelengths are optimum for target discrimination depending on sensor view angle and that the targets may be more easily discriminated for some scan angles than others. The time-varying behavior of these results is also pointed out.
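The per-class mean vector and covariance matrix, followed by a pairwise separability measure, can be sketched directly. Bhattacharyya distance and the derived Jeffries-Matusita measure are used below as a common stand-in for the divergence criterion named in the abstract.

```python
import numpy as np

def class_statistics(samples):
    """Mean vector and covariance matrix of reflectance samples (n_samples, n_bands)."""
    samples = np.asarray(samples, dtype=float)
    return samples.mean(axis=0), np.cov(samples, rowvar=False)

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussian class models."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def jeffries_matusita(mu1, cov1, mu2, cov2):
    """Jeffries-Matusita separability (0 to 2), often used to rank band subsets."""
    return 2.0 * (1.0 - np.exp(-bhattacharyya(mu1, cov1, mu2, cov2)))
```

Evaluating such a measure for each candidate wavelength combination and view angle reproduces the kind of feature-selection ranking described in the abstract.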
Bidirectional Reflectance Functions for Application to Earth Radiation Budget Studies
NASA Technical Reports Server (NTRS)
Manalo-Smith, N.; Tiwari, S. N.; Smith, G. L.
1997-01-01
Reflected solar radiative fluxes emerging from the top of the Earth's atmosphere are inferred from satellite broadband radiance measurements by applying bidirectional reflectance functions (BDRFs) to account for the anisotropy of the radiation field. BDRFs depend upon the viewing geometry (i.e., solar zenith angle, view zenith angle, and relative azimuth angle), the amount and type of cloud cover, the condition of the intervening atmosphere, and the reflectance characteristics of the underlying surface. A set of operational Earth Radiation Budget Experiment (ERBE) BDRFs is available which was developed from the Nimbus 7 ERB (Earth Radiation Budget) scanner data for a three-angle grid system. An improved set of bidirectional reflectance models is required for mission planning and data analysis of future Earth radiation budget instruments, such as the Clouds and Earth's Radiant Energy System (CERES), and for the enhancement of existing radiation budget data products. This study presents an analytic expression for BDRFs formulated by applying a fit to the ERBE operational model tabulations. A set of model coefficients applicable to any viewing condition is computed for an overcast and a clear sky scene over four geographical surface types (ocean, land, snow, and desert) and for partly cloudy scenes over ocean and land. The models are smooth in terms of the directional angles and adhere to the principle of reciprocity, i.e., they are invariant with respect to the interchange of the incoming and outgoing directional angles. The analytic BDRFs and the radiance standard deviations are compared with the operational ERBE models and validated with ERBE data. The clear ocean model is validated against Dlhopolsky's clear ocean model; Dlhopolsky developed a BDRF of higher angular resolution for clear sky ocean from ERBE radiances. Additionally, the effectiveness of the models in accounting for anisotropy for various viewing directions is tested with the ERBE along-track data. An area viewed from nadir and from the side gives two different radiance measurements but should yield the same flux when converted by the BDRF. The analytic BDRFs are in very good qualitative agreement with the ERBE models. The overcast scenes exhibit constant retrieved albedo over viewing zenith angles for solar zenith angles less than 60 degrees. The clear ocean model does not produce constant retrieved albedo over viewing zenith angles but gives an improvement over the ERBE operational clear sky ocean BDRF.
THE VIEWING ANGLES OF BROAD ABSORPTION LINE VERSUS UNABSORBED QUASARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
DiPompeo, M. A.; Brotherton, M. S.; De Breuck, C.
2012-06-10
It was recently shown that there is a significant difference in the radio spectral index distributions of broad absorption line (BAL) quasars and unabsorbed quasars, with an overabundance of BAL quasars with steeper radio spectra. This result suggests that source orientation does play into the presence or absence of BAL features. In this paper, we provide more quantitative analysis of this result based on Monte Carlo simulations. While the relationship between viewing angle and spectral index does indeed contain a lot of scatter, the spectral index distributions are different enough to overcome that intrinsic variation. Utilizing two different models of the relationship between spectral index and viewing angle, the simulations indicate that the difference in spectral index distributions can be explained by allowing BAL quasar viewing angles to extend about 10° farther from the radio jet axis than non-BAL sources, though both can be seen at small angles. These results show that orientation cannot be the only factor determining whether BAL features are present, but it does play a role.
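A toy Monte Carlo in the spirit of this analysis is sketched below: viewing angles are drawn for BAL and non-BAL populations with different maximum angles from the jet axis, mapped to radio spectral indices through an assumed noisy relation, and the two distributions are compared. The mapping coefficients, angle limits, and sample sizes are placeholders, not the models used in the paper.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

def draw_viewing_angles(n, max_angle_deg):
    """Random orientations uniform in solid angle, truncated at max_angle_deg from the jet axis."""
    cos_min = np.cos(np.radians(max_angle_deg))
    return np.degrees(np.arccos(rng.uniform(cos_min, 1.0, n)))

def spectral_index(theta_deg, scatter=0.3):
    """Assumed noisy mapping from viewing angle to radio spectral index:
    steeper (more negative) spectra at larger angles, plus intrinsic scatter."""
    return -0.2 - 0.012 * theta_deg + rng.normal(0.0, scatter, size=np.size(theta_deg))

# Non-BAL sources confined closer to the jet axis; BALs allowed ~10 deg farther out.
alpha_nonbal = spectral_index(draw_viewing_angles(20000, 35.0))
alpha_bal = spectral_index(draw_viewing_angles(20000, 45.0))
stat, pval = ks_2samp(alpha_nonbal, alpha_bal)
print(f"KS statistic {stat:.3f}, p-value {pval:.2e}")
```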
Image Size Scalable Full-parallax Coloured Three-dimensional Video by Electronic Holography
NASA Astrophysics Data System (ADS)
Sasaki, Hisayuki; Yamamoto, Kenji; Ichihashi, Yasuyuki; Senoh, Takanori
2014-02-01
In electronic holography, various methods have been considered for using multiple spatial light modulators (SLM) to increase the image size. In a previous work, we used a monochrome light source for a method that located an optical system containing lens arrays and other components in front of multiple SLMs. This paper proposes a colourization technique for that system based on time division multiplexing using laser light sources of three colours (red, green, and blue). The experimental device we constructed was able to perform video playback (20 fps) in colour of full parallax holographic three-dimensional (3D) images with an image size of 63 mm and a viewing-zone angle of 5.6 degrees without losing any part of the 3D image.
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Liao, Liang
2013-01-01
As shown by Takahashi et al., multiple path attenuation estimates over the field of view of an airborne or spaceborne weather radar are feasible for off-nadir incidence angles. This follows from the fact that the surface reference technique, which provides path attenuation estimates, can be applied to each radar range gate that intersects the surface. This study builds on this result by showing that three of the modified Hitschfeld-Bordan estimates for the attenuation-corrected radar reflectivity factor can be generalized to the case where multiple path attenuation estimates are available, thereby providing a correction to the effects of nonuniform beamfilling. A simple simulation is presented showing some strengths and weaknesses of the approach.
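For context, the classic single-beam Hitschfeld-Bordan correction with a surface-reference (alpha-adjustment) constraint is sketched below under an assumed k-Z power law. This is the standard building block the abstract refers to, not the nonuniform-beamfilling generalization developed in the paper, and the alpha/beta values are placeholders.

```python
import numpy as np

def hb_correction(z_meas_dbz, range_m, alpha=3.0e-4, beta=0.78, pia_srt_db=None):
    """Hitschfeld-Bordan attenuation correction, optionally constrained by a
    surface-reference (SRT) path-integrated attenuation estimate.

    Assumes a k-Z relation k = alpha * Z**beta with k in dB/km and Z in mm^6 m^-3;
    the alpha/beta values here are placeholders, not operational constants.
    """
    z_lin = 10.0 ** (np.asarray(z_meas_dbz, dtype=float) / 10.0)
    dr_km = np.diff(range_m, prepend=range_m[0]) / 1000.0
    q = 0.2 * np.log(10.0) * beta * alpha
    s = np.cumsum(z_lin ** beta * dr_km)          # S(r) = integral of Zm^beta along the beam
    if pia_srt_db is not None:
        # alpha-adjustment: scale the k-Z coefficient so the HB path attenuation
        # matches the surface-reference estimate at the last gate.
        eps = (1.0 - 10.0 ** (-beta * pia_srt_db / 10.0)) / (q * s[-1])
        q *= eps
    denom = np.clip(1.0 - q * s, 1e-6, None)      # guard against the HB singularity
    z_corr_lin = z_lin / denom ** (1.0 / beta)
    return 10.0 * np.log10(z_corr_lin)
```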
A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.
Qian, Shuo; Sheng, Yang
2011-11-01
Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study aims to present a novel photogrammetry system that can realize simultaneous acquisition of multi-angle head images from a single camera position. With two planar mirrors aligned at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. It is found that the elapsed time of the whole localization procedure is about 3 min, and camera calibration computation takes about 1 min, after the measurement of calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.
Combination of CT scanning and fluoroscopy imaging on a flat-panel CT scanner
NASA Astrophysics Data System (ADS)
Grasruck, M.; Gupta, R.; Reichardt, B.; Suess, Ch.; Schmidt, B.; Stierstorfer, K.; Popescu, S.; Brady, T.; Flohr, T.
2006-03-01
We developed and evaluated a prototype flat-panel-detector-based Volume CT (fpVCT) scanner. The fpVCT scanner consists of a Varian 4030CB a-Si flat-panel detector mounted in a multi-slice CT gantry (Siemens Medical Solutions). It provides a 25 cm field of view with 18 cm z-coverage at the isocenter. In addition to standard tomographic scanning, fpVCT allows two new scan modes: (1) fluoroscopic imaging from any arbitrary rotation angle, and (2) continuous, time-resolved tomographic scanning of a dynamically changing viewing volume. Fluoroscopic imaging is feasible by modifying the standard CT gantry so that the imaging chain can be oriented along any user-selected rotation angle. Scanning with a stationary gantry, after it has been oriented, is equivalent to a conventional fluoroscopic examination. This scan mode enables combined use of high-resolution tomography and real-time fluoroscopy with a clinically usable field of view in the z direction. The second scan mode allows continuous observation of a time-evolving process such as perfusion. The gantry can be continuously rotated for up to 80 sec, with the rotation time ranging from 3 to 20 sec, to gather projection images of a dynamic process. The projection data, which provide a temporal log of the viewing volume, are then converted into multiple image stacks that capture the temporal evolution of a dynamic process. Studies using phantoms, ex vivo specimens, and live animals have confirmed that these new scanning modes are clinically usable and offer a unique view of the anatomy and physiology that heretofore has not been feasible using static CT scanning. At the current level of image quality and temporal resolution, several clinical applications such as dynamic angiography, tumor enhancement pattern and vascularity studies, organ perfusion, and interventional applications are in reach.
Learning class descriptions from a data base of spectral reflectance with multiple view angles
NASA Technical Reports Server (NTRS)
Kimes, Daniel S.; Harrison, Patrick R.; Harrison, P. A.
1992-01-01
A learning program has been developed which combines 'learning by example' with the generate-and-test paradigm to furnish a robust learning environment capable of handling error-prone data. The program is shown to be capable of learning class descriptions from positive and negative training examples of spectral and directional reflectance data taken from soil and vegetation. The program, which used AI techniques to automate very tedious processes, found the sequence of relationships containing the most important information for distinguishing the classes.
Practical system for generating digital mixed reality video holograms.
Song, Joongseok; Kim, Changseob; Park, Hanhoon; Park, Jong-Il
2016-07-10
We propose a practical system that can effectively mix the depth data of real and virtual objects by using a Z buffer and can quickly generate digital mixed reality video holograms by using multiple graphics processing units (GPUs). In an experiment, we verify that real objects and virtual objects can be merged naturally at free viewing angles, and the occlusion problem is well handled. Furthermore, we demonstrate that the proposed system can generate mixed reality video holograms at 7.6 frames per second. Finally, the system performance is further assessed through users' subjective evaluations.
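The depth-based mixing itself is a per-pixel Z-buffer test: whichever of the real or virtual scene is closer at a pixel wins. A small numpy sketch is given below; the array layout and depth units are assumptions for illustration.

```python
import numpy as np

def z_buffer_merge(real_rgb, real_depth, virtual_rgb, virtual_depth):
    """Per-pixel Z-buffer merge of a real scene and a rendered virtual scene.

    real_rgb, virtual_rgb     : (H, W, 3) color images.
    real_depth, virtual_depth : (H, W) depth maps in the same units (e.g., metres);
    invalid or empty pixels should carry +inf so they always lose the depth test.
    """
    virtual_wins = virtual_depth < real_depth            # occlusion test per pixel
    mixed_rgb = np.where(virtual_wins[..., None], virtual_rgb, real_rgb)
    mixed_depth = np.minimum(real_depth, virtual_depth)  # fused depth for hologram synthesis
    return mixed_rgb, mixed_depth
```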
Design of the compact high-resolution imaging spectrometer (CHRIS), and future developments
NASA Astrophysics Data System (ADS)
Cutter, Mike; Lobb, Dan
2017-11-01
The CHRIS instrument was launched on ESA's PROBA platform in October 2001, and is providing hyperspectral images of selected ground areas at 17m ground sampling distance, in the spectral range 415nm to 1050nm. Platform agility allows image sets to be taken at multiple view angles in each overpass. The design of the instrument is briefly outlined, including design of optics, structures, detection and in-flight calibration system. Lessons learnt from construction and operation of the experimental system, and possible design directions for future hyperspectral systems, are discussed.
Multiple incidence angle SIR-B experiment over Argentina
NASA Technical Reports Server (NTRS)
Cimino, Jobea; Casey, Daren; Wall, Stephen; Brandani, Aldo; Domik, Gitta; Leberl, Franz
1986-01-01
The Shuttle Imaging Radar (SIR-B), the second synthetic aperture radar (SAR) to fly aboard a shuttle, was launched on October 5, 1984. One of the primary goals of the SIR-B experiment was to use multiple incidence angle radar images to distinguish different terrain types through the use of their characteristic backscatter curves. This goal was accomplished in several locations including the Chubut Province of southern Argentina. Four descending image acquisitions were collected providing a multiple incidence angle image set. The data were first used to assess stereo-radargrammetric techniques. A digital elevation model was produced using the optimum pair of multiple incidence angle images. This model was then used to determine the local incidence angle of each picture element to generate curves of relative brightness vs. incidence angle. Secondary image products were also generated using the multi-angle data. The results of this work indicate that: (1) various forest species and various structures of a single species may be discriminated using multiple incidence angle radar imagery, and (2) it is essential to consider the variation in backscatter due to a variable incidence angle when analyzing and comparing data collected at varying frequencies and polarizations.
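The step of converting the DEM into per-pixel local incidence angles, from which backscatter-versus-incidence-angle curves can be built, is sketched below. The surface-normal-dotted-with-look-vector geometry is standard; the grid spacing and look geometry values are placeholders.

```python
import numpy as np

def local_incidence_angle(dem, pixel_size_m, look_azimuth_deg, look_angle_deg):
    """Local incidence angle (degrees) for each DEM cell.

    dem          : 2-D array of terrain heights (m).
    pixel_size_m : ground spacing of the DEM grid.
    look_azimuth_deg, look_angle_deg : radar look direction (from north, clockwise)
    and nominal incidence angle from vertical, taken as constant over the scene.
    """
    dz_dy, dz_dx = np.gradient(dem, pixel_size_m)        # terrain slopes
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(dem)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    az, inc = np.radians(look_azimuth_deg), np.radians(look_angle_deg)
    look = np.array([np.sin(inc) * np.sin(az),            # unit vector from ground to radar
                     np.sin(inc) * np.cos(az),
                     np.cos(inc)])
    cos_theta = np.clip(np.tensordot(normals, look, axes=([2], [0])), -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))
```

Binning the measured backscatter of each pixel by its local incidence angle then yields the relative brightness versus incidence angle curves used to separate terrain types.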
4. Elevation view of Bunker 104 with ultrawide angle lens ...
4. Elevation view of Bunker 104 with ultrawide angle lens shows about 70 percent of east facade including entire south end with steps and doors. View shows slope of south end and vegetation growing atop building. See also photo WA-203-C-3. - Puget Sound Naval Shipyard, Munitions Storage Bunker, Naval Ammunitions Depot, South of Campbell Trail, Bremerton, Kitsap County, WA
What is MISR? MISR Instrument? MISR Project?
Atmospheric Science Data Center
2014-12-08
... to improve our understanding of the Earth's environment and climate. Viewing the sunlit Earth simultaneously at nine widely-spaced angles, ... types of atmospheric particles and clouds on climate. The change in reflection at different view angles affords the means to distinguish ...
Friedrich, D T; Sommer, F; Scheithauer, M O; Greve, J; Hoffmann, T K; Schuler, P J
2017-12-01
Objective: Advanced transnasal sinus and skull base surgery remains a challenging discipline for head and neck surgeons. Restricted access and space for instrumentation can impede advanced interventions. Thus, we present the combination of an innovative robotic endoscope guidance system and a specific endoscope with adjustable viewing angle to facilitate transnasal surgery in a human cadaver model. Materials and Methods: The applicability of the robotic endoscope guidance system with custom foot pedal controller was tested for advanced transnasal surgery on a fresh frozen human cadaver head. Visualization was enabled using a commercially available endoscope with adjustable viewing angle (15-90 degrees). Results: Visualization and instrumentation of all paranasal sinuses, including the anterior and middle skull base, were feasible with the presented setup. Control of the robotic endoscope guidance system was precise and effective, and the adjustable endoscope lens extended the view in the surgical field without the repeated exchanges required with fixed-viewing-angle endoscopes. Conclusion: The combination of a robotic endoscope guidance system and an advanced endoscope with adjustable viewing angle enables bimanual surgery in transnasal interventions of the paranasal sinuses and the anterior skull base in a human cadaver model. The adjustable lens allows for the abandonment of fixed-angle endoscopes, saving time and resources, without reducing the quality of imaging.
MISR Decadal Observations of Mineral Dust: Property Characterization and Climate Applications
NASA Technical Reports Server (NTRS)
Kalashnikova, Olga V.; Garay, Michael J.; Sokolik, Irina; Kahn, Ralph A.; Lyapustin, A.; Diner, David J.; Lee, Jae N.; Torres, Omar; Leptoukh, Gregory G.; Sabbah, Ismail
2012-01-01
The Multi-angle Imaging SpectroRadiometer (MISR) provides a unique, independent source of data for studying dust emission and transport. MISR's multiple view angles allow the retrieval of aerosol properties over bright surfaces, and such retrievals have been shown to be sensitive to the non-sphericity of dust aerosols over both land and water. MISR stereographic views of thick aerosol plumes allow height and instantaneous wind derivations at spatial resolutions of better than 1.1 km horizontally and 200 m vertically. We will discuss the radiometric and stereo-retrieval capabilities of MISR specifically for dust, and demonstrate the use of MISR data in conjunction with other available satellite observations for dust property characterization and climate studies. First, we will discuss the MISR non-spherical (dust) fraction product over the global oceans. We will show that over the Atlantic Ocean, changes in the MISR-derived non-spherical AOD fraction illustrate the evolution of dust during transport. Next, we will present a MISR satellite perspective on dust climatology in major dust source regions, with particular emphasis on West Africa and the Middle East, and discuss MISR's unique strengths as well as current product biases. Finally, we will discuss the MISR dust plume product and climatological applications.
1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND ...
1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING NORTH TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA
Synthetic observations of protostellar multiple systems
NASA Astrophysics Data System (ADS)
Lomax, O.; Whitworth, A. P.
2018-04-01
Observations of protostars are often compared with synthetic observations of models in order to infer the underlying physical properties of the protostars. The majority of these models have a single protostar, attended by a disc and an envelope. However, observational and numerical evidence suggests that a large fraction of protostars form as multiple systems. This means that fitting models of single protostars to observations may be inappropriate. We produce synthetic observations of protostellar multiple systems undergoing realistic, non-continuous accretion. These systems consist of multiple protostars with episodic luminosities, embedded self-consistently in discs and envelopes. We model the gas dynamics of these systems using smoothed particle hydrodynamics and we generate synthetic observations by post-processing the snapshots using the SPAMCART Monte Carlo radiative transfer code. We present simulation results of three model protostellar multiple systems. For each of these, we generate 4 × 10⁴ synthetic spectra at different points in time and from different viewing angles. We propose a Bayesian method, using similar calculations to those presented here, but in greater numbers, to infer the physical properties of protostellar multiple systems from observations.
Object tracking using multiple camera video streams
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford
2010-05-01
Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming occlusions that leave an object only partially visible, or hidden, in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, which is applicable, for example, in wireless sensor networks for surveillance or navigation.
Three-dimensional hologram display system
NASA Technical Reports Server (NTRS)
Mintz, Frederick (Inventor); Chao, Tien-Hsin (Inventor); Bryant, Nevin (Inventor); Tsou, Peter (Inventor)
2009-01-01
The present invention relates to a three-dimensional (3D) hologram display system. The 3D hologram display system includes a projector device for projecting an image upon a display medium to form a 3D hologram. The 3D hologram is formed such that a viewer can view the holographic image from multiple angles up to 360 degrees. Multiple display media are described, namely a spinning diffusive screen, a circular diffuser screen, and an aerogel. The spinning diffusive screen utilizes spatial light modulators to control the image such that the 3D image is displayed on the rotating screen in a time-multiplexing manner. The circular diffuser screen includes multiple, simultaneously-operated projectors to project the image onto the circular diffuser screen from a plurality of locations, thereby forming the 3D image. The aerogel can use the projection device described as applicable to either the spinning diffusive screen or the circular diffuser screen.
Lee, Dong Yeon; Seo, Sang Gyo; Kim, Eo Jin; Kim, Sung Ju; Lee, Kyoung Min; Farber, Daniel C; Chung, Chin Youb; Choi, In Ho
2015-01-01
Radiographic examination is a widely used evaluation method in the orthopedic clinic. However, conventional radiography alone does not reflect the dynamic changes between foot and ankle segments during gait. Multiple 3-dimensional multisegment foot models (3D MFMs) have been introduced to evaluate intersegmental motion of the foot. In this study, we evaluated the correlation between static radiographic indices and intersegmental foot motion indices. One hundred twenty-five females were tested. Static radiographs of full-leg and anteroposterior (AP) and lateral foot views were performed. For hindfoot evaluation, we measured the AP tibiotalar angle (TiTA), talar tilt (TT), calcaneal pitch, lateral tibiocalcaneal angle, and lateral talocalcaneal angle. For the midfoot segment, naviculocuboid overlap and talonavicular coverage angle were calculated. AP and lateral talo-first metatarsal angles and metatarsal stacking angle (MSA) were measured to assess the forefoot. Hallux valgus angle (HVA) and hallux interphalangeal angle were measured. In gait analysis by 3D MFM, intersegmental angle (ISA) measurements of each segment (hallux, forefoot, hindfoot, arch) were recorded. ISAs at midstance phase were most highly correlated with radiography. Significant correlations were observed between ISA measurements using MFM and static radiographic measurements in the same segment. In the hindfoot, coronal plane ISA was correlated with AP TiTA (P < .001) and TT (P = .018). In the hallux, HVA was strongly correlated with transverse ISA of the hallux (P < .001). The segmental foot motion indices at midstance phase during gait measured by 3D MFM gait analysis were correlated with the conventional radiographic indices. The observed correlation between MFM measurements at midstance phase during gait and static radiographic measurements supports the fundamental basis for the use of MFM in analysis of dynamic motion of foot segments during gait. © The Author(s) 2014.
Ultraminiature video-rate forward-view spectrally encoded endoscopy with straight axis configuration
NASA Astrophysics Data System (ADS)
Wang, Zhuo; Wu, Tzu-Yu; Hamm, Mark A.; Altshuler, Alexander; Mach, Anderson T.; Gilbody, Donald I.; Wu, Bin; Ganesan, Santosh N.; Chung, James P.; Ikuta, Mitsuhiro; Brauer, Jacob S.; Takeuchi, Seiji; Honda, Tokuyuki
2017-02-01
As one of the smallest endoscopes that have been demonstrated, the spectrally encoded endoscope (SEE) shows potential for use in minimally invasive surgeries. While the original SEE is designed for side-view applications, the forward-view (FV) scope is more desired by physicians for many clinical applications because it provides a more natural navigation. Several FV SEEs have been designed in the past, which involve either multiple optical elements or one optical element with multiple optically active surfaces. Here we report a complete FV SEE which comprises a rotating illumination probe within a drive cable, a sheath and a window to cover the optics, a customized spectrometer, hardware controllers for both motor control and synchronization, and a software suite to capture, process and store images and videos. In this solution, the optical axis is straight and the dispersion element, i.e. the grating, is designed such that the slightly focused light after the focusing element will be dispersed by the grating, covering forward view angles with high diffraction efficiencies. As such, the illumination probe is fabricated with a diameter of only 275 μm. The two-dimensional video-rate image acquisition is realized by rotating the illumination optics at 30 Hz. In one finished design, the scope diameter including the window assembly is 1.2 mm.
79. VIEW OF VAL FIRING RANGE LOOKING SOUTHWEST SHOWING LAUNCHER ...
79. VIEW OF VAL FIRING RANGE LOOKING SOUTHWEST SHOWING LAUNCHER BRIDGE, BARGES, SONAR BUOY RANGE AND MORRIS DAM IN BACKGROUND, June 10, 1948. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA
Mobile Robot Localization by Remote Viewing of a Colored Cylinder
NASA Technical Reports Server (NTRS)
Volpe, R.; Litwin, T.; Matthies, L.
1995-01-01
A system was developed for the Mars Pathfinder rover in which the rover checks its position by viewing the angle back to a colored cylinder with different colors for different angles. The rover determines distance by the apparent size of the cylinder.
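A minimal sketch of the geometry this abstract implies, under assumed numbers (the cylinder height, focal length, and angular width encoded per color band are hypothetical, not Pathfinder values): the color band that is seen gives the bearing back to the cylinder, and the cylinder's apparent size gives range via similar triangles.

```python
import math

CYLINDER_HEIGHT_M = 0.30      # physical height of the colored cylinder (assumed)
FOCAL_LENGTH_PX = 800.0       # camera focal length in pixels (assumed)
DEG_PER_COLOR_BAND = 10.0     # azimuth sector encoded by each color band (assumed)

def range_from_apparent_size(apparent_height_px):
    # Similar triangles: range = H * f / h_pixels
    return CYLINDER_HEIGHT_M * FOCAL_LENGTH_PX / apparent_height_px

def bearing_from_color_band(band_index):
    # Each color band corresponds to a known azimuth sector around the cylinder
    return band_index * DEG_PER_COLOR_BAND

print(range_from_apparent_size(24.0))   # -> 10.0 (metres)
print(bearing_from_color_band(7))       # -> 70.0 (degrees)
```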
New developments of a knowledge based system (VEG) for inferring vegetation characteristics
NASA Technical Reports Server (NTRS)
Kimes, D. S.; Harrison, P. A.; Harrison, P. R.
1992-01-01
An extraction technique for inferring physical and biological surface properties of vegetation using nadir and/or directional reflectance data as input has been developed. A knowledge-based system (VEG) accepts spectral data of an unknown target as input, determines the best strategy for inferring the desired vegetation characteristic, applies the strategy to the target data, and provides a rigorous estimate of the accuracy of the inference. Progress in developing the system is presented. VEG combines methods from remote sensing and artificial intelligence, and integrates input spectral measurements with diverse knowledge bases. VEG has been developed to (1) infer spectral hemispherical reflectance from any combination of nadir and/or off-nadir view angles; (2) test and develop new extraction techniques on an internal spectral database; (3) browse, plot, or analyze directional reflectance data in the system's spectral database; (4) discriminate between user-defined vegetation classes using spectral and directional reflectance relationships; and (5) infer unknown view angles from known view angles (known as view angle extension).
Xiang, Yun; Yan, Lei; Zhao, Yun-sheng; Gou, Zhi-yang; Chen, Wei
2011-12-01
Polarized reflectance is influenced by such factors as the target's physical and chemical properties, the viewing geometry composed of the light incidence zenith, the viewing zenith and the viewing azimuth relative to the light incidence, surface roughness and texture, surface density, detection wavelength, polarization phase angle and so on. In the present paper, the influence of surface roughness on the degree of polarization (DOP) of biotite plagioclase gneiss as a function of viewing angle was investigated and analyzed quantitatively. The polarized spectra were measured by an ASD FS3 spectrometer on the goniometer located at Northeast Normal University. When the incident zenith angle was fixed at 50 degrees, it was shown that, on rock surfaces with different roughness, in the specular reflection direction the DOP spectrum within 350-2500 nm first increased to its highest value and then began to decline as the viewing zenith angle varied from 0 degrees to 80 degrees. The characteristic band (520 ± 10) nm was picked out for further analysis. The correlation analysis between the peak DOP value over zenith angle and surface roughness showed that they follow a power-function relationship, with the regression equation y = 0.604x^(-0.297), R^2 = 0.9854. The correlation model between the angle at which the peak occurs and the surface roughness is y = 3.4194x + 51.584, y < 90 degrees, R^2 = 0.8177. As the detection azimuth moves farther from the 180-degree azimuth where the maximum DOP occurs, the DOP decreases gradually and tends to 0. In the detection azimuth of 180 degrees, the correlation analysis between the peak DOP values in the (520 ± 10) nm band for five rocks and their surface roughness indicates a power function, with the regression equation y = 0.5822x^(-0.333), R^2 = 0.9843. F tests of the above regression models indicate that the peak value and its corresponding viewing angle correlate strongly with surface roughness. The study provides a theoretical basis for polarization remote sensing, and supports rock and urban-architecture discrimination and mineral mapping.
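The power-law regressions quoted above (e.g. y = 0.604x^(-0.297)) can be reproduced with an ordinary nonlinear least-squares fit. The sketch below fits y = a·x^b to made-up roughness/DOP pairs generated from that relation; the data points are placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic demonstration data generated from the abstract's regression, plus noise
roughness = np.array([0.5, 1.0, 2.0, 4.0, 8.0])            # assumed roughness values
peak_dop = 0.604 * roughness ** (-0.297)                    # abstract's power law
peak_dop *= np.random.default_rng(1).normal(1.0, 0.02, 5)   # small multiplicative noise

def power_law(x, a, b):
    return a * x ** b

(a, b), _ = curve_fit(power_law, roughness, peak_dop, p0=(0.6, -0.3))
print(f"fit: y = {a:.3f} * x^({b:.3f})")
```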
NASA Technical Reports Server (NTRS)
Gatebe, C. K.; King, M. D.; Tsay, S.-C.; Ji, Q.; Arnold, T.
2000-01-01
In this sensitivity study, we examined the ratio technique, the official method for remote sensing of aerosols over land from Moderate Resolution Imaging Spectroradiometer (MODIS) data, for view angles from nadir to 65 deg. off-nadir using Cloud Absorption Radiometer (CAR) data collected during the Smoke, Clouds, and Radiation-Brazil (SCAR-B) experiment conducted in 1995. For the data analyzed and for the view angles tested, the results suggest that the reflectances ρ_0.47 and ρ_0.67 are predictable from ρ_2.1 using ρ_0.47 = ρ_2.1/6, which is a slight modification, and ρ_0.67 = ρ_2.1/2. These results hold for targets viewed from the backscattering direction, but not for the forward direction.
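A trivial sketch of the modified ratio relations quoted above, where the subscripts denote band wavelengths in microns; the function name is hypothetical.

```python
def predict_visible_reflectance(rho_2p1):
    """Estimate visible-band surface reflectances from the 2.1-micron reflectance
    using the ratios reported in the abstract (rho_0.47 = rho_2.1/6 is the
    slightly modified ratio; rho_0.67 = rho_2.1/2 is the standard one)."""
    rho_0p47 = rho_2p1 / 6.0   # blue band
    rho_0p67 = rho_2p1 / 2.0   # red band
    return rho_0p47, rho_0p67

print(predict_visible_reflectance(0.12))   # -> (0.02, 0.06)
```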
Wang, Xingliang; Zhang, Youan; Wu, Huali
2016-03-01
The problem of impact angle control guidance for a field-of-view constrained missile against non-maneuvering or maneuvering targets is solved by using the sliding mode control theory. The existing impact angle control guidance laws with field-of-view constraint are only applicable against stationary targets and most of them suffer abrupt-jumping of guidance command due to the application of additional guidance mode switching logic. In this paper, the field-of-view constraint is handled without using any additional switching logic. In particular, a novel time-varying sliding surface is first designed to achieve zero miss distance and zero impact angle error without violating the field-of-view constraint during the sliding mode phase. Then a control integral barrier Lyapunov function is used to design the reaching law so that the sliding mode can be reached within finite time and the field-of-view constraint is not violated during the reaching phase as well. A nonlinear extended state observer is constructed to estimate the disturbance caused by unknown target maneuver, and the undesirable chattering is alleviated effectively by using the estimation as a compensation item in the guidance law. The performance of the proposed guidance law is illustrated with simulations. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
74. DETAIL VIEW OF INSIDE THE LAUNCHING BRIDGE LOOKING SOUTHWEST ...
74. DETAIL VIEW OF INSIDE THE LAUNCHING BRIDGE LOOKING SOUTHWEST SHOWING ADJUSTABLE STAIRS ON THE LEFT AND LAUNCHING TUBE ON THE RIGHT, Date unknown, circa 1948. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA
Optimal design of wide-view-angle waveplate used for polarimetric diagnosis of lithography system
NASA Astrophysics Data System (ADS)
Gu, Honggang; Jiang, Hao; Zhang, Chuanwei; Chen, Xiuguo; Liu, Shiyuan
2016-03-01
The diagnosis and control of polarization aberrations is one of the main concerns in a hyper numerical aperture (NA) lithography system. Waveplates are basic and indispensable optical components in the polarimetric diagnosis tools for the immersion lithography system. The retardance of a birefringent waveplate is highly sensitive to the incident angle of the light, which makes conventional waveplates unsuitable for polarimetric diagnosis in an immersion lithography system with a hyper NA. In this paper, we propose a method for the optimal design of a wide-view-angle waveplate by combining two positive waveplates made from magnesium fluoride (MgF2) and two negative waveplates made from sapphire using the simulated annealing algorithm. Theoretical derivations and numerical simulations are performed and the results demonstrate that the maximum variation in the retardance of the optimally designed wide-view-angle waveplate is less than +/- 0.35° over a wide view-angle range of +/- 20°.
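The sketch below shows the optimization idea only: a simulated annealing loop over the nominal retardances of two "positive" and two "negative" plates, minimizing the peak-to-peak retardance variation over a ±20° incidence range. The quadratic angular dependence and all constants are toy assumptions for illustration, not the birefringence model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
angles = np.deg2rad(np.linspace(-20, 20, 41))
signs = np.array([+1.0, +1.0, -1.0, -1.0])      # MgF2-like (+) and sapphire-like (-) plates
K = 0.8                                          # assumed strength of the angular term (toy)

def retardance_variation(nominal):
    # Toy model: each plate's retardance grows/shrinks quadratically with angle
    total = np.sum(nominal[:, None] * (1.0 + signs[:, None] * K * angles[None, :] ** 2), axis=0)
    return total.max() - total.min()             # peak-to-peak variation (deg)

state = rng.uniform(30.0, 120.0, size=4)         # initial nominal retardances (deg)
cost = retardance_variation(state)
T = 1.0
for step in range(20000):
    cand = state + rng.normal(0.0, 2.0, size=4)  # random perturbation of the design
    c = retardance_variation(cand)
    if c < cost or rng.random() < np.exp((cost - c) / T):
        state, cost = cand, c                    # accept better (or occasionally worse) designs
    T *= 0.9995                                  # geometric cooling schedule
print(state.round(1), f"variation = {cost:.3f} deg")
```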
View-angle-dependent AIRS Cloudiness and Radiance Variance: Analysis and Interpretation
NASA Technical Reports Server (NTRS)
Gong, Jie; Wu, Dong L.
2013-01-01
Upper tropospheric clouds play an important role in the global energy budget and hydrological cycle. Significant view-angle asymmetry has been observed in upper-level tropical clouds derived from eight years of Atmospheric Infrared Sounder (AIRS) 15 um radiances. Here, we find that the asymmetry also exists in the extra-tropics. It is larger during day than during night, more prominent near elevated terrain, and closely associated with deep convection and wind shear. The cloud radiance variance, a proxy for cloud inhomogeneity, shows asymmetry characteristics consistent with those in the AIRS cloudiness. The leading causes of the view-dependent cloudiness asymmetry are the local time difference and small-scale organized cloud structures. The local time difference (1-1.5 hr) of upper-level (UL) clouds between the two AIRS outermost views can create part of the observed asymmetry. On the other hand, small-scale tilted and banded structures of the UL clouds can induce about half of the observed view-angle dependent differences in the AIRS cloud radiances and their variances. This estimate is inferred from an analogous study using Microwave Humidity Sounder (MHS) radiances observed during the period when there were simultaneous measurements at two different view angles from the NOAA-18 and -19 satellites. The existence of tilted cloud structures and asymmetric 15 um and 6.7 um cloud radiances implies that cloud statistics are view-angle dependent, and should be taken into account in radiative transfer calculations, measurement uncertainty evaluations and cloud climatology investigations. In addition, the momentum forcing in the upper troposphere from tilted clouds is also likely asymmetric, which can affect atmospheric circulation anisotropically.
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei
2016-01-01
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision. PMID:27892454
The big picture: effects of surround on immersion and size perception.
Baranowski, Andreas M; Hecht, Heiko
2014-01-01
Despite the fear of the entertainment industry that illegal downloads of films might ruin their business, going to the movies continues to be a popular leisure activity. One reason why people prefer to watch movies in cinemas may be the surround of the movie screen or its physically huge size. To disentangle the factors that might contribute to the size impression, we tested several measures of subjective size and immersion in different viewing environments. For this purpose we built a model cinema that provided visual angle information comparable with that of a real cinema. Subjects watched identical movie clips in a real cinema, a model cinema, and on a display monitor in isolation. Whereas the isolated display monitor was inferior, the addition of a contextual model improved the viewing immersion to the extent that it was comparable with the movie theater experience, provided the viewing angle remained the same. In a further study we built an identical but even smaller model cinema to unconfound visual angle and viewing distance. Both model cinemas produced similar results. There was a trend for the larger screen to be more immersive; however, viewing angle did not play a role in how the movie was evaluated.
22. VAL, VIEW OF PROJECTILE LOADING DECK LOOKING NORTHEAST TOWARD ...
22. VAL, VIEW OF PROJECTILE LOADING DECK LOOKING NORTHEAST TOWARD TOP OF CONCRETE 'A' FRAME STRUCTURE SHOWING DRIVE CABLES, DRIVE GEAR, BOTTOM OF CAMERA TOWER AND 'CROWS NEST' CONTROL ROOM. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA
Wide angle view of the Flight control room of Mission control center
1984-10-06
Wide angle view of the flight control room (FCR) of the Mission Control Center (MCC). Some of the STS 41-G crew can be seen on a large screen at the front of the MCC along with a map tracking the progress of the orbiter.
Correction for reflected sky radiance in low-altitude coastal hyperspectral images.
Kim, Minsu; Park, Joong Yong; Kopilevich, Yuri; Tuell, Grady; Philpot, William
2013-11-10
Low-altitude coastal hyperspectral imagery is sensitive to reflections of sky radiance at the water surface. Even in the absence of sun glint, and for a calm water surface, the wide range of viewing angles may result in pronounced, low-frequency variations of the reflected sky radiance across the scan line depending on the solar position. The variation in reflected sky radiance can be obscured by strong high-spatial-frequency sun glint and, at high altitude, by path radiance. However, at low altitudes, the low-spatial-frequency sky radiance effect is frequently significant and is not removed effectively by the typical corrections for sun glint. The reflected sky radiance from the water surface observed by a low-altitude sensor can be modeled to first approximation as the sum of multiply scattered Rayleigh path radiance and the direct solar beam singly scattered by aerosol in the lower atmosphere. The path radiance from zenith to the half field of view (FOV) of a typical airborne spectroradiometer has relatively minimal variation, and its reflection onto the detector array results in a flat baseline. Therefore the along-track variation is mostly contributed by the forward single-scattered solar-beam radiance. The scattered solar-beam radiances arrive at the water surface with different incident angles. Thus the reflected radiance received at the detector array corresponds to a certain scattering angle, and its variation is most effectively parameterized using the downward scattering angle (DSA) of the solar beam. Computation of the DSA must account for the roll, pitch, and heading of the platform and the viewing geometry of the sensor along with the solar ephemeris. Once the DSA image is calculated, the near-infrared (NIR) radiance from selected water scan lines is compared, and a relationship between DSA and NIR radiance is derived. We then apply the relationship to the entire DSA image to create an NIR reference image. Using the NIR reference image and an atmospheric spectral reflectance look-up table, the low-spatial-frequency variation of the water surface-reflected atmospheric contribution is removed.
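A geometry-only sketch of the DSA computation described above: the downward scattering angle is taken here as the angle between the direct solar beam and the downward ray that, after specular reflection at a flat surface, reaches the sensor, with the sensor's view vector rotated by the platform's roll, pitch, and heading. The frames, sign conventions, and rotation order are assumptions, not the paper's implementation.

```python
import numpy as np

def rot_zyx(heading, pitch, roll):
    ch, sh = np.cos(heading), np.sin(heading)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[ch, -sh, 0], [sh, ch, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def downward_scattering_angle(view_body, heading, pitch, roll, sun_zenith, sun_azimuth):
    # Surface-to-sensor ray rotated into the local east-north-up frame
    u = rot_zyx(heading, pitch, roll) @ view_body
    u = u / np.linalg.norm(u)
    # Downward ray that specularly reflects into u at a flat horizontal surface
    w = np.array([u[0], u[1], -u[2]])
    # Propagation direction of the direct solar beam (pointing down toward the surface)
    s = np.array([-np.sin(sun_zenith) * np.sin(sun_azimuth),
                  -np.sin(sun_zenith) * np.cos(sun_azimuth),
                  -np.cos(sun_zenith)])
    return np.degrees(np.arccos(np.clip(np.dot(s, w), -1.0, 1.0)))

# Example: level flight, nadir-viewing pixel (surface-to-sensor ray pointing straight up)
print(downward_scattering_angle(np.array([0.0, 0.0, 1.0]), 0.0, 0.0, 0.0,
                                np.deg2rad(40.0), np.deg2rad(135.0)))   # -> ~40 deg
```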
Wide angle view of Mission Control Center during Apollo 14 transmission
1971-01-31
S71-17122 (31 Jan. 1971) --- A wide angle overall view of the Mission Operations Control Room (MOCR) in the Mission Control Center at the Manned Spacecraft Center. This view was photographed during the first color television transmission from the Apollo 14 Command Module. Projected on the large screen at the right front of the MOCR is a view of the Apollo 14 Lunar Module, still attached to the Saturn IVB stage. The Command and Service Modules were approaching the LM/S-IVB during transposition and docking maneuvers.
Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization.
Lee, Sing Chun; Fuerst, Bernhard; Fotouhi, Javad; Fischer, Marius; Osgood, Greg; Navab, Nassir
2016-06-01
This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the reconstructed patient's surface without the need to move the C-arm. An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm. Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. To the best of our knowledge, this is the first calibration method which uses only tomographic and RGBD reconstructions. This means that the method does not impose a particular shape of the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
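The registration described above couples FPFH feature matching with Iterative Closest Point refinement. The fragment below sketches only a plain point-to-point ICP refinement step in NumPy/SciPy (nearest-neighbor correspondences plus a Kabsch rotation fit); the FPFH-based coarse alignment and anything specific to the authors' CBCT/RGBD pipeline is omitted, and the function name is hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(source, target, iters=50):
    """Minimal point-to-point ICP sketch. source/target are (N,3)/(M,3) point
    arrays, e.g. an RGBD surface reconstruction and a CBCT-derived surface."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                 # nearest-neighbor correspondences
        tgt = target[idx]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)        # cross-covariance of centered point sets
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T                  # best-fit rotation (Kabsch)
        t_step = mu_t - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step   # accumulate the total rigid transform
    return R, t                                  # maps source points into the target frame
```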
Construction of a three-dimensional interactive model of the skull base and cranial nerves.
Kakizawa, Yukinari; Hongo, Kazuhiro; Rhoton, Albert L
2007-05-01
The goal was to develop an interactive three-dimensional (3-D) computerized anatomic model of the skull base for teaching microneurosurgical anatomy and for operative planning. The 3-D model was constructed using commercially available software (Maya 6.0 Unlimited; Alias Systems Corp., Delaware, MD), a personal computer, four cranial specimens, and six dry bones. Photographs from at least two angles of the superior and lateral views were imported to the 3-D software. Many photographs were needed to produce the model in anatomically complex areas. Careful dissection was needed to expose important structures in the two views. Landmarks, including foramen, bone, and dura mater, were used as reference points. The 3-D model of the skull base and related structures was constructed using more than 300,000 remodeled polygons. The model can be viewed from any angle. It can be rotated 360 degrees in any plane using any structure as the focal point of rotation. The model can be reduced or enlarged using the zoom function. Variable transparencies could be assigned to any structures so that the structures at any level can be seen. Anatomic labels can be attached to the structures in the 3-D model for educational purposes. This computer-generated 3-D model can be observed and studied repeatedly without the time limitations and stresses imposed by surgery. This model may offer the potential to create interactive surgical exercises useful in evaluating multiple surgical routes to specific target areas in the skull base.
Optimal directional view angles for remote-sensing missions
NASA Technical Reports Server (NTRS)
Kimes, D. S.; Holben, B. N.; Tucker, C. J.; Newcomb, W. W.
1984-01-01
The present investigation is concerned with the directional, off-nadir viewing of terrestrial scenes using remote-sensing systems from aircraft and satellite platforms, taking into account advantages of such an approach over strictly nadir viewing systems. Directional reflectance data collected for bare soil and several different vegetation canopies in NOAA-7 AVHRR bands 1 and 2 were analyzed. Optimum view angles were recommended for two strategies. The first strategy views the utility of off-nadir measurements as extending spatial and temporal coverage of the target area. The second strategy views the utility of off-nadir measurements as providing additional information about the physical characteristics of the target. Conclusions regarding the two strategies are discussed.
Optimal angle of needle insertion for fluoroscopy-guided transforaminal epidural injection of L5.
Ra, In-Hoo; Min, Woo-Kie
2015-06-01
Unlike other sites, there is difficulty in performing TFESI at the L5-S1 level because the iliac crest is an obstacle to needle placement. The objective of this study was to identify the optimal angle of fluoroscopy for insertion and advancement of a needle during L5 TFESI. We conducted an observational study of patients undergoing fluoroscopy-guided L5 TFESI in the prone position. A total of 80 patients (40 men and 40 women) with radiating pain of the lower limbs were enrolled. During TFESI, we measured the angle at which the L5 vertebral body forms a rectangular shape and compared men and women. Then, we measured the area of the safe triangle at fluoroscopy tilt angles from 15° to 35° and compared men and women. The mean cephalocaudal angle, where the vertebral body takes the shape of a rectangle, was 11.0° in men and 13.9° in women (P = 0.007). In men, the triangular area was maximal at 18.3 mm² with an oblique view angle of 25°. In women, the area was maximal at 23.6 mm² with an oblique view angle of 30°. At oblique view angles of 30° and 35°, the area was significantly greater in women (P < 0.05). When TFESI is performed at the L5 region in the prone position, placement of fluoroscopy at a cephalocaudal angle of 11.0° and an oblique angle of 25° in men and a cephalocaudal angle of 13.9° and an oblique angle of 30° in women would be most reasonable. © 2014 World Institute of Pain.
Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Yoo, Kwan-Hee; Baasantseren, Ganbat; Park, Jae-Hyeung; Kim, Eun-Soo; Kim, Nam
2014-04-15
We propose a 360-degree integral-floating display with an enhanced vertical viewing angle. The system projects two-dimensional elemental image arrays via a high-speed digital micromirror device projector and reconstructs them into 3D perspectives with a lens array. Double floating lenses relay the initial 3D perspectives to the center of a vertically curved convex mirror. The anamorphic optical system tailors the initial 3D perspectives horizontally and disperses the light rays more widely in the vertical direction. By the proposed method, the entire 3D image provides both monocular and binocular depth cues, a full-parallax demonstration with high angular ray density, and an enhanced vertical viewing angle.
Detection Angle Calibration of Pressure-Sensitive Paints
NASA Technical Reports Server (NTRS)
Bencic, Timothy J.
2000-01-01
Uses of the pressure-sensitive paint (PSP) techniques in areas other than external aerodynamics continue to expand. The NASA Glenn Research Center has become a leader in the application of the global technique to non-conventional aeropropulsion applications including turbomachinery testing. The use of the global PSP technique in turbomachinery applications often requires detection of the luminescent paint in confined areas. With the limited viewing usually available, highly oblique illumination and detection angles are common in the confined areas in these applications. This paper will describe the results of pressure, viewing and excitation angle dependence calibrations using three popular PSP formulations to get a better understanding of the errors associated with these non-traditional views.
Ultrasonic imaging of material flaws exploiting multipath information
NASA Astrophysics Data System (ADS)
Shen, Xizhong; Zhang, Yimin D.; Demirli, Ramazan; Amin, Moeness G.
2011-05-01
In this paper, we consider ultrasonic imaging for the visualization of flaws in a material. Ultrasonic imaging is a powerful nondestructive testing (NDT) tool which assesses material conditions via the detection, localization, and classification of flaws inside a structure. Multipath exploitations provide extended virtual array apertures and, in turn, enhance imaging capability beyond the limitation of traditional multisensor approaches. We utilize reflections of ultrasonic signals which occur when encountering different media and interior discontinuities. The waveforms observed at the physical as well as virtual sensors yield additional measurements corresponding to different aspect angles. Exploitation of multipath information addresses unique issues observed in ultrasonic imaging. (1) Utilization of physical and virtual sensors significantly extends the array aperture for image enhancement. (2) Multipath signals extend the angle of view of the narrow beamwidth of the ultrasound transducers, allowing improved visibility and array design flexibility. (3) Ultrasonic signals experience difficulty in penetrating a flaw, thus the aspect angle of the observation is limited unless access to other sides is available. The significant extension of the aperture makes it possible to yield flaw observation from multiple aspect angles. We show that data fusion of physical and virtual sensor data significantly improves the detection and localization performance. The effectiveness of the proposed multipath exploitation approach is demonstrated through experimental studies.
Concept development for the ITER equatorial port visible∕infrared wide angle viewing system.
Reichle, R; Beaumont, B; Boilson, D; Bouhamou, R; Direz, M-F; Encheva, A; Henderson, M; Huxford, R; Kazarian, F; Lamalle, Ph; Lisgo, S; Mitteau, R; Patel, K M; Pitcher, C S; Pitts, R A; Prakash, A; Raffray, R; Schunke, B; Snipes, J; Diaz, A Suarez; Udintsev, V S; Walker, C; Walsh, M
2012-10-01
The ITER equatorial port visible∕infrared wide angle viewing system concept is developed from the measurement requirements. The proposed solution situates 4 viewing systems in the equatorial ports 3, 9, 12, and 17 with 4 views each (looking at the upper target, the inner divertor, and tangentially left and right). This gives sufficient coverage. The spatial resolution of the divertor system is 2 times higher than the other views. For compensation of vacuum-vessel movements, an optical hinge concept is proposed. Compactness and low neutron streaming is achieved by orienting port plug doglegs horizontally. Calibration methods, risks, and R&D topics are outlined.
81. VIEW OF VAL LOOKING NORTH AS SEEN FROM THE ...
81. VIEW OF VAL LOOKING NORTH AS SEEN FROM THE RESERVOIR SHOWING TWO LAUNCHING TUBES ON THE LAUNCHER BRIDGE, Date unknown, circa 1952. (Original photograph in possession of Dave Willis, San Diego, California.) - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA
63. VIEW LOOKING DOWN VAL LAUNCHING SLAB SHOWING DRIVE GEARS, ...
63. VIEW LOOKING DOWN VAL LAUNCHING SLAB SHOWING DRIVE GEARS, CABLES, LAUNCHER RAILS, PROJECTILE CAR AND SUPPORT CARRIAGE, April 8, 1948. (Original photograph in possession of Dave Willis, San Diego, California.) - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA
View angle effects on relationships between leaf area index in wheat and vegetation indices
NASA Astrophysics Data System (ADS)
Chen, H.; Li, W.; Huang, W.; Niu, Z.
2016-12-01
The effects of plant type and view angle on the canopy-reflected spectrum cannot be ignored in the estimation of leaf area index (LAI) using remote sensing vegetation indices. While vegetation indices derived from nadir-viewing remote sensors are insufficient for leaf area index (LAI) estimation because of their misinterpretation of structural characteristics, vegetation indices derived from multi-angular remote sensors have the potential to improve detection of LAI. However, view angle effects on the relationships between these indices and LAI for low standing crops (i.e. wheat) have not been fully evaluated, which limits their application for consistent and accurate monitoring of vegetation. View angle effects of two types of winter wheat (wheat 411, erectophile; and wheat 9507, planophile) on the relationship between LAI and spectral reflectance are assessed and compared in this study. An evaluation is conducted with in-situ measurements of LAI and bidirectional reflectance in the principal plane from -60° (backscattering direction) to 60° (forward-scattering direction) over the growth cycle of winter wheat. A variety of published vegetation indices (VIs) are calculated from the BRDF data. Additionally, all combinations of the bands are used to calculate Normalized Difference Spectral Indices (NDSI) and Simple Subtraction Indices (SSI). The performance of the above indices, along with raw reflectance and reflectance derivatives, in LAI estimation is examined based on a linearity comparison. The results will be helpful in further developing multi-angle remote sensing models for accurate LAI evaluation.
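The band-pair search mentioned above (all band combinations for NDSI) can be sketched as an exhaustive loop that ranks pairs by their linear correlation with LAI. The reflectance and LAI arrays below are random placeholders standing in for the field measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_bands = 60, 50
reflectance = rng.uniform(0.02, 0.6, size=(n_samples, n_bands))   # placeholder spectra
lai = rng.uniform(0.5, 6.0, size=n_samples)                       # placeholder LAI values

best = (0.0, None)
for i in range(n_bands):
    for j in range(i + 1, n_bands):
        # NDSI(i, j) = (R_i - R_j) / (R_i + R_j)
        ndsi = (reflectance[:, i] - reflectance[:, j]) / (reflectance[:, i] + reflectance[:, j])
        r2 = np.corrcoef(ndsi, lai)[0, 1] ** 2                     # linearity with LAI
        if r2 > best[0]:
            best = (r2, (i, j))
print(f"best band pair {best[1]} with R^2 = {best[0]:.3f}")
```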
Soybean canopy reflectance as a function of view and illumination geometry
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Ranson, K. J.; Vanderbilt, V. C.; Biehl, L. L.; Robinson, B. F.
1982-01-01
The results of an experiment designed to characterize a soybean field by its reflectance at various view and illumination angles and by its physical and agronomic attributes are presented. Reflectances were calculated from measurements at four wavelength bands through eight view azimuth and seven view zenith directions for various solar zenith and azimuth angles during portions of three days. An ancillary data set consisting of the agronomic and physical characteristics of the soybean field is described. The results indicate that the distribution of reflectance from a soybean field is a function of the solar illumination and viewing geometry, wavelength and row direction, as well as the state of development of the canopy. Shadows between rows greatly affected the reflectance in the visible wavelength bands and to a lesser extent in the near infrared wavelengths. A model is proposed that describes the reflectance variation as a function of projected solar and projected viewing angles. The model appears to approximate the reflectance variations in the visible wavelength bands from a canopy with well defined row structure.
Optics of wide-angle panoramic viewing system-assisted vitreous surgery.
Chalam, Kakarla V; Shah, Vinay A
2004-01-01
The purpose of the article is to describe the optics of the contact wide-angle lens system with stereo-reinverter for vitreous surgery. A panoramic viewing system is made up of two components; an indirect ophthalmoscopy lens system for fundus image viewing, which is placed on the patient's cornea as a contact lens, and a separate removable prism system for reinversion of the image mounted on the microscope above the zooming system. The system provides a 104 degrees field of view in a phakic emmetropic eye with minification, which can be magnified by the operating microscope. It permits a binocular stereoptic view even through a small pupil (3 mm) or larger. In an air-filled phakic eye, field of view increases to approximately 130 degrees. The obtained image of the patient's fundus is reinverted to form true, erect, stereoscopic image by the reinversion system. In conclusion, this system permits wide-angle panoramic view of the surgical field. The contact lens neutralizes the optical irregularities of the corneal surface and allows improved visualization in eyes with irregular astigmatism induced by corneal scars. Excellent visualization is achieved in complex clinical situations such as miotic pupils, lenticular opacities, and in air-filled phakic eyes.
Divided attention limits perception of 3-D object shapes
Scharff, Alec; Palmer, John; Moore, Cathleen M.
2013-01-01
Can one perceive multiple object shapes at once? We tested two benchmark models of object shape perception under divided attention: an unlimited-capacity and a fixed-capacity model. Under unlimited-capacity models, shapes are analyzed independently and in parallel. Under fixed-capacity models, shapes are processed at a fixed rate (as in a serial model). To distinguish these models, we compared conditions in which observers were presented with simultaneous or sequential presentations of a fixed number of objects (The extended simultaneous-sequential method: Scharff, Palmer, & Moore, 2011a, 2011b). We used novel physical objects as stimuli, minimizing the role of semantic categorization in the task. Observers searched for a specific object among similar objects. We ensured that non-shape stimulus properties such as color and texture could not be used to complete the task. Unpredictable viewing angles were used to preclude image-matching strategies. The results rejected unlimited-capacity models for object shape perception and were consistent with the predictions of a fixed-capacity model. In contrast, a task that required observers to recognize 2-D shapes with predictable viewing angles yielded an unlimited capacity result. Further experiments ruled out alternative explanations for the capacity limit, leading us to conclude that there is a fixed-capacity limit on the ability to perceive 3-D object shapes. PMID:23404158
57. INTERIOR VIEW OF VAL BRIDGE STRUCTURE SHOWING LAUNCHING TUBE, ...
57. INTERIOR VIEW OF VAL BRIDGE STRUCTURE SHOWING LAUNCHING TUBE, STAIRS AND PORTION OF LAUNCHING DECK. NOTE SUPPORT CARRIAGE ASSEMBLY IN DISTANCE. Date unknown, circa March 1948. (Original photograph in possession of Dave Willis, San Diego, California.) - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA
Characteristics of mist 3D screen for projection type electro-holography
NASA Astrophysics Data System (ADS)
Sato, Koki; Okumura, Toshimichi; Kanaoka, Takumi; Koizumi, Shinya; Nishikawa, Satoko; Takano, Kunihiko
2006-01-01
A hologram provides a full-parallax 3D image. In this case a more natural 3D image is obtained because focusing and convergence coincide with each other. We are working toward a practical electro-holography system, because in conventional electro-holography the image viewing angle is very small, due to the limited display pixel size. We are now developing a new method for a large viewing angle based on space projection. A white laser illuminates a single DMD panel (time-shared CGH of the three RGB colors). A 3D space screen composed of very small water particles is used to reconstruct the 3D image with a large viewing angle through scattering from the water particles.
Hologram generation by horizontal scanning of a high-speed spatial light modulator.
Takaki, Yasuhiro; Okada, Naoya
2009-06-10
In order to increase the image size and the viewing zone angle of a hologram, a high-speed spatial light modulator (SLM) is imaged as a vertically long image by an anamorphic imaging system, and this image is scanned horizontally by a galvano scanner. The reduction in horizontal pixel pitch of the SLM provides a wide viewing zone angle. The increased image height and horizontal scanning increased the image size. We demonstrated the generation of a hologram having a 15 degrees horizontal viewing zone angle and an image size of 3.4 inches with a frame rate of 60 Hz using a digital micromirror device with a frame rate of 13.333 kHz as a high-speed SLM.
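The trade-off this abstract exploits, i.e. that a smaller effective horizontal pixel pitch widens the viewing zone, is commonly summarized by the diffraction relation θ_zone = 2·arcsin(λ/(2p)). The relation's use here and the pitch values below are illustrative assumptions, not the paper's device specifications.

```python
import numpy as np

wavelength = 532e-9                    # assumed green laser wavelength, metres

def viewing_zone_deg(pitch_m):
    # Viewing-zone angle of a pixelated hologram with pixel pitch p
    return np.degrees(2.0 * np.arcsin(wavelength / (2.0 * pitch_m)))

for pitch in (13.7e-6, 4.0e-6, 1.0e-6):
    print(f"pitch {pitch*1e6:4.1f} um -> viewing zone {viewing_zone_deg(pitch):5.1f} deg")
```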
On the viewing angle dependence of blazar variability
NASA Astrophysics Data System (ADS)
Eldar, Avigdor; Levinson, Amir
2000-05-01
Internal shocks propagating through an ambient radiation field are subject to a radiative drag that, under certain conditions, can significantly affect their dynamics, and consequently the evolution of the beaming cone of emission produced behind the shocks. The resultant change of the Doppler factor combined with opacity effects leads to a strong dependence on the viewing angle of the variability pattern produced by such systems; specifically, the shape of the light curves and the characteristics of correlated emission. One implication is that objects oriented at relatively large viewing angles to the observer should exhibit a higher level of activity at high synchrotron frequencies (above the self-absorption frequency), and also at gamma-ray energies below the threshold energy of pair production, than at lower (radio/millimetre) frequencies.
Zheng, Yongbin; Chen, Huimin; Zhou, Zongtan
2018-05-23
The accurate angle measurement of objects outside the linear field of view (FOV) is a challenging task for a strapdown semi-active laser seeker and is not yet well resolved. Considering the fact that the strapdown semi-active laser seeker is equipped with GPS and an inertial navigation system (INS) on a missile, in this work, we present an angle measurement method based on the fusion of the seeker’s data and GPS and INS data for a strapdown semi-active laser seeker. When an object is in the nonlinear FOV or outside the FOV, by solving the problems of space consistency and time consistency, the pitch angle and yaw angle of the object can be calculated via the fusion of the last valid angles measured by the seeker and the corresponding GPS and INS data. The numerical simulation results demonstrate the correctness and effectiveness of the proposed method.
NASA Technical Reports Server (NTRS)
McFarland, Shane M.
2008-01-01
Field of view has always been a design feature paramount to helmet design, and in particular space suit design, where the helmet must provide an adequate field of view for a large range of activities, environments, and body positions. For Project Constellation, a slightly different approach to helmet requirement maturation was utilized; one that was less a direct function of body position and suit pressure and more a function of the mission segment in which the field of view is required. Through taxonomization of the various parameters that affect suited FOV, as well as consideration of possible nominal and contingency operations during each mission segment, a reduction process was able to condense the large number of possible outcomes to only six unique field of view angle requirements that still captured all necessary variables without sacrificing fidelity. The specific field of view angles were defined by considering mission segment activities, historical performance of other suits, comparison between similar requirements (pressure visor up versus down, etc.), estimated requirements from other teams for field of view (Orion, Altair, EVA), previous field of view tests, medical data for shirtsleeve field of view performance, and mapping of visual field data to generate 45-degree off-axis field of view requirements. Full resolution of several specific field of view angle requirements warranted further work, which consisted of low- and medium-fidelity field of view testing in the rear entry ISuit and DO27 helmet prototype. This paper serves to document this reduction process and the follow-up testing employed to write the Constellation requirements for helmet field of view.
Biophysical and spectral modeling for crop identification and assessment
NASA Technical Reports Server (NTRS)
Goel, N. S. (Principal Investigator)
1984-01-01
The development of a technique for estimating all canopy parameters occurring in a canopy reflectance model from measured canopy reflectance data is summarized. The Suits and SAIL models for a uniform and homogeneous crop canopy were used to determine whether the leaf area index and the leaf angle distribution could be estimated. Optimal solar/view angles for measuring CR were also investigated. The use of CR in many wavelengths or spectral bands and of linear and nonlinear transforms of CRs for various solar/view angles and various spectral bands is discussed, as well as the inversion of radiance data inside the canopy, angle transforms for filtering out terrain slope effects, and modification of one-dimensional models.
Robust human detection, tracking, and recognition in crowded urban areas
NASA Astrophysics Data System (ADS)
Chen, Hai-Wen; McGurr, Mike
2014-06-01
In this paper, we present algorithms we recently developed to support an automated security surveillance system for very crowded urban areas. In our approach for human detection, the color features are obtained by taking the differences of the R, G, B channels and converting R, G, B to HSV (Hue, Saturation, Value) space. Morphological patch filtering and regional minimum and maximum segmentation on the extracted features are applied for target detection. The human tracking approach includes: 1) track candidate selection by color and intensity feature matching; 2) three separate parallel trackers for color, bright (above mean intensity), and dim (below mean intensity) detections, respectively; 3) adaptive track gate size selection for reducing false tracking probability; and 4) forward position prediction based on previous moving speed and direction for continued tracking even when detections are missed from frame to frame. Human target recognition is improved with a Super-Resolution Image Enhancement (SRIE) process. This process can improve target resolution by 3-5 times and can simultaneously process many targets that are tracked. Our approach can project tracks from one camera to another camera with a different perspective viewing angle to obtain additional biometric features from different perspective angles, and to continue tracking the same person from the second camera even though the person has moved out of the field of view (FOV) of the first camera ('tracking relay'). Finally, the multiple cameras at different view poses have been geo-rectified to the nadir view plane and geo-registered with Google Earth (or other GIS) to obtain accurate positions (latitude, longitude, and altitude) of the tracked humans for pin-point targeting and for a top view of total human motion activity over a large area. Preliminary tests of our algorithms indicate that a high probability of detection can be achieved for both moving and stationary humans. Our algorithms can simultaneously track more than 100 human targets with an average tracking period (time length) longer than the performance of the current state-of-the-art.
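As a rough sketch of the detection front end described above (channel-difference color features, HSV conversion, and morphological patch filtering), the fragment below produces candidate centroids from a single frame with OpenCV. All thresholds, kernel sizes, and the area gate are assumptions, not the authors' tuned values.

```python
import cv2
import numpy as np

def detect_human_candidates(frame_bgr):
    """Return centroids of candidate patches from one BGR frame (illustrative sketch)."""
    img = frame_bgr.astype(np.int16)
    b, g, r = img[:, :, 0], img[:, :, 1], img[:, :, 2]
    color_diff = np.abs(r - g) + np.abs(g - b) + np.abs(b - r)     # crude colorfulness cue
    saturation = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)[:, :, 1]

    mask = ((color_diff > 60) | (saturation > 80)).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)          # drop small speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)         # fill small holes

    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Keep patches whose area is plausible for a person at this camera scale (assumed range)
    return [tuple(centroids[k]) for k in range(1, n)
            if 150 < stats[k, cv2.CC_STAT_AREA] < 20000]
```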
NASA Astrophysics Data System (ADS)
Lu, Wei; Sun, Jianfeng; Hou, Peipei; Xu, Qian; Xi, Yueli; Zhou, Yu; Zhu, Funan; Liu, Liren
2017-08-01
Performance of satellite laser communications between GEO and LEO satellites can be influenced by background light noise appearing in the field of view due to sunlight or planets and some comets. Such influences should be studied on a ground testing platform before space application. In this paper, we introduce a simulator that can reproduce the real case of background light noise in the space environment during data exchange via laser beam between two distant satellites. This simulator can simulate not only the effect of a multi-wavelength spectrum, but also the effects of adjustable field-of-view angles, a large range of adjustable optical power, and adjustable deflection speeds of the light noise in the space environment. We integrate these functions into a device of small and compact size for easily mobile use. A software control function is also provided via a personal computer to adjust these functions arbitrarily.
Ash from Kilauea Eruption Viewed by NASA's MISR
Atmospheric Science Data Center
2018-06-07
The Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite captured this view of the island as it passed overhead.
2015-03-30
After a couple of years in high-inclination orbits that limited its ability to encounter Saturn's moons, NASA's Cassini spacecraft returned to Saturn's equatorial plane in March 2015. As a prelude to its return to the realm of the icy satellites, the spacecraft had its first relatively close flyby of an icy moon (apart from Titan) in almost two years on Feb. 9. During this encounter Cassini's cameras captured images of the icy moon Rhea, as shown in these two image mosaics. The views were taken about an hour and a half apart as Cassini drew closer to Rhea. Images taken using clear, green, infrared and ultraviolet spectral filters were combined to create these enhanced color views, which offer an expanded range of the colors visible to human eyes in order to highlight subtle color differences across Rhea's surface. The moon's surface is fairly uniform in natural color. The image at right represents one of the highest resolution color views of Rhea released to date. A larger, monochrome mosaic is available in PIA07763. Both views are orthographic projections facing toward terrain on the trailing hemisphere of Rhea. An orthographic view is most like the view seen by a distant observer looking through a telescope. The views have been rotated so that north on Rhea is up. The smaller view at left is centered at 21 degrees north latitude, 229 degrees west longitude. Resolution in this mosaic is 450 meters (1,476 feet) per pixel. The images were acquired at a distance that ranged from about 51,200 to 46,600 miles (82,100 to 74,600 kilometers) from Rhea. The larger view at right is centered at 9 degrees north latitude, 254 degrees west longitude. Resolution in this mosaic is 300 meters (984 feet) per pixel. The images were acquired at a distance that ranged from about 36,000 to 32,100 miles (57,900 to 51,700 kilometers) from Rhea. The mosaics each consist of multiple narrow-angle camera (NAC) images with data from the wide-angle camera used to fill in areas where NAC data was not available. The image was produced by Heike Rosenberg and Tilmann Denk at Freie Universität in Berlin, Germany. http://photojournal.jpl.nasa.gov/catalog/PIA19057
High Spectral Resolution Lidar Measurements of Multiple Scattering
NASA Technical Reports Server (NTRS)
Eloranta, E. W.; Piironen, P.
1996-01-01
The University of Wisconsin High Spectral Resolution Lidar (HSRL) provides unambiguous measurements of backscatter cross section, backscatter phase function, depolarization, and optical depth. This is accomplished by dividing the lidar return into separate particulate and molecular contributions. The molecular return is then used as a calibration target. We have modified the HSRL to use an I2 molecular absorption filter to separate aerosol and molecular signals. This allows measurement in dense clouds. Useful profiles extend above the cloud base until the two way optical depth reaches values between 5 and 6; beyond this, photon counting errors become large. In order to observe multiple scattering, the HSRL includes a channel which records the combined aerosol and molecular lidar return simultaneously with the spectrometer channel measurements of optical properties. This paper describes HSRL multiple scattering measurements from both water and ice clouds. These include signal strengths and depolarizations as a function of receiver field of view. All observations include profiles of extinction and backscatter cross sections. Measurements are also compared to predictions of a multiple scattering model based on small angle approximations.
3. Elevation view of entire midsection using ultrawide angle lens. ...
3. Elevation view of entire midsection using ultrawide angle lens. Note opened south doors and closed north doors. The following photo WA-203-C-4 is similar except the camera position was moved right to include the slope of the south end. - Puget Sound Naval Shipyard, Munitions Storage Bunker, Naval Ammunitions Depot, South of Campbell Trail, Bremerton, Kitsap County, WA
The pigeon's distant visual acuity as a function of viewing angle.
Uhlrich, D J; Blough, P M; Blough, D S
1982-01-01
Distant visual acuity was determined for several viewing angles in two restrained White Carneaux pigeons. The behavioral technique was a classical conditioning procedure that paired presentation of sinusoidal gratings with shock. A conditioned heart rate acceleration during the grating presentation indicated resolution of the grating. The birds' acuity was fairly uniform across a large range of the lateral visual field; performance decreased slightly for posterior stimulus placement and sharply for frontal placements. The data suggest that foveal viewing is relatively less advantageous for acuity in pigeons than in humans. The data are also consistent with the current view that pigeons are myopic in frontal vision.
Krotkov, N A; Vasilkov, A P
2000-03-20
Use of a vertical polarizer has been suggested to reduce the effects of surface reflection in the above-water measurements of marine reflectance. We suggest using a similar technique for airborne or spaceborne sensors when atmospheric scattering adds its own polarization signature to the upwelling radiance. Our own theoretical sensitivity study supports the recommendation of Fougnie et al. [Appl. Opt. 38, 3844 (1999)] (40-50 degrees vertical angle and azimuth angle near 135 degrees, polarizer parallel to the viewing plane) for above-water measurements. However, the optimal viewing directions (and the optimal orientation of the polarizer) change with altitude above the sea surface, solar angle, and atmospheric vertical optical structure. A polarization efficiency function is introduced, which shows the maximal possible polarization discrimination of the background radiation for an arbitrary altitude above the sea surface, viewing direction, and solar angle. Our comment is meant to encourage broader application of airborne and spaceborne polarization sensors in remote sensing of water and sea surface properties.
Integrated large view angle hologram system with multi-slm
NASA Astrophysics Data System (ADS)
Yang, ChengWei; Liu, Juan
2017-10-01
Recently, holographic display has attracted much attention for its ability to generate real-time 3D reconstructed images. CGH provides an effective way to produce holograms, and a spatial light modulator (SLM) is used to reconstruct the image. However, the reconstruction system is usually heavy and complex, and the view angle is limited by the pixel size and the spatial bandwidth product (SBP) of the SLM. In this paper, a light, portable holographic display system is proposed by integrating the optical elements and the host computer units, which significantly reduces the space taken in the horizontal direction. The CGH is produced based on Fresnel diffraction and the point-source method. To reduce the memory usage and image distortion, we use an optimized accurate compressed look-up table method (AC-LUT) to compute the hologram. In the system, six SLMs are concatenated into a curved plane, each loading a phase-only hologram of the object from a different angle, so that the horizontal view angle of the reconstructed image can be expanded to about 21.8°.
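A minimal point-source Fresnel hologram sketch for a single SLM tile is shown below: every object point contributes a spherical-wave phase on the hologram plane, and a phase-only hologram is taken as the argument of the summed field. Resolution, pixel pitch, wavelength, and the object points are illustrative assumptions; the paper's AC-LUT method precomputes these contributions in a lookup table rather than evaluating them directly as done here.

```python
import numpy as np

wavelength = 532e-9
k = 2.0 * np.pi / wavelength
pitch = 8e-6                                # assumed SLM pixel pitch (metres)
nx, ny = 1024, 1024                         # assumed hologram resolution

x = (np.arange(nx) - nx / 2) * pitch
y = (np.arange(ny) - ny / 2) * pitch
X, Y = np.meshgrid(x, y)

# A couple of object points (x, y, z) in metres, purely for illustration
object_points = [(0.0, 0.0, 0.20), (1e-3, -0.5e-3, 0.22)]

field = np.zeros((ny, nx), dtype=np.complex128)
for px, py, pz in object_points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)   # distance to each hologram pixel
    field += np.exp(1j * k * r) / r                        # spherical-wave contribution

phase_hologram = np.angle(field)            # phase-only hologram to load on one SLM
```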
A Wide Field of View Plasma Spectrometer
Skoug, Ruth M.; Funsten, Herbert O.; Moebius, Eberhard; ...
2016-07-01
Here we present a fundamentally new type of space plasma spectrometer, the wide field of view plasma spectrometer, whose field of view is >1.25π sr using fewer resources than traditional methods. The enabling component is analogous to a pinhole camera with an electrostatic energy-angle filter at the image plane. Particle energy-per-charge is selected with a tunable bias voltage applied to the filter plate relative to the pinhole aperture plate. For a given bias voltage, charged particles from different directions are focused by different angles to different locations. Particles with appropriate locations and angles can transit the filter plate and are measured using a microchannel plate detector with a position-sensitive anode. Full energy and angle coverage are obtained using a single high-voltage power supply, resulting in considerable resource savings and allowing measurements at fast timescales. Lastly, we present laboratory prototype measurements and simulations demonstrating the instrument concept and discuss optimizations of the instrument design for application to space measurements.
Panoramic cone beam computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang Jenghwa; Zhou Lili; Wang Song
2012-05-15
Purpose: Cone-beam computed tomography (CBCT) is the main imaging tool for image-guided radiotherapy, but its functionality is limited by a small imaging volume and restricted imaging position (peripheral lesions are imaged at the central instead of the treatment position to avoid collisions). In this paper, the authors present the concept of "panoramic CBCT," which can image patients at the treatment position with an imaging volume as large as practically needed. Methods: In this novel panoramic CBCT technique, the target is scanned sequentially from multiple view angles. For each view angle, a half scan (180° + θ_cone, where θ_cone is the cone angle) is performed with the imaging panel positioned in any location along the beam path. The panoramic projection images of all views for the same gantry angle are then stitched together with the direct image stitching method (i.e., according to the reported imaging position), and full-fan, half-scan CBCT reconstruction is performed using the stitched projection images. To validate this imaging technique, the authors simulated cone-beam projection images of the Mathematical Cardiac Torso (MCAT) thorax phantom for three panoramic views. Gaps, repeated/missing columns, and different exposure levels were introduced between adjacent views to simulate imperfect image stitching due to uncertainties in imaging position or output fluctuation. A modified simultaneous algebraic reconstruction technique (modified SART) was developed to reconstruct CBCT images directly from the stitched projection images. As a gold standard, full-fan, full-scan (360° gantry rotation) CBCT reconstructions were also performed using projection images of one imaging panel large enough to encompass the target. Contrast-to-noise ratio (CNR) and geometric distortion were evaluated to quantify the quality of reconstructed images. Monte Carlo simulations were performed to evaluate the effect of scattering on the image quality and imaging dose for both standard and panoramic CBCT. Results: Truncated images with artifacts were observed for the CBCT reconstruction using projection images of the central view only. When the image stitching was perfect, complete reconstruction was obtained for the panoramic CBCT using the modified SART, with image quality similar to the gold standard (full-scan, full-fan CBCT using one large imaging panel). Imperfect image stitching, on the other hand, led to (streak, line, or ring) reconstruction artifacts, reduced CNR, and/or distorted geometry. Results from Monte Carlo simulations showed that, for identical imaging quality, the imaging dose was lower for the panoramic CBCT than that acquired with one large imaging panel. For the same imaging dose, the CNR of the three-view panoramic CBCT was 50% higher than that of the regular CBCT using one big panel. Conclusions: The authors have developed a panoramic CBCT technique and demonstrated with simulation data that it can image tumors of any location for patients of any size at the treatment position with comparable or less imaging dose and time. However, the image quality of this CBCT technique is sensitive to the reconstruction artifacts caused by imperfect image stitching. Better algorithms are therefore needed to improve the accuracy of image stitching for panoramic CBCT.
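For reference, a sketch of the standard SART update on which such a modified reconstruction would build (the paper's specific modification for stitched panoramic projections is not reproduced here); a_ij are the system-matrix weights, p_i the measured line integrals, and lambda a relaxation factor:

```latex
x_j^{(k+1)} = x_j^{(k)} + \lambda \,
\frac{\displaystyle \sum_{i \in I_\phi} a_{ij}\,
      \frac{p_i - \sum_{n} a_{in} x_n^{(k)}}{\sum_{n} a_{in}}}
     {\displaystyle \sum_{i \in I_\phi} a_{ij}},
\qquad I_\phi = \{\text{rays in projection } \phi\}.
```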
A new technique for the measurement of surface shear stress vectors using liquid crystal coatings
NASA Technical Reports Server (NTRS)
Reda, Daniel C.; Muratore, J. J., Jr.
1994-01-01
Research has recently shown that liquid crystal coating (LCC) color-change response to shear depends on both shear stress magnitude and direction. Additional research was thus conducted to extend the LCC method from a flow-visualization tool to a surface shear stress vector measurement technique. A shear-sensitive LCC was applied to a planar test surface and illuminated by white light from the normal direction. A fiber optic probe was used to capture light scattered by the LCC from a point on the centerline of a turbulent, tangential-jet flow. Both the relative shear stress magnitude and the relative in-plane view angle between the sensor and the centerline shear vector were systematically varied. A spectrophotometer was used to obtain scattered-light spectra which were used to quantify the LCC color (dominant wavelength) as a function of shear stress magnitude and direction. At any fixed shear stress magnitude, the minimum dominant wavelength was measured when the shear vector was aligned with and directed away from the observer; changes in the relative in-plane view angle to either side of this vector/observer aligned position resulted in symmetric Gaussian increases in measured dominant wavelength. Based on these results, a vector measurement methodology, involving multiple oblique-view observations of the test surface, was formulated. Under present test conditions, the measurement resolution of this technique was found to be +/- 1 deg for vector orientations and +/- 5% for vector magnitudes. An approach to extend the present methodology to full-surface applications is proposed.
Mapping wave breaking and residual foam using infrared remote sensing
NASA Astrophysics Data System (ADS)
Carini, R. J.; Jessup, A. T.; Chickadel, C.
2012-12-01
Quantifying wave breaking in the surfzone is important for the advancement of models that seek to accurately predict energy dissipation, near-shore circulation, wave-current interactions, and air-sea gas transfer. Electro-optical remote sensing has been used to try to identify breaking waves. However, the residual foam, left over after the wave has broken, is indistinguishable from active foam in the visible band, which makes identification of active breaking difficult. Here, we explore infrared remote sensing of breaking waves at near-grazing incidence angles to differentiate between active and residual foam in the surfzone. Measurements were made at two field sites: Duck, NC, in September 2010 (Surf Zone Optics) and New River Inlet, NC, in May 2012 (RIVET). At both sites, multiple IR cameras were mounted to a tower onshore, viewing the surfzone at near-grazing incidence angles. For near-grazing incidence angles, small changes in viewing angle, such as those produced by the slope of a wave face, cause large modulations of the infrared signal. Therefore, the passage of waves can be seen in IR imagery. Wave breaking, however, is identified by the resulting foam. Foam has a higher emissivity than undisturbed water and thus appears warmer in an IR image. Residual foam cools quickly [Marmorino and Smith, 2005], thereby making its signal distinct from that of foam produced during active wave breaking. We will use these properties to develop a technique to produce spatial and temporal maps of active breaking and residual foam. These products can then be used to validate current models of surfzone bubbles and foam coverage. From the maps, we can also estimate energy dissipation due to wave breaking in the surfzone and compare this to estimates made with in situ data.; Infrared image of the surfzone at Duck, NC. Examples of actively breaking foam and cool residual foam are labeled.
Orita, Sumihisa; Yamagata, Masatsune; Ikeda, Yoshikazu; Nakajima, Fumitake; Aoki, Yasuchika; Nakamura, Junichi; Takahashi, Kazuhisa; Suzuki, Takane; Ohtori, Seiji
2015-10-17
Lumbar floating fusion occasionally causes postoperative adjacent segment disorder (ASD) at the lumbosacral level, producing L5 spinal nerve disorder through L5-S1 foraminal stenosis. This disorder is considered one of the major outcomes of L5-S1 ASD, but it has not yet been evaluated. The present study aimed to evaluate the incidence and risk factors of postoperative L5 spinal nerve disorder after lumbar interbody fusion extending to the L5 vertebra. We evaluated 125 patients with a diagnosis of spondylolisthesis who underwent floating fusion surgery with transforaminal lumbar interbody fusion, with an average postoperative follow-up of 25.2 months. Patients were regarded as symptomatic if they had postoperative L5 spinal nerve disorder such as radicular pain/numbness in the lower limbs and/or motor dysfunction. We estimated and compared the wedging angle (frontal view) and height (lateral view) of the lumbosacral junction in pre- and postoperative plain X-ray images and the foraminal ratio (ratio of the narrower foraminal diameter to the wider diameter in the craniocaudal direction) in the preoperative magnetic resonance image. Risk factors for the incidence of L5 spinal nerve disorder were explored using multivariate logistic regression. Eight of the 125 patients (6.4%) were categorized as symptomatic, an average of 13.3 months after surgery. The wedging angle was significantly higher, and the foraminal ratio significantly lower, in the symptomatic group than in the asymptomatic group (both P < 0.05). Multivariate logistic regression analysis of possible risk factors revealed that the wedging angle, foraminal ratio, and multilevel fusion were statistically significant. A higher wedging angle and a lower foraminal ratio in the lumbosacral junction, as well as multilevel fusion, were significantly predictive of the incidence of L5 nerve root disorder. These findings indicate that lumbosacral fixation should be considered for patients with these risk factors even if they have few symptoms from the L5-S1 junction.
NASA Astrophysics Data System (ADS)
Xu, F.; van Harten, G.; Kalashnikova, O. V.; Diner, D. J.; Seidel, F. C.; Garay, M. J.; Dubovik, O.
2016-12-01
The Airborne Multi-angle SpectroPolarimetric Imager (AirMSPI) [1] has been flying aboard the NASA ER-2 high-altitude aircraft since October 2010. In step-and-stare operation mode, AirMSPI acquires radiance and polarization data at 355, 380, 445, 470*, 555, 660*, 865*, and 935 nm (* denotes polarimetric bands). The imaged area covers about 10 km by 10 km and is observed from 9 view angles between ±67° off nadir. We have developed an efficient and flexible code that uses the information content of AirMSPI data for a coupled retrieval of aerosol properties and surface reflection. The retrieval is built on the multi-pixel optimization concept [2], with the use of a hybrid radiative transfer model [3] that combines the Markov chain [4] and adding/doubling methods [5]. The convergence and robustness of our algorithm is ensured by applying constraints on (a) the spectral variation of the bidirectional polarization distribution function (BPDF) and the angular shape of the bidirectional reflectance distribution function (BRDF); (b) the spectral variation of aerosol optical properties; and (c) the spatial variation of aerosol parameters across neighboring image pixels. Our retrieval approach has been tested using over 20 AirMSPI datasets having low to moderately high aerosol loadings (0.02 < τ at 550 nm < 0.45) and acquired during several field campaigns. Results are compared with AERONET aerosol reference data. We also explore the benefits of AirMSPI's ultraviolet and polarimetric bands as well as the use of multiple view angles. References: [1] D. J. Diner et al., Atmos. Meas. Tech. 6, 1717 (2013). [2] O. Dubovik et al., Atmos. Meas. Tech. 4, 975 (2011). [3] F. Xu et al., Atmos. Meas. Tech. 9, 2877 (2016). [4] F. Xu et al., Opt. Lett. 36, 2083 (2011). [5] J. E. Hansen and L. D. Travis, Space Sci. Rev. 16, 527 (1974).
THOR: Cloud Thickness from Off beam Lidar Returns
NASA Technical Reports Server (NTRS)
Cahalan, Robert F.; McGill, Matthew; Kolasinski, John; Varnai, Tamas; Yetzer, Ken
2004-01-01
Conventional wisdom is that lidar pulses do not significantly penetrate clouds having optical thickness exceeding about tau = 2, and that no returns are detectable from more than a shallow skin depth. Yet optically thicker clouds of tau much greater than 2 reflect a larger fraction of visible photons, and account for much of Earth's global average albedo. As cloud layer thickness grows, an increasing fraction of reflected photons are scattered multiple times within the cloud, and return from a diffuse concentric halo that grows around the incident pulse, increasing in horizontal area with layer physical thickness. The reflected halo is largely undetected by the narrow field-of-view (FoV) receivers commonly used in lidar applications. THOR - Thickness from Off-beam Returns - is an airborne wide-angle detection system with multiple FoVs, capable of observing the diffuse halo and detecting the wide-angle signal from which the physical thickness of optically thick clouds can be retrieved. In this paper we describe the THOR system, demonstrate that the halo signal is stronger for thicker clouds, and validate physical thickness retrievals for clouds having tau > 20, from NASA P-3B flights over the Department of Energy/Atmospheric Radiation Measurement/Southern Great Plains site, using the lidar, radar and other ancillary ground-based data.
Thin plate spline feature point matching for organ surfaces in minimally invasive surgery imaging
NASA Astrophysics Data System (ADS)
Lin, Bingxiong; Sun, Yu; Qian, Xiaoning
2013-03-01
Robust feature point matching for images with large view angle changes in Minimally Invasive Surgery (MIS) is a challenging task due to low texture and specular reflections in these images. This paper presents a new approach that can improve feature matching performance by exploiting the inherent geometric property of the organ surfaces. Recently, intensity based template image tracking using a Thin Plate Spline (TPS) model has been extended for 3D surface tracking with stereo cameras. The intensity based tracking is also used here for 3D reconstruction of internal organ surfaces. To overcome the small displacement requirement of intensity based tracking, feature point correspondences are used for proper initialization of the nonlinear optimization in the intensity based method. Second, we generate simulated images from the reconstructed 3D surfaces under all potential view positions and orientations, and then extract feature points from these simulated images. The obtained feature points are then filtered and re-projected to the common reference image. The descriptors of the feature points under different view angles are stored to ensure that the proposed method can tolerate a large range of view angles. We evaluate the proposed method with silicon phantoms and in vivo images. The experimental results show that our method is much more robust with respect to the view angle changes than other state-of-the-art methods.
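A minimal sketch of fitting and applying a 2D thin plate spline from matched feature points, assuming exact interpolation without the intensity-based refinement or stereo reconstruction used in the paper; the control-point coordinates below are made up for illustration.

```python
import numpy as np

def tps_kernel(r):
    # U(r) = r^2 log(r), with U(0) = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        u = r ** 2 * np.log(r)
    return np.nan_to_num(u)

def fit_tps(src, dst):
    """Fit a 2D thin plate spline mapping src control points onto dst."""
    n = src.shape[0]
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    K = tps_kernel(d)
    P = np.hstack([np.ones((n, 1)), src])       # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)                # [w (n x 2); a (3 x 2)]

def apply_tps(params, src, pts):
    n = src.shape[0]
    w, a = params[:n], params[n:]
    d = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1)
    return tps_kernel(d) @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a

# Toy example: matched feature points in two views (hypothetical coordinates)
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
dst = src + np.array([[0.02, 0.01], [0.0, 0.03], [0.01, 0.0], [0.03, 0.02], [0.05, 0.04]])
params = fit_tps(src, dst)
print(apply_tps(params, src, src))              # reproduces dst at the control points
```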
NASA Astrophysics Data System (ADS)
Je, Uikyu; Cho, Hyosung; Lee, Minsik; Oh, Jieun; Park, Yeonok; Hong, Daeki; Park, Cheulkyu; Cho, Heemoon; Choi, Sungil; Koo, Yangseo
2014-06-01
Recently, reducing radiation doses has become an issue of critical importance in the broader radiological community. As a possible technical approach, especially, in dental cone-beam computed tomography (CBCT), reconstruction from limited-angle view data (< 360°) would enable fast scanning with reduced doses to the patient. In this study, we investigated and implemented an efficient reconstruction algorithm based on compressed-sensing (CS) theory for the scan geometry and performed systematic simulation works to investigate the image characteristics. We also performed experimental works by applying the algorithm to a commercially-available dental CBCT system to demonstrate its effectiveness for image reconstruction in incomplete data problems. We successfully reconstructed CBCT images with incomplete projections acquired at selected scan angles of 120, 150, 180, and 200° with a fixed angle step of 1.2° and evaluated the reconstruction quality quantitatively. Both simulation and experimental demonstrations of the CS-based reconstruction from limited-angle view data show that the algorithm can be applied directly to current dental CBCT systems for reducing the imaging doses and further improving the image quality.
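One common compressed-sensing formulation for limited-angle CBCT reconstruction, given here only as a generic sketch (the specific functional, constraints, and solver used in this work may differ): a data-fidelity term over the truncated set of projections is combined with a sparsifying total-variation penalty, where A is the cone-beam projector for the acquired view angles, b the measured projections, and beta a regularization weight.

```latex
\hat{x} \;=\; \arg\min_{x \ge 0} \; \tfrac{1}{2}\,\lVert A x - b \rVert_2^2
\;+\; \beta\, \mathrm{TV}(x),
\qquad
\mathrm{TV}(x) \;=\; \sum_{j} \left\lVert (\nabla x)_j \right\rVert_2 .
```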
Attenuation of multiples in image space
NASA Astrophysics Data System (ADS)
Alvarez, Gabriel F.
In complex subsurface areas, attenuation of 3D specular and diffracted multiples in data space is difficult and inaccurate. In those areas, image space is an attractive alternative. There are several reasons: (1) migration increases the signal-to-noise ratio of the data; (2) primaries are mapped to coherent events in Subsurface Offset Domain Common Image Gathers (SODCIGs) or Angle Domain Common Image Gathers (ADCIGs); (3) image space is regular and smaller; (4) attenuating the multiples in data space leaves holes in the frequency-Wavenumber space that generate artifacts after migration. I develop a new equation for the residual moveout of specular multiples in ADCIGs and use it for the kernel of an apex-shifted Radon transform to focus and separate the primaries from specular and diffracted multiples. Because of small amplitude, phase and kinematic errors in the multiple estimate, we need adaptive matching and subtraction to estimate the primaries. I pose this problem as an iterative least-squares inversion that simultaneously matches the estimates of primaries and multiples to the data. Standard methods match only the estimate of the multiples. I demonstrate with real and synthetic data that the method produces primaries and multiples with little cross-talk. In 3D, the multiples exhibit residual moveout in SODCIGs in in-line and cross-line offsets. They map away from zero subsurface offsets when migrated with the faster velocity of the primaries. In ADCIGs the residual moveout of the primaries as a function of the aperture angle, for a given azimuth, is flat for those angles that illuminate the reflector. The multiples have residual moveout towards increasing depth for increasing aperture angles at all azimuths. As a function of azimuth, the primaries have better azimuth resolution than the multiples at larger aperture angles. I show, with a real 3D dataset, that even below salt, where illumination is poor, the multiples are well attenuated in ADCIGs with the new Radon transform in planes of azimuth-stacked ADCIGs. The angle stacks of the estimated primaries show little residual multiple energy.
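As a sketch of the transform family involved (not the author's specific residual-moveout kernel), a generic apex-shifted parabolic Radon transform over an ADCIG d(z, gamma), with gamma the aperture angle, scans a curvature q and an apex location gamma_0 in addition to the intercept depth z_0; specular multiples tend to map near a zero apex shift while diffracted multiples focus at shifted apexes.

```latex
m(z_0, q, \gamma_0) \;=\; \int d\big(z = z_0 + q\,(\gamma - \gamma_0)^2,\; \gamma\big)\, d\gamma .
```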
System for star catalog equalization to enhance attitude determination
NASA Technical Reports Server (NTRS)
Liu, Yong (Inventor); Wu, Yeong-Wei Andy (Inventor); Li, Rongsheng (Inventor)
2001-01-01
An apparatus for star catalog equalization to enhance attitude determination includes a star tracker, a star catalog, and a controller. The star tracker is used to sense the positions of stars and generate signals corresponding to the positions of the stars as seen in its field of view. The star catalog contains star location data that is stored using a primary array and multiple secondary arrays sorted by declination (DEC) and right ascension (RA), respectively. The star location data stored in the star catalog is predetermined by calculating a plurality of desired star locations and associating one of a plurality of stars with each of the plurality of desired star locations, based upon a neighborhood association angle, to generate an associated plurality of star locations. If an artificial star gap occurs during association, then the neighborhood association angle for reassociation is increased. The controller uses the star catalog to determine which stars to select to provide star measurement residuals for correcting gyroscope bias and spacecraft attitude.
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; ...
2016-11-28
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.
NASA Astrophysics Data System (ADS)
Xu, F.; Diner, D. J.; Seidel, F. C.; Dubovik, O.; Zhai, P.
2014-12-01
A vector Markov chain radiative transfer method was developed for forward modeling of radiance and polarization fields in a coupled atmosphere-ocean system. The method was benchmarked against an independent Successive Orders of Scattering code and linearized through the use of Jacobians. Incorporated with the multi-patch optimization algorithm and look-up-table method, simultaneous aerosol and ocean color retrievals were performed using imagery acquired by the Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) when it was operated in step-and-stare mode with 9 viewing angles ranging between ±67°. Data from channels near 355, 380, 445, 470*, 555, 660*, and 865* nm were used in the retrievals, where the asterisk denotes the polarimetric bands. Retrievals were run for AirMSPI overflights over Southern California and Monterey Bay, CA. For the relatively high aerosol optical depth (AOD) case (~0.28 at 550 nm), the retrieved aerosol concentration, size distribution, water-leaving radiance, and chlorophyll concentration were compared to those reported by the USC SeaPRISM AERONET-OC site off the coast of Southern California on 6 February 2013. For the relatively low AOD case (~0.08 at 550 nm), the retrieved aerosol concentration and size distribution were compared to those reported by the Monterey Bay AERONET site on 28 April 2014. Further, we evaluate the benefits of multi-angle and polarimetric observations by performing the retrievals using (a) all view angles and channels; (b) all view angles but radiances only (no polarization); (c) the nadir view angle only with both radiance and polarization; and (d) the nadir view angle without polarization. Optimized retrievals using different initial guesses were performed to provide a measure of retrieval uncertainty. Removal of multi-angular or polarimetric information resulted in increases in both parameter uncertainty and systematic bias. Potential accuracy improvements afforded by applying constraints on the surface and aerosol parametric models will also be discussed.
Gooi, Patrick; Ahmed, Yusuf; Ahmed, Iqbal Ike K
2014-07-01
We describe the use of a microscope-mounted wide-angle point-of-view camera to record optimal hand positions in ocular surgery. The camera is mounted close to the objective lens beneath the surgeon's oculars and faces the same direction as the surgeon, providing a surgeon's view. A wide-angle lens enables viewing of both hands simultaneously and does not require repositioning the camera during the case. Proper hand positioning and instrument placement through microincisions are critical for effective and atraumatic handling of tissue within the eye. Our technique has potential in the assessment and training of optimal hand position for surgeons performing intraocular surgery. It is an innovative way to routinely record instrument and operating hand positions in ophthalmic surgery and has minimal requirements in terms of cost, personnel, and operating-room space. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
On-orbit Characterization of RVS for MODIS Thermal Emissive Bands
NASA Technical Reports Server (NTRS)
Xiong, X.; Salomonson, V.; Chiang, K.; Wu, A.; Guenther, B.; Barnes, W.
2004-01-01
Response versus scan angle (RVS) is a key calibration parameter for remote sensing radiometers that make observations using a scanning optical system, such as a scan mirror in MODIS and GLI or a rotating telescope in SeaWiFS and VIIRS, since the calibration is typically performed at a fixed viewing angle while the Earth scene observations are made over a range of viewing angles. Terra MODIS has been in operation for more than four years since its launch in December 1999. It has 36 spectral bands covering spectral range from visible (VIS) to long-wave infrared (LWIR). It is a cross-track scanning radiometer using a two-sided paddle wheel scan mirror, making observations over a wide field of view (FOV) of +/-55 deg from the instrument nadir. This paper describes on-orbit characterization of MODIS RVS for its thermal emissive bands (TEB), using the Earth view data collected during Terra spacecraft deep space maneuvers (DSM). Comparisons with pre-launch analysis and early on-orbit measurements are also provided.
NASA Astrophysics Data System (ADS)
Edwards, C. S.; Bandfield, J. L.; Christensen, P. R.
2006-12-01
It is possible to obtain surface roughness characteristics by measuring a single surface from multiple emission angles and azimuths in the thermal infrared. Surfaces will have different temperatures depending on their orientation relative to the sun. A different proportion of sunlit versus shaded surfaces will be in the field of view depending on the viewing orientation, resulting in apparent temperature differences. This difference in temperature can be used to calculate the slope characteristics of the observed area. The technique can be useful for determining surface slope characteristics not resolvable by orbital imagery. There are two main components to this model: a surface DEM, in this case a synthetic, two-dimensional sine-wave surface, and a thermal model (provided by H. Kieffer). Using albedo, solar longitude, slope, and azimuth, along with several other parameters, the temperature for each cell of the DEM is calculated with the thermal model. A temperature is then predicted using the same observation geometries as the Thermal Emission Spectrometer (TES) observations. A temperature difference is calculated for the two complementary viewing azimuths and emission angles from the DEM. These values are then compared to the observed temperature difference to determine the surface slope. This method has been applied to TES Emission Phase Function (EPF) observations for both the spectrometer and bolometer data, with a footprint size of tens of kilometers. These specialized types of TES observations measure nearly the same surface from several angles. Accurate surface kinetic temperatures are obtained after the application of an atmospheric correction for the TES bolometer and/or spectrometer. Initial results include an application to the northern circumpolar dunes. An average maximum slope of ~33 degrees has been obtained, which makes physical sense since this is near the angle of repose for sand-sized particles. There is some scatter in the data from separate observations, which may be due to the large footprint size. This technique can be better understood and characterized by correlation with high-resolution imagery. Several different surface maps will also be tested in addition to the two-dimensional sine-wave surface. Finally, by modeling the thermal effects on different particle sizes and landforms, we can further interpret the scale of these slopes.
NPP VIIRS on-orbit calibration and characterization using the moon
NASA Astrophysics Data System (ADS)
Sun, J.; Xiong, X.; Butler, J.
2012-09-01
The Visible Infrared Imager Radiometer Suite (VIIRS) is one of five instruments on board the Suomi National Polar-orbiting Partnership (NPP) satellite that launched from Vandenberg Air Force Base, Calif., on Oct. 28, 2011. VIIRS has been scheduled to view the Moon approximately monthly with a spacecraft roll maneuver since its nadir door opened on November 21, 2011. To reduce the uncertainty of the radiometric calibration due to the view geometry, the lunar phase angles of the scheduled lunar observations were confined to the range from -56° to -55° in the first three scheduled lunar observations and then changed to the range from -51.5° to -50.5°, where the negative sign for the phase angles indicates that VIIRS views a waxing moon. Unlike the MODIS lunar observations, most scheduled VIIRS lunar views occur on the day side of the Earth. For the safety of the instrument, the roll angles of the scheduled VIIRS lunar observations are required to be within [-14°, 0°], and the aforementioned change of the phase angle range was aimed at further minimizing the roll angle required for each lunar observation while keeping unchanged the number of months in which the moon can be viewed by the VIIRS instrument each year. The lunar observations can be used to identify whether there is crosstalk among VIIRS bands and to track on-orbit changes in the VIIRS Reflective Solar Bands (RSB) detector gains. In this paper, we report our results using the lunar observations to examine the on-orbit crosstalk effects among NPP VIIRS bands, to track the VIIRS RSB gain changes in the first few months on-orbit, and to compare the gain changes derived from lunar and SD/SDSM calibrations.
Measuring contact angle and meniscus shape with a reflected laser beam.
Eibach, T F; Fell, D; Nguyen, H; Butt, H J; Auernhammer, G K
2014-01-01
Side-view imaging of the contact angle between an extended planar solid surface and a liquid is problematic. Even when aligning the view perfectly parallel to the contact line, focusing one point of the contact line is not possible. We describe a new measurement technique for determining contact angles with the reflection of a widened laser sheet on a moving contact line. We verified this new technique measuring the contact angle on a cylinder, rotating partially immersed in a liquid. A laser sheet is inclined under an angle φ to the unperturbed liquid surface and is reflected off the meniscus. Collected on a screen, the reflection image contains information to determine the contact angle. When dividing the laser sheet into an array of laser rays by placing a mesh into the beam path, the shape of the meniscus can be reconstructed from the reflection image. We verified the method by measuring the receding contact angle versus speed for aqueous cetyltrimethyl ammonium bromide solutions on a smooth hydrophobized as well as on a rough polystyrene surface.
NASA Technical Reports Server (NTRS)
Tzortziou, Maria; Krotkov, Nickolay A.; Cede, Alexander; Herman, Jay R.; Vasilkov, Alexander
2008-01-01
This paper describes and applies a new technique for retrieving diurnal variability in tropospheric ozone vertical distribution using ground-based measurements of ultraviolet sky radiances. The measured radiances are obtained by a polarization-insensitive modified Brewer double spectrometer located at Goddard Space Flight Center, in Greenbelt, Maryland, USA. Results demonstrate that the Brewer angular (0-72deg viewing zenith angle) and spectral (303-320 nm) measurements of sky radiance in the solar principal plane provide sufficient information to derive tropospheric ozone diurnal variability. In addition, the Brewer measurements provide stratospheric ozone vertical distributions at least twice per day near sunrise and sunset. Frequent measurements of total column ozone amounts from direct-sun observations are used as constraints in the retrieval. The vertical ozone profile resolution is shown in terms of averaging kernels to yield at least four points in the troposphere-low stratosphere, including good information in Umkehr layer 0 (0-5 km). The focus of this paper is on the derivation of stratospheric and tropospheric ozone profiles using both simulated and measured radiances. We briefly discuss the necessary modifications of the Brewer spectrometer that were used to eliminate instrumental polarization sensitivity so that accurate sky radiances can be obtained in the presence of strong Rayleigh scattering and aerosols. The results demonstrate that including a site-specific and time-dependent aerosol correction, based on Brewer direct-sun observations of aerosol optical thickness, is critical to minimize the sky radiance residuals as a function of observing angle in the optimal estimation inversion algorithm and improve the accuracy of the retrieved ozone profile.
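For context, the standard linear optimal-estimation (maximum a posteriori) retrieval and its averaging kernel, given as a generic sketch rather than the paper's exact implementation (the constraints from direct-sun total ozone and the aerosol correction are not captured here); K is the weighting-function (Jacobian) matrix, S_epsilon the measurement-error covariance, x_a the a priori profile, and S_a the a priori covariance.

```latex
\hat{x} \;=\; x_a + \left(K^{T} S_\epsilon^{-1} K + S_a^{-1}\right)^{-1} K^{T} S_\epsilon^{-1}\left(y - K x_a\right),
\qquad
A \;=\; \left(K^{T} S_\epsilon^{-1} K + S_a^{-1}\right)^{-1} K^{T} S_\epsilon^{-1} K .
```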
Detecting text in natural scenes with multi-level MSER and SWT
NASA Astrophysics Data System (ADS)
Lu, Tongwei; Liu, Renjun
2018-04-01
Text detection in natural scenes is susceptible to factors such as complex backgrounds, variable viewing angles, and diverse languages, which lead to poor detection results. To address these problems, a new text detection method is proposed that consists of two main stages: candidate region extraction and text region detection. In the first stage, the method applies multiple scale transformations of the original image and multiple thresholds of maximally stable extremal regions (MSER) so that character regions are detected comprehensively. In the second stage, stroke width transform (SWT) maps are computed for the candidate regions, and cascaded classifiers are used to reject non-text regions. The proposed method was evaluated on the standard ICDAR2011 benchmark dataset and on a dataset of our own. The experimental results show that the proposed method achieves substantial improvements over other text detection methods.
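A minimal sketch of the multi-scale MSER candidate-extraction stage using OpenCV, assuming an input image file and scale factors chosen purely for illustration; the SWT computation and the cascaded classification stage are not shown.

```python
import cv2

def multiscale_mser_boxes(image_path, scales=(1.0, 0.75, 0.5)):
    """Collect MSER candidate bounding boxes across several image scales."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    mser = cv2.MSER_create()
    boxes = []
    for s in scales:
        resized = cv2.resize(gray, None, fx=s, fy=s, interpolation=cv2.INTER_AREA)
        regions, bboxes = mser.detectRegions(resized)
        for (x, y, w, h) in bboxes:
            # Map each box back to the original image coordinates
            boxes.append((int(x / s), int(y / s), int(w / s), int(h / s)))
    return boxes

# Usage (hypothetical file name):
# candidates = multiscale_mser_boxes("scene.jpg")
```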
NASA Technical Reports Server (NTRS)
Gatebe, C. K.; King, M. D.; Tsay, S.-C.; Ji, Q.
2000-01-01
Remote sensing of aerosol over land from MODIS will be based on dark targets using the mid-IR channels at 2.1 and 3.9 micron. This approach was developed by Kaufman et al. (1997), who suggested that the dark surface reflectance in the red channel (0.66 micron, ρ_0.66) is half of that at 2.2 micron (ρ_2.2), and the reflectance in the blue channel (0.49 micron, ρ_0.49) is a quarter of that at 2.2 micron. Using this relationship, the surface reflectance in the visible channels can be predicted from ρ_2.2 to within Δρ_0.49 ≈ Δρ_0.66 ≈ 0.006 for ρ_2.2 ≤ 0.10. This is half the error obtained using the 3.75 micron channel and corresponds to an error in aerosol optical thickness of Δτ ≈ 0.06. These results, though applicable to several biomes (e.g., forests and brighter lower canopies), have only been tested at one view angle, the nadir (θ = 0°). Considering the importance of the results in remote sensing of aerosols over land surfaces from space, we are validating the relationships for off-nadir view angles using Cloud Absorption Radiometer (CAR) data. The CAR data are available for channels between 0.3 and 2.3 micron and for different surface types and conditions: forest, tundra, ocean, sea ice, swamp, grassland, and areas covered with smoke. In this study we analyzed data collected during the Smoke, Clouds, and Radiation - Brazil (SCAR-B) experiment to validate Kaufman et al.'s (1997) results for non-nadir view angles. We will show the correlation between ρ_0.472, ρ_0.675, and ρ_2.2 for view angles between nadir (0°) and 55° off-nadir, and for different viewing directions in the backscatter and forward-scatter directions.
There is no bidirectional hot-spot in Sentinel-2 data
NASA Astrophysics Data System (ADS)
Li, Z.; Roy, D. P.; Zhang, H.
2017-12-01
The Sentinel-2 multi-spectral instrument (MSI) acquires reflective wavelength observations with directional effects due to surface reflectance anisotropy, often described by the bidirectional reflectance distribution function (BRDF). Recently, we quantified Sentinel-2A (S2A) BRDF effects for 20° × 10° of southern Africa sensed in January and in April 2016 and found maximum BRDF effects for the January data and at the western scan edge, i.e., in the back-scatter direction (Roy et al. 2017). The hot-spot is the term used to describe the increased directional reflectance that occurs over most surfaces when the solar and viewing directions coincide, and has been observed in wide-field of view data such as MODIS. Recently, we observed that Landsat data will not have a hot-spot because the global annual minimum solar zenith angle is more than twice the maximum view zenith angle (Zhang et al. 2016). This presentation examines if there is a S2A hot-spot which may be possible as it has a wider field of view (20.6°) and higher orbit (786 km) than Landsat. We examined a global year of S2A metadata extracted using the Committee on Earth Observation Satellite Visualization Environment (COVE) tool, computed the solar zenith angles in the acquisition corners, and ranked the acquisitions by the solar zenith angle in the back-scatter direction. The available image data for the 10 acquisitions with the smallest solar zenith angle over the year were ordered from the ESA and their geometries examined in detail. The acquisition closest to the hot-spot had a maximum scattering angle of 173.61° on its western edge (view zenith angle 11.91°, solar zenith angle 17.97°) and was acquired over 60.80°W 24.37°N on June 2nd 2016. Given that hot-spots are only apparent when the scattering angle is close to 180° we conclude from this global annual analysis that there is no hot-spot in Sentinel-2 data. Roy, D.P, Li, J., Zhang, H.K., Yan, L., Huang, H., Li, Z., 2017, Examination of Sentinel-2A multi-spectral instrument (MSI) reflectance anisotropy and the suitability of a general method to normalize MSI reflectance to nadir BRDF adjusted reflectance, RSE. 199, 25-38. Zhang, H. K., Roy, D.P., Kovalskyy, V., 2016, Optimal solar geometry definition for global long term Landsat time series bi-directional reflectance normalization, IEEE TGRS. 54(3), 1410-1418.
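A small sketch of the geometry check described here: computing the sun-sensor phase angle and the corresponding scattering angle (180° at the hot-spot) from solar and view zenith/azimuth angles. The zenith angles in the example are the acquisition-corner values quoted above, while the relative azimuth and its sign convention are assumptions chosen for illustration.

```python
import numpy as np

def scattering_angle_deg(sza, vza, saa, vaa):
    """Phase angle between solar and view directions, and the scattering angle.
    Angles in degrees; the relative-azimuth convention here is an assumption."""
    sza_r, vza_r = np.radians(sza), np.radians(vza)
    raa_r = np.radians(saa - vaa)
    cos_phase = (np.cos(sza_r) * np.cos(vza_r)
                 + np.sin(sza_r) * np.sin(vza_r) * np.cos(raa_r))
    phase = np.degrees(np.arccos(np.clip(cos_phase, -1.0, 1.0)))
    return phase, 180.0 - phase

# Corner values from the acquisition discussed above; relative azimuth is illustrative
phase, scat = scattering_angle_deg(sza=17.97, vza=11.91, saa=0.0, vaa=8.5)
print(round(phase, 2), round(scat, 2))   # ~6.4 and ~173.6 for a near back-scatter geometry
```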
ERIC Educational Resources Information Center
Hsu, Wen-Chun; Shih, Ju-Ling
2016-01-01
In this study, learning the routine of Tantui, a branch of martial arts, was taken as the object of research. Fitts' stages of motor learning and augmented reality (AR) were applied to a 3D mobile-assisted learning system for martial arts, which was characterized by free viewing angles. With the new system, learners could rotate the viewing angle of…
Brain activation in parietal area during manipulation with a surgical robot simulator.
Miura, Satoshi; Kobayashi, Yo; Kawamura, Kazuya; Nakashima, Yasutaka; Fujie, Masakatsu G
2015-06-01
We present an evaluation method to quantify the embodiment caused by the physical difference between master-slave surgical robots, by measuring the activation of the intraparietal sulcus in the user's brain during surgical robot manipulation. We show the change of embodiment based on the change of the optical axis-to-target view angle in the surgical simulator, which changes the manipulator's appearance in the monitor in terms of hand-eye coordination. The objective is to explore the change of brain activation according to the change of the optical axis-to-target view angle. In the experiments, we used a functional near-infrared spectroscopic topography (f-NIRS) brain imaging device to measure the brain activity of seven subjects while they moved the hand controller to insert a curved needle into a target using the manipulator in a surgical simulator. The experiment was carried out several times with a variety of optical axis-to-target view angles. Some participants showed a significant peak (P value = 0.037, F value = 2.841) when the optical axis-to-target view angle was 75°. The positional relationship between the manipulators and the endoscope at 75° would be the closest to the human physical relationship between the hands and eyes.
Multiple incidence angle SIR-B experiment over Argentina: Mapping of forest units
NASA Technical Reports Server (NTRS)
Cimino, J.; Casey, D.; Wall, S. D.; Brandani, A.; Rabassa, J.
1986-01-01
Multiple incidence angle SIR-B data of the Cordon la Grasa region of Chubut Province, Argentina, are used to discriminate various forest types by their relative brightness versus incidence angle signatures. The region contains several species of Nothofagus whose canopy structure changes with elevation, slope, and exposure. In general, the factors that appear to affect the radar response most are canopy structure, density, and ground cover (in particular, the presence or absence of dead trunks and branches). The results of this work indicate that (1) different forest species, and structures of a single species, may be discriminated using multiple incidence angle radar imagery, and (2) it is essential to consider the variation in backscatter due to incidence angle when analyzing and comparing data collected at varying frequencies and polarizations.
5. VIEW OF FRONT (WEST AND SOUTH SIDES) TO NORTHEAST. ...
5. VIEW OF FRONT (WEST AND SOUTH SIDES) TO NORTHEAST. VIEW TO NORTHEAST. NOTE THAT LARGE TREES PREVENT MORE COMPLETE VIEW FROM BETTER ANGLE. FOR MORE COMPLETE VIEW, SEE PHOTOGRAPHIC COPY OF 1916 PHOTO, NO. ID-17-C-35. - Boise Project, Boise Project Office, 214 Broadway, Boise, Ada County, ID
Emission Patterns of Solar Type III Radio Bursts: Stereoscopic Observations
NASA Technical Reports Server (NTRS)
Thejappa, G.; MacDowall, R.; Bergamo, M.
2012-01-01
Simultaneous observations of solar type III radio bursts obtained by the STEREO A, STEREO B, and WIND spacecraft at low frequencies from different vantage points in the ecliptic plane are used to determine their directivity. The heliolongitudes of the sources of these bursts, estimated at different frequencies by assuming that they are located on the Parker spiral magnetic field lines emerging from the associated active regions into the spherically symmetric solar atmosphere, and the heliolongitudes of the spacecraft are used to estimate the viewing angle, which is the angle between the direction of the magnetic field at the source and the line connecting the source to the spacecraft. The normalized peak intensities at each spacecraft, R_j = I_j / Σ_j I_j (the subscript j corresponds to the spacecraft STEREO A, B, and WIND), which are defined as the directivity factors, are determined using the time profiles of the type III bursts. It is shown that the distribution of the viewing angles divides the type III bursts into (1) bursts emitting into a very narrow cone centered around the tangent to the magnetic field, with an angular width of approximately 2 deg, and (2) bursts emitting into a wider cone with angular width spanning from approximately -100 deg to approximately 100 deg. The plots of the directivity factors versus the viewing angles of the sources from all three spacecraft indicate that the type III emissions are very intense along the tangent to the spiral magnetic field lines at the source and steadily fall as the viewing angles increase to higher values. The comparison of these emission patterns with the computed distributions of the ray trajectories indicates that the intense bursts visible in a narrow range of angles around the magnetic field directions are probably emitted in the fundamental mode, whereas the relatively weaker bursts visible over a wide range of angles are probably emitted in the harmonic mode.
Stocco, Antonio; Su, Ge; Nobili, Maurizio; In, Martin; Wang, Dayang
2014-09-28
Here multiple angle of incidence ellipsometry was successfully applied to in situ assess the contact angle and surface coverage of gold nanoparticles as small as 18 nm, coated with stimuli-responsive polymers, at water-oil and water-air interfaces in the presence of NaCl and NaOH, respectively. The interfacial adsorption of the nanoparticles was found to be very slow and took days to reach a fairly low surface coverage. For water-oil interfaces, in situ nanoparticle contact angles agree with the macroscopic equilibrium contact angles of planar gold surfaces with the same polymer coatings, whilst for water-air interfaces, significant differences have been observed.
59. VIEW FROM THE NORTHEAST IN THE NORTHEAST QUADRANT. GENERAL ...
59. VIEW FROM THE NORTHEAST IN THE NORTHEAST QUADRANT. GENERAL VIEW OF THE RIGHT FLANK WALL. RIGHT SHOULDER ANGLE IS INCLUDED ON THE RIGHT SIDE OF THE PHOTOGRAPH. - Fort Sumter, Charleston, Charleston County, SC
Atmospheric Science Data Center
2013-04-16
Unique Views of a Shattered Ice Shelf: views of the breakup of the northern section of the Larsen B ice shelf are shown in this image pair from the Multi-angle Imaging ...
WE-AB-209-06: Dynamic Collimator Trajectory Algorithm for Use in VMAT Treatment Deliveries
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacDonald, L; Thomas, C; Syme, A
2016-06-15
Purpose: To develop advanced dynamic collimator positioning algorithms for optimal beam's-eye-view (BEV) fitting of targets in VMAT procedures, including multiple-metastases stereotactic radiosurgery procedures. Methods: A trajectory algorithm was developed which can dynamically modify the angle of the collimator as a function of VMAT control point to provide optimized collimation of the target volume(s). Central to this algorithm is a concept denoted "whitespace," defined as the area within the jaw-defined BEV field that is outside the PTV and not shielded by the MLC when fit to the PTV. Calculating whitespace at all collimator angles and every control point, a two-dimensional topographical map depicting the tightness-of-fit of the MLC was generated. A variety of novel searching algorithms identified a number of candidate trajectories of continuous collimator motion. Ranking these candidate trajectories according to their accrued whitespace value produced an optimal solution for navigation of this map. Results: All trajectories were normalized to the minimum possible accrued whitespace (i.e., calculated without consideration of collimator motion constraints). On an acoustic neuroma case, a random-walk algorithm generated a trajectory with 151% whitespace; a random walk including a mandatory anchor point improved this to 148%; a gradient search produced a trajectory with 137%; and a bi-directional gradient search generated a trajectory with 130% whitespace. For comparison, fixed collimator angles of 30° and 330° accumulated 272% and 228% of whitespace, respectively. The algorithm was tested on a clinical case with two metastases (single isocentre) and identified collimator angles that allow for simultaneous irradiation of the PTVs while minimizing normal tissue irradiation. Conclusion: Dynamic collimator trajectories have the potential to improve VMAT deliveries through increased efficiency and reduced normal tissue dose, especially in treatment of multiple cranial metastases, without significant safety concerns that would hinder immediate clinical implementation.
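A sketch of one way to search such a whitespace map: dynamic programming over a (control point x collimator angle) cost grid finds a continuous trajectory that minimizes accumulated whitespace subject to a per-control-point rotation limit. The map below is random stand-in data, and this is a plausible alternative search, not a reproduction of the gradient-search or random-walk variants described above.

```python
import numpy as np

def min_whitespace_trajectory(whitespace, max_step=5):
    """whitespace: (n_control_points, n_collimator_angles) cost map.
    Returns the collimator angle index per control point that minimizes total
    whitespace, allowing at most `max_step` index changes between neighbours."""
    n_cp, n_ang = whitespace.shape
    cost = whitespace[0].copy()
    back = np.zeros((n_cp, n_ang), dtype=int)
    for i in range(1, n_cp):
        new_cost = np.full(n_ang, np.inf)
        for a in range(n_ang):
            lo, hi = max(0, a - max_step), min(n_ang, a + max_step + 1)
            prev = int(np.argmin(cost[lo:hi])) + lo     # best reachable predecessor
            new_cost[a] = cost[prev] + whitespace[i, a]
            back[i, a] = prev
        cost = new_cost
    # Trace back the optimal trajectory
    traj = np.zeros(n_cp, dtype=int)
    traj[-1] = int(np.argmin(cost))
    for i in range(n_cp - 1, 0, -1):
        traj[i - 1] = back[i, traj[i]]
    return traj, float(cost[traj[-1]])

# Stand-in whitespace map: 180 VMAT control points x 180 candidate collimator angles
rng = np.random.default_rng(0)
wmap = rng.random((180, 180))
trajectory, total_whitespace = min_whitespace_trajectory(wmap, max_step=3)
```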
Zhang, Zhengyan; Zhang, Jianyun; Zhou, Qingsong; Li, Xiaobo
2018-03-07
In this paper, we consider the problem of tracking the directions of arrival (DOA) and directions of departure (DOD) of multiple targets for bistatic multiple-input multiple-output (MIMO) radar, and a high-precision angle tracking algorithm is proposed. First, the linear relationship between the covariance matrix difference and the angle difference between adjacent time instants is obtained through three approximate relations. Then, the relationships among the elements of the covariance matrix difference are derived. On this basis, the performance of the algorithm is improved by averaging the covariance matrix elements. Finally, the least-squares method is used to estimate the DOD and DOA. The algorithm achieves automatic association of the angles and provides better performance than the adaptive asymmetric joint diagonalization (AAJD) algorithm. Simulation results demonstrate the effectiveness of the proposed algorithm, which provides technical support for the practical application of MIMO radar.
Voyager spacecraft images of Jupiter and Saturn
NASA Technical Reports Server (NTRS)
Birnbaum, M. M.
1982-01-01
The Voyager imaging system is described, noting that it is made up of a narrow-angle and a wide-angle TV camera, each in turn consisting of optics, a filter wheel and shutter assembly, a vidicon tube, and an electronics subsystem. The narrow-angle camera has a focal length of 1500 mm; its field of view is 0.42 deg and its focal ratio is f/8.5. For the wide-angle camera, the focal length is 200 mm, the field of view 3.2 deg, and the focal ratio of f/3.5. Images are exposed by each camera through one of eight filters in the filter wheel on the photoconductive surface of a magnetically focused and deflected vidicon having a diameter of 25 mm. The vidicon storage surface (target) is a selenium-sulfur film having an active area of 11.14 x 11.14 mm; it holds a frame consisting of 800 lines with 800 picture elements per line. Pictures of Jupiter, Saturn, and their moons are presented, with short descriptions given of the area being viewed.
Rotationally Invariant Image Representation for Viewing Direction Classification in Cryo-EM
Zhao, Zhizhen; Singer, Amit
2014-01-01
We introduce a new rotationally invariant viewing angle classification method for identifying, among a large number of cryo-EM projection images, similar views without prior knowledge of the molecule. Our rotationally invariant features are based on the bispectrum. Each image is denoised and compressed using steerable principal component analysis (PCA) such that rotating an image is equivalent to phase shifting the expansion coefficients. Thus we are able to extend the theory of bispectrum of 1D periodic signals to 2D images. The randomized PCA algorithm is then used to efficiently reduce the dimensionality of the bispectrum coefficients, enabling fast computation of the similarity between any pair of images. The nearest neighbors provide an initial classification of similar viewing angles. In this way, rotational alignment is only performed for images with their nearest neighbors. The initial nearest neighbor classification and alignment are further improved by a new classification method called vector diffusion maps. Our pipeline for viewing angle classification and alignment is experimentally shown to be faster and more accurate than reference-free alignment with rotationally invariant K-means clustering, MSA/MRA 2D classification, and their modern approximations. PMID:24631969
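A small numerical illustration of the rotational invariance exploited here, assuming the image has already been reduced to a vector of angular Fourier (steerable) coefficients c_k, so that an in-plane rotation by alpha multiplies c_k by exp(i k alpha); the bispectrum then cancels the unknown rotation. The coefficients below are random stand-ins, not steerable PCA output.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 8                                             # number of angular frequencies kept
c = rng.normal(size=K) + 1j * rng.normal(size=K)  # stand-in steerable coefficients
alpha = 0.7                                       # unknown in-plane rotation (radians)
c_rot = c * np.exp(1j * np.arange(K) * alpha)     # rotation = per-frequency phase shift

def bispectrum(coeffs):
    """b(k1, k2) = c[k1] * c[k2] * conj(c[k1 + k2]) over all valid index pairs."""
    n = len(coeffs)
    b = []
    for k1 in range(n):
        for k2 in range(n - k1):
            b.append(coeffs[k1] * coeffs[k2] * np.conj(coeffs[k1 + k2]))
    return np.array(b)

# The bispectra of the original and rotated coefficients agree to round-off error
print(np.max(np.abs(bispectrum(c) - bispectrum(c_rot))))
```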
2015-08-20
This view from NASA Cassini spacecraft looks toward Saturn icy moon Dione, with giant Saturn and its rings in the background, just prior to the mission final close approach to the moon on August 17, 2015. At lower right is the large, multi-ringed impact basin named Evander, which is about 220 miles (350 kilometers) wide. The canyons of Padua Chasma, features that form part of Dione's bright, wispy terrain, reach into the darkness at left. Imaging scientists combined nine visible light (clear spectral filter) images to create this mosaic view: eight from the narrow-angle camera and one from the wide-angle camera, which fills in an area at lower left. The scene is an orthographic projection centered on terrain at 0.2 degrees north latitude, 179 degrees west longitude on Dione. An orthographic view is most like the view seen by a distant observer looking through a telescope. North on Dione is up. The view was acquired at distances ranging from approximately 106,000 miles (170,000 kilometers) to 39,000 miles (63,000 kilometers) from Dione and at a sun-Dione-spacecraft, or phase, angle of 35 degrees. Image scale is about 1,500 feet (450 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA19650
Design of system calibration for effective imaging
NASA Astrophysics Data System (ADS)
Varaprasad Babu, G.; Rao, K. M. M.
2006-12-01
A CCD-based characterization setup comprising a light source, a CCD linear array, electronics for signal conditioning/amplification, and a PC interface has been developed to generate images at varying densities and at multiple view angles. This arrangement is used to simulate and evaluate images produced by the super-resolution technique with multiple overlaps and yaw-rotated images at different view angles. The setup also generates images at different densities to analyze the response of the detector port by port. The light intensity produced by the source needs to be calibrated for proper imaging by the high-sensitivity CCD detector over the FOV. One approach is to design a complex integrating-sphere arrangement, which is costly for such applications. Another approach is to provide intensity feedback correction, wherein the current through the lamp is controlled in a closed loop; this method is generally used where the light source is a point source. A third method is to control the exposure time inversely to the lamp variations when the lamp intensity cannot be controlled; the light intensity at the start of each line is sampled and a correction factor is applied to the full line. A fourth method is to provide correction through a look-up table (LUT), in which the responses of all detectors are normalized through a digital transfer function. A fifth method is a light-line arrangement in which light from a single source is distributed through multiple fiber-optic cables arranged in a line; this is generally applicable and economical only for small swath widths. In our application, a new method is used wherein an inverse multi-density filter is designed that provides effective calibration over the full swath even at low light intensities. The light intensity along the length is measured, an inverse density is computed, and a correction filter is generated and implemented in the CCD-based characterization setup. This paper describes these techniques for the design and implementation of system calibration for effective imaging, to produce better quality data products, especially while handling high-resolution data.
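A minimal sketch of the look-up-table (digital transfer function) style of correction mentioned above: per-detector gain and offset are derived from dark and flat-field exposures and then applied to normalize each line of raw CCD data. The array sizes, target level, and illumination profile are illustrative assumptions, not values from the setup described.

```python
import numpy as np

def build_lut(dark, flat, target_level=4000.0):
    """Per-detector gain/offset that maps the flat-field response to a common level."""
    gain = target_level / np.clip(flat - dark, 1e-6, None)
    offset = dark
    return gain, offset

def apply_lut(raw_line, gain, offset):
    return (raw_line - offset) * gain

# Illustrative data: a 2048-pixel CCD line with non-uniform illumination and offsets
rng = np.random.default_rng(2)
profile = 0.8 + 0.2 * np.sin(np.linspace(0, np.pi, 2048))
dark = 100 + rng.normal(0, 2, 2048)
flat = dark + 3000 * profile
gain, offset = build_lut(dark, flat)

raw = dark + 1500 * profile
corrected = apply_lut(raw, gain, offset)   # flat response of ~2000 counts across the line
```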
Perceived orientation, spatial layout and the geometry of pictures
NASA Technical Reports Server (NTRS)
Goldstein, E. Bruce
1989-01-01
The purpose is to discuss the role of geometry in determining the perception of spatial layout and perceived orientation in pictures viewed at an angle. This discussion derives from Cutting's (1988) suggestion, based on his analysis of some of the author's data (Goldstein, 1987), that the changes in perceived orientation that occur when pictures are viewed at an angle can be explained in terms of geometrically produced changes in the picture's virtual space.
Variation in spectral response of soybeans with respect to illumination, view, and canopy geometry
NASA Technical Reports Server (NTRS)
Ranson, K. J.; Biehl, L. L.; Bauer, M. E.
1984-01-01
Comparisons of the spectral response for incomplete (well-defined row structure) and complete (overlapping row structure) canopies of soybeans indicated a greater dependence on Sun and view geometry for the incomplete canopies. Red and near-IR reflectance for the incomplete canopy decreased as solar zenith angle increased for a nadir view angle until the soil between the plant rows was completely shaded. Thereafter, for increasing solar zenith angle, the red reflectance leveled off and the near-IR reflectance increased. A 'hot spot' effect was evident for the red and near-IR reflectance factors. The 'hot spot' effect was more pronounced for the red band based on relative reflectance value changes. The ratios of off-nadir to nadir acquired data reveal that off-nadir red band reflectance factors more closely approximated straight-down measurements for time periods away from solar noon. Normalized difference generally approximated straight-down measurements during the middle portion of the day.
Effect of structured visual environments on apparent eye level.
Stoper, A E; Cohen, M M
1989-11-01
Each of 12 subjects set a binocularly viewed target to apparent eye level; the target was projected on the rear wall of an open box, the floor of which was horizontal or pitched up and down at angles of 7.5 degrees and 15 degrees. Settings of the target were systematically biased by 60% of the pitch angle when the interior of the box was illuminated, but by only 5% when the interior of the box was darkened. Within-subjects variability of the settings was less under illuminated viewing conditions than in the dark, but was independent of box pitch angle. In a second experiment, 11 subjects were tested with an illuminated pitched box, yielding biases of 53% and 49% for binocular and monocular viewing conditions, respectively. The results are discussed in terms of individual and interactive effects of optical, gravitational, and extraretinal eye-position information in determining judgements of eye level.
Image quality improvement in MDCT cardiac imaging via SMART-RECON method
NASA Astrophysics Data System (ADS)
Li, Yinsheng; Cao, Ximiao; Xing, Zhanfeng; Sun, Xuguang; Hsieh, Jiang; Chen, Guang-Hong
2017-03-01
Coronary CT angiography (CCTA) is a challenging imaging task currently limited by the achievable temporal resolution of modern Multi-Detector CT (MDCT) scanners. In this paper, the recently proposed SMART-RECON method has been applied in MDCT-based CCTA imaging to improve image quality without any prior knowledge of cardiac motion. After the prospective ECG-gated data acquisition from a short-scan angular span, the acquired data were sorted into several sub-sectors of view angles, each corresponding to one-fourth of the short-scan angular range. Information about the cardiac motion was thus encoded into the data in each view-angle sub-sector. The SMART-RECON algorithm was then applied to jointly reconstruct several image volumes, each of which is temporally consistent with the data acquired in the corresponding view-angle sub-sector. Extensive numerical simulations were performed to validate the proposed technique and investigate its performance dependence.
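To make the sorting step concrete, the short Python sketch below partitions a short-scan set of view angles into four equal sub-sectors, as described above. The fan angle, number of views, and variable names are assumptions for illustration only, not parameters from the paper.

```python
import numpy as np

# Sketch only: split a short-scan set of view angles into four equal
# sub-sectors. Fan angle and number of views are assumed values.
fan_angle = np.deg2rad(55.0)              # assumed detector fan angle
short_scan = np.pi + fan_angle            # short scan = 180 deg + fan angle
view_angles = np.linspace(0.0, short_scan, 984, endpoint=False)

n_sectors = 4
sector_width = short_scan / n_sectors
sector_index = np.minimum((view_angles // sector_width).astype(int), n_sectors - 1)

for k in range(n_sectors):
    views = np.flatnonzero(sector_index == k)
    print(f"sub-sector {k}: views {views[0]}..{views[-1]}")
```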
Soybean canopy reflectance as a function of view and illumination geometry
NASA Technical Reports Server (NTRS)
Ranson, K. J.; Vanderbilt, V. C.; Biehl, L. L.; Robinson, B. F.; Bauer, M. E.
1981-01-01
Reflectances were calculated from measurements at four wavelength bands through eight view azimuth and seven view zenith directions, for various solar zenith and azimuth angles over portions of three days, in an experimental characterization of a soybean field by means of its reflectances and physical and agronomic attributes. Results indicate that the distribution of reflectance from a soybean field is a function of the solar illumination and viewing geometry, wavelength, and row direction, as well as the state of canopy development. Shadows between rows were found to affect visible wavelength band reflectance to a greater extent than near-IR reflectance. A model describing reflectance variation as a function of projected solar and viewing angles is proposed, which approximates the visible wavelength band reflectance variations of a canopy with a well-defined row structure.
Enceladus' 101 Geysers: Phantoms? Hardly
NASA Astrophysics Data System (ADS)
Porco, C.; Nimmo, F.; DiNino, D.
2015-12-01
The discovery by the Cassini mission of present-day geysering activity capping the southern hemisphere of Saturn's moon Enceladus (e.g., Porco, C. C. et al. Science 311, 1393, 2006) and sourced within a subsurface body of liquid water (e.g., Postberg, F. et al. Nature 459, 1098, 2009; Porco, C. C. et al. AJ 148, 45, 2014, hereafter PEA), laced with organic compounds (e.g., Waite, J. H. et al. Science 311, 1419, 2006), has been a significant one, with far-reaching astrobiological implications. In an extensive Cassini imaging survey of the moon's south polar terrain (SPT), PEA identified 101 distinct, narrow jets of small icy particles erupting, with varying strengths, from the four major fractures crossing the SPT. A sufficient spread in stereo angles of the 107 images used in that work allowed (in some cases, many) pair-wise triangulations to be computed; precise surface locations were derived for 98 jets. Recently, it has been claimed (Spitale, J. N. et al. Nature 521, 57, 2015) that the majority of the geysers are not true discrete jets, but are "phantoms" that appear in shallow-angle views of a dense continuous curtain of material with acute bends in it. These authors also concluded that the majority of the eruptive material is not in the form of jets but in the form of fissure-style 'curtain' eruptions. We argue the contrary: because almost all the moon's geysers were identified by PEA using multiple images with favorable viewing geometries, the vast majority of them, and likely all, are discrete jets. Specifically, out of 98 jets, no fewer than 90 to 95 were identified with viewing geometries that preclude the appearance of phantoms. How the erupting solids (i.e., icy particles) seen in Cassini images are partitioned between jets and inter-jet curtains is still an open question.
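The pair-wise triangulations mentioned above amount to intersecting two viewing rays in three dimensions. The Python sketch below shows one standard way to do this (a least-squares closest point between two rays); the observer positions and ray directions are made up for the example and are not data from the Cassini survey.

```python
import numpy as np

# Illustrative geometry only: least-squares "intersection" of two viewing rays,
# the kind of pair-wise computation used to locate a jet source from two images
# taken with different viewing geometries.
def triangulate(p1, d1, p2, d2):
    """Closest point to two rays p_i + t_i * d_i (directions need not be unit)."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Normal equations for t1, t2 minimizing |(p1 + t1 d1) - (p2 + t2 d2)|^2
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Two hypothetical observer positions and ray directions toward one jet source
print(triangulate(np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.2]),
                  np.array([5.0, 0.0, 0.0]), np.array([-0.2, 1.0, 0.2])))
```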
Li, Xiaowei; Huang, Lingling; Tan, Qiaofeng; Bai, Benfeng; Jin, Guofan
2011-03-28
A semi-circular plasmonic launcher integrated with a dielectric-loaded surface plasmon-polariton waveguide (DLSPPW) is proposed and analyzed theoretically; it can focus and efficiently couple the excited surface plasmon polaritons (SPPs) into the DLSPPW via a spatial field distribution that is highly matched to the waveguide mode in the focal plane. By tuning the incident angle or polarization of the illuminating beam, it is shown that the launcher may be conveniently used as a switch or a multiplexer, which has potential applications in plasmonic circuitry. Furthermore, from an application point of view, it is analyzed how the coupling performance of the launcher can be further improved by employing multiple semi-circular slits.
NASA Technical Reports Server (NTRS)
Bhatt, Rajendra; Doelling, David R.; Angal, Amit; Xiong, Xiaoxiong; Scarino, Benjamin; Gopalan, Arun; Haney, Conor; Wu, Aisheng
2017-01-01
MODIS consists of a cross-track, two-sided scan mirror whose reflectance is not uniform but is a function of angle of incidence (AOI). This feature, known as response versus scan-angle (RVS), was characterized for all reflective solar bands of both MODIS instruments prior to launch. The RVS characteristic has changed on orbit, and it must be tracked precisely over time to ensure the quality of MODIS products. The MODIS characterization support team utilizes the onboard calibrators and the earth view responses from multiple pseudoinvariant desert sites to track the RVS changes at different AOIs. The drawback of using deserts is the assumption that these sites are radiometrically stable during the monitoring period. In addition, the 16-day orbit repeat cycle of MODIS allows for only a limited set of AOIs over a given desert. We propose a novel and robust approach of characterizing the MODIS RVS using tropical deep convective clouds (DCC). The method tracks the monthly DCC response at specified sets of AOIs to compute the temporal RVS changes. Initial results have shown that the Aqua-MODIS collection 6 band 1 level 1B radiances show considerable residual RVS dependencies, with long-term drifts up to 2.3% at certain AOIs.
3D bubble reconstruction using multiple cameras and space carving method
NASA Astrophysics Data System (ADS)
Fu, Yucheng; Liu, Yang
2018-07-01
An accurate measurement of bubble shape and size has a significant value in understanding the behavior of bubbles that exist in many engineering applications. Past studies usually use one or two cameras to estimate bubble volume, surface area, among other parameters. The 3D bubble shape and rotation angle are generally not available in these studies. To overcome this challenge and obtain more detailed information of individual bubbles, a 3D imaging system consisting of four high-speed cameras is developed in this paper, and the space carving method is used to reconstruct the 3D bubble shape based on the recorded high-speed images from different view angles. The proposed method can reconstruct the bubble surface with minimal assumptions. A benchmarking test is performed in a 3 cm × 1 cm rectangular channel with stagnant water. The results show that the newly proposed method can measure the bubble volume with an error of less than 2% compared with the syringe reading. The conventional two-camera system has an error around 10%. The one-camera system has an error greater than 25%. The visualization of a 3D bubble rising demonstrates the wall influence on bubble rotation angle and aspect ratio. This also explains the large error that exists in the single camera measurement.
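Because the reconstruction above relies on space carving, the following Python sketch shows the core idea under simplified assumptions: a voxel is kept only if it projects inside the bubble silhouette in every camera view. The orthographic camera models and the synthetic spherical silhouette are illustrative stand-ins, not the four-camera calibration used in the paper.

```python
import numpy as np

# Minimal space-carving sketch (not the paper's implementation): keep a voxel
# only if its projection lies inside the silhouette seen by every camera.
def carve(voxels, cameras, silhouettes):
    """voxels: (N,3) points; cameras: list of project(points)->(N,2) pixel
    coordinates; silhouettes: list of boolean images. Returns kept voxels."""
    keep = np.ones(len(voxels), dtype=bool)
    for project, sil in zip(cameras, silhouettes):
        uv = np.round(project(voxels)).astype(int)
        h, w = sil.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = sil[uv[inside, 1], uv[inside, 0]]
        keep &= hit
    return voxels[keep]

# Toy example: a spherical "bubble" seen by two orthographic cameras
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 40)] * 3), -1).reshape(-1, 3)
sil = np.zeros((100, 100), dtype=bool)
yy, xx = np.mgrid[0:100, 0:100]
sil[(xx - 50) ** 2 + (yy - 50) ** 2 < 25 ** 2] = True

def cam_front(p):   # orthographic camera looking along +z
    return p[:, :2] * 50 + 50

def cam_top(p):     # orthographic camera looking along +y
    return p[:, [0, 2]] * 50 + 50

kept = carve(grid, [cam_front, cam_top], [sil, sil])
print(len(kept), "voxels remain after carving")
```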
NASA Astrophysics Data System (ADS)
Bhatt, Rajendra; Doelling, David R.; Angal, Amit; Xiong, Xiaoxiong; Scarino, Benjamin; Gopalan, Arun; Haney, Conor; Wu, Aisheng
2017-01-01
MODIS consists of a cross-track, two-sided scan mirror, whose reflectance is not uniform but is a function of angle of incidence (AOI). This feature, known as response versus scan-angle (RVS), was characterized for all reflective solar bands of both MODIS instruments prior to launch. The RVS characteristic has changed on orbit, which must be tracked precisely over time to ensure the quality of MODIS products. The MODIS characterization support team utilizes the onboard calibrators and the earth view responses from multiple pseudoinvariant desert sites to track the RVS changes at different AOIs. The drawback of using deserts is the assumption that these sites are radiometrically stable during the monitoring period. In addition, the 16-day orbit repeat cycle of MODIS allows for only a limited set of AOIs over a given desert. We propose a novel and robust approach of characterizing the MODIS RVS using tropical deep convective clouds (DCC). The method tracks the monthly DCC response at specified sets of AOIs to compute the temporal RVS changes. Initial results have shown that the Aqua-MODIS collection 6 band 1 level 1B radiances show considerable residual RVS dependencies, with long-term drifts up to 2.3% at certain AOIs.
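As a rough illustration of the trending idea described above, the Python sketch below fits a linear drift to synthetic monthly DCC responses in a few AOI bins. The AOI bins, record length, and drift magnitudes are invented for the example and are not MODIS results.

```python
import numpy as np

# Illustrative sketch only (assumed data layout): track the monthly mean DCC
# response in a few AOI bins and fit a per-bin linear drift, the basic idea
# behind using DCC as an invariant target for RVS trending.
rng = np.random.default_rng(1)
months = np.arange(120)                 # 10 years of monthly composites
aoi_bins = [15.0, 30.0, 45.0]           # assumed AOI bins (degrees)

# Synthetic normalized DCC responses with a small AOI-dependent drift
responses = {aoi: 1.0 - 1e-4 * (aoi / 45.0) * months
                  + 0.004 * rng.standard_normal(months.size)
             for aoi in aoi_bins}

for aoi, resp in responses.items():
    slope, intercept = np.polyfit(months, resp, 1)
    drift_pct = 100.0 * slope * months[-1] / intercept
    print(f"AOI {aoi:4.1f} deg: drift over record ~ {drift_pct:.2f}%")
```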
64. VIEW FROM THE NORTHEAST IN THE NORTHEAST QUADRANT. DETAIL ...
64. VIEW FROM THE NORTHEAST IN THE NORTHEAST QUADRANT. DETAIL VIEW OF THE RIGHT FACE. A PORTION OF THE RIGHT SHOULDER ANGLE IS INCLUDED ON THE LEFT-SIDE OF THE IMAGE, WITH SCALE. - Fort Sumter, Charleston, Charleston County, SC
Alar-columellar and lateral nostril changes following tongue-in-groove rhinoplasty.
Shah, Ajul; Pfaff, Miles; Kinsman, Gianna; Steinbacher, Derek M
2015-04-01
Repositioning the medial crura cephalically onto the caudal septum (tongue-in-groove; TIG) allows alteration of the columella, ala, and nasal tip to address alar-columellar disproportion as seen from the lateral view. To date, quantitative analyses of nostril dimension, the alar-columellar relationship, and nasal tip changes following the TIG rhinoplasty technique have not been described. The present study aims to evaluate post-operative lateral morphometric changes following TIG. Pre- and post-operative lateral views of a series of consecutive patients who underwent TIG rhinoplasty were produced from 3D images at multiple time points (≤2 weeks, 4-10 weeks, and >10 weeks post-operatively) for analysis. The 3D images were converted to 2D and set to scale. Exposed lateral nostril area, alar-columellar disproportion (divided into superior and inferior heights), nasolabial angle, nostril height, and nostril length were calculated and statistically analyzed using a pairwise t test. A P ≤ 0.05 was considered statistically significant. Ninety-four lateral views were analyzed from 20 patients (16 females; median age: 31.8 years). One patient had a history of current tobacco cigarette use. Lateral nostril area decreased significantly at all post-operative time points. Alar-columellar disproportion was reduced following TIG at all time points. The nasolabial angle increased significantly post-operatively at ≤2 weeks, 4-10 weeks, and >10 weeks. Nostril height and nostril length decreased at all post-operative time points. Morphometric analysis reveals a reduction in alar-columellar disproportion and lateral nostril show following TIG rhinoplasty. Tip rotation, as a function of nasolabial angle, also increased. These results provide quantitative substantiation for qualitative descriptions attributed to the TIG technique. Future studies will focus on area and volumetric measurements and assessment of long-term stability.
He, Dan; Kuhn, David; Parida, Laxmi
2016-06-15
Given a set of biallelic molecular markers, such as SNPs, with genotype values encoded numerically on a collection of plant, animal or human samples, the goal of genetic trait prediction is to predict the quantitative trait values by simultaneously modeling all marker effects. Genetic trait prediction is usually represented as linear regression models. In many cases, for the same set of samples and markers, multiple traits are observed. Some of these traits might be correlated with each other. Therefore, modeling all the multiple traits together may improve the prediction accuracy. In this work, we view the multitrait prediction problem from a machine learning angle: as either a multitask learning problem or a multiple output regression problem, depending on whether different traits share the same genotype matrix or not. We then adapted multitask learning algorithms and multiple output regression algorithms to solve the multitrait prediction problem. We proposed a few strategies to improve the least square error of the prediction from these algorithms. Our experiments show that modeling multiple traits together could improve the prediction accuracy for correlated traits. The programs we used are either public or directly from the referred authors, such as MALSAR (http://www.public.asu.edu/~jye02/Software/MALSAR/) package. The Avocado data set has not been published yet and is available upon request. dhe@us.ibm.com. © The Author 2016. Published by Oxford University Press.
Touch-screen tablet user configurations and case-supported tilt affect head and neck flexion angles.
Young, Justin G; Trudeau, Matthieu; Odell, Dan; Marinelli, Kim; Dennerlein, Jack T
2012-01-01
The aim of this study was to determine how head and neck postures vary when using two media tablet (slate) computers in four common user configurations. Fifteen experienced media tablet users completed a set of simulated tasks with two media tablets in four typical user configurations. The four configurations were: on the lap and held with the user's hands, on the lap and in a case, on a table and in a case, and on a table and in a case set at a high angle for watching movies. An infra-red LED marker based motion analysis system measured head/neck postures. Head and neck flexion significantly varied across the four configurations and across the two tablets tested. Head and neck flexion angles during tablet use were greater, in general, than angles previously reported for desktop and notebook computing. Postural differences between tablets were driven by case designs, which provided significantly different tilt angles, while postural differences between configurations were driven by gaze and viewing angles. Head and neck posture during tablet computing can be improved by placing the tablet higher to avoid low gaze angles (i.e. on a table rather than on the lap) and through the use of a case that provides optimal viewing angles.
Large size three-dimensional video by electronic holography using multiple spatial light modulators
Sasaki, Hisayuki; Yamamoto, Kenji; Wakunami, Koki; Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori
2014-01-01
In this paper, we propose a new method of using multiple spatial light modulators (SLMs) to increase the size of three-dimensional (3D) images that are displayed using electronic holography. The scalability of images produced by the previous method had an upper limit that was derived from the path length of the image-readout part. We were able to produce larger colour electronic holographic images with a newly devised space-saving image-readout optical system for multiple reflection-type SLMs. This optical system is designed so that the path length of the image-readout part is half that of the previous method. It consists of polarization beam splitters (PBSs), half-wave plates (HWPs), and polarizers. We used 16 (4 × 4) 4K×2K-pixel SLMs for displaying holograms. The experimental device we constructed was able to perform 20 fps video reproduction in colour of full-parallax holographic 3D images with a diagonal image size of 85 mm and a horizontal viewing-zone angle of 5.6 degrees. PMID:25146685
Large size three-dimensional video by electronic holography using multiple spatial light modulators.
Sasaki, Hisayuki; Yamamoto, Kenji; Wakunami, Koki; Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori
2014-08-22
In this paper, we propose a new method of using multiple spatial light modulators (SLMs) to increase the size of three-dimensional (3D) images that are displayed using electronic holography. The scalability of images produced by the previous method had an upper limit that was derived from the path length of the image-readout part. We were able to produce larger colour electronic holographic images with a newly devised space-saving image-readout optical system for multiple reflection-type SLMs. This optical system is designed so that the path length of the image-readout part is half that of the previous method. It consists of polarization beam splitters (PBSs), half-wave plates (HWPs), and polarizers. We used 16 (4 × 4) 4K×2K-pixel SLMs for displaying holograms. The experimental device we constructed was able to perform 20 fps video reproduction in colour of full-parallax holographic 3D images with a diagonal image size of 85 mm and a horizontal viewing-zone angle of 5.6 degrees.
Photographic measurement of head and cervical posture when viewing mobile phone: a pilot study.
Guan, Xiaofei; Fan, Guoxin; Wu, Xinbo; Zeng, Ying; Su, Hang; Gu, Guangfei; Zhou, Qi; Gu, Xin; Zhang, Hailong; He, Shisheng
2015-12-01
With the dramatic growth of mobile phone usage, concerns have been raised with regard to the adverse health effects of mobile phone on spinal posture. The aim of this study was to determine the head and cervical postures by photogrammetry when viewing the mobile phone screen, compared with those in neutral standing posture. A total of 186 subjects (81 females and 105 males) aged from 17 to 31 years old participated in this study. Subjects were instructed to stand neutrally and using mobile phone as in daily life. Using a photographic method, the sagittal head and cervical postures were assessed by head tilt angle, neck tilt angle, forward head shift and gaze angle. The photographic method showed a high intra-rater and inter-rater reliability in measuring the sagittal posture of cervical spine and gaze angle (ICCs ranged from 0.80 to 0.99). When looking at mobile phone, the head tilt angle significantly increased (from 74.55° to 95.22°, p = 0.000) and the neck angle decreased (from 54.68° to 38.77°, p = 0.000). The forward head posture was also confirmed by the significantly increased head shift (from 10.90 to 13.85 cm, p = 0.000). The posture assumed in mobile phone use was significantly correlated with neutral posture (p < 0.05). Males displayed a more forward head posture than females (p < 0.05). The head tilt angle was positively correlated with the gaze angle (r = 0.616, p = 0.000), while the neck tilt angle was negatively correlated with the gaze angle (r = -0.628, p = 0.000). Photogrammetry is a reliable, quantitative method to evaluate the head and cervical posture during mobile phone use. Compared to neutral standing, subjects display a more forward head posture when viewing the mobile phone screen, which is correlated with neutral posture, gaze angle and gender. Future studies will be needed to investigate a dose-response relationship between mobile phone use and assumed posture.
Wide-angle Optical Telescope for the EUSO Experiments
NASA Technical Reports Server (NTRS)
Hillman, L. W.; Takahaski, Y.; Zuccaro, A.; Lamb, D.; Pitalo, K.; Lopado, A.; Keys, A.
2003-01-01
Future space-based air shower experiments, including the planned Extreme Universe Space Observatory (EUSO) mission, require a wide-angle telescope for the near-UV wavelengths 330-400 nm. The widest possible target aperture of Earth's atmosphere, greater than 10^5 square kilometers sr, can be viewed within a field of view of 30 degrees from space. EUSO's optical design is required to be compact, being constrained by the mass and diameter allocated for use in space. Two double-sided Fresnel lenses with 2.5-m diameter are chosen for the baseline design. This design satisfies an imaging resolution of 0.1 degree over the 30-degree field of view.
NASA Astrophysics Data System (ADS)
Roosjen, Peter P. J.; Brede, Benjamin; Suomalainen, Juha M.; Bartholomeus, Harm M.; Kooistra, Lammert; Clevers, Jan G. P. W.
2018-04-01
In addition to single-angle reflectance data, multi-angular observations can be used as an additional information source for the retrieval of properties of an observed target surface. In this paper, we studied the potential of multi-angular reflectance data for the improvement of leaf area index (LAI) and leaf chlorophyll content (LCC) estimation by numerical inversion of the PROSAIL model. The potential for improvement of LAI and LCC was evaluated for both measured data and simulated data. The measured data was collected on 19 July 2016 by a frame-camera mounted on an unmanned aerial vehicle (UAV) over a potato field, where eight experimental plots of 30 × 30 m were designed with different fertilization levels. Dozens of viewing angles, covering the hemisphere up to around 30° from nadir, were obtained by a large forward and sideways overlap of collected images. Simultaneously to the UAV flight, in situ measurements of LAI and LCC were performed. Inversion of the PROSAIL model was done based on nadir data and based on multi-angular data collected by the UAV. Inversion based on the multi-angular data performed slightly better than inversion based on nadir data, indicated by the decrease in RMSE from 0.70 to 0.65 m2/m2 for the estimation of LAI, and from 17.35 to 17.29 μg/cm2 for the estimation of LCC, when nadir data were used and when multi-angular data were used, respectively. In addition to inversions based on measured data, we simulated several datasets at different multi-angular configurations and compared the accuracy of the inversions of these datasets with the inversion based on data simulated at nadir position. In general, the results based on simulated (synthetic) data indicated that when more viewing angles, more well distributed viewing angles, and viewing angles up to larger zenith angles were available for inversion, the most accurate estimations were obtained. Interestingly, when using spectra simulated at multi-angular sampling configurations as were captured by the UAV platform (view zenith angles up to 30°), already a huge improvement could be obtained when compared to solely using spectra simulated at nadir position. The results of this study show that the estimation of LAI and LCC by numerical inversion of the PROSAIL model can be improved when multi-angular observations are introduced. However, for the potato crop, PROSAIL inversion for measured data only showed moderate accuracy and slight improvements.
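The numerical inversion described above can be illustrated with a small least-squares fit. In the Python sketch below, `forward_model` is a deliberately simple, hypothetical stand-in for PROSAIL (the real model would be driven by full leaf and canopy parameters and the sun/view geometry); only the structure of the inversion, fitting LAI and LCC to multi-angular reflectance, mirrors the approach described in the abstract.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch under assumptions: invert a toy canopy reflectance model against
# multi-angular spectra. The model, coefficients, and noise are hypothetical.
def forward_model(lai, lcc, vza_deg):
    """Toy reflectance in a red and a NIR band as a smooth function of LAI,
    LCC, and view zenith angle (placeholder for PROSAIL)."""
    vza = np.deg2rad(np.asarray(vza_deg, dtype=float))
    red = 0.25 * np.exp(-0.4 * lai) * (1 - 0.006 * lcc) * (1 + 0.1 * np.cos(vza))
    nir = 0.20 + 0.08 * lai * (1 + 0.05 * np.cos(vza))
    return np.concatenate([red, nir])

def invert(observed, vza_deg):
    cost = lambda p: forward_model(p[0], p[1], vza_deg) - observed
    fit = least_squares(cost, x0=[2.0, 40.0],
                        bounds=([0.1, 5.0], [8.0, 90.0]))  # LAI, LCC bounds
    return fit.x

vza_multi = np.array([0.0, 10.0, 20.0, 30.0])     # multi-angular configuration
truth = forward_model(3.2, 55.0, vza_multi)
noisy = truth + 0.001 * np.random.default_rng(2).standard_normal(truth.size)
print(invert(noisy, vza_multi))                    # approx. [3.2, 55.0]
```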
NASA Astrophysics Data System (ADS)
Fioretti, Valentina; Mineo, Teresa; Bulgarelli, Andrea; Dondero, Paolo; Ivanchenko, Vladimir; Lei, Fan; Lotti, Simone; Macculi, Claudio; Mantero, Alfonso
2017-12-01
Low energy protons (< 300 keV) can enter the field of view of X-ray telescopes, scatter on their mirror surfaces at small incident angles, and deposit energy on the detector. This phenomenon can cause intense background flares at the focal plane, decreasing the mission observing time (e.g. the XMM-Newton mission) or, in the most extreme cases, damaging the X-ray detector. A correct modelization of the physics processes responsible for grazing angle scattering is mandatory to evaluate the impact of such events on the performance (e.g. observation time, sensitivity) of future X-ray telescopes such as the ESA ATHENA mission. The Remizovich model describes particles reflected by solids at glancing angles in terms of the Boltzmann transport equation using the diffuse approximation and the model of continuous slowing down in energy. For the first time this solution, in the approximation of no energy losses, is implemented, verified, and qualitatively validated on top of the Geant4 release 10.2, with the possibility to add a constant energy loss to each interaction. This implementation is verified by comparing the simulated proton distribution to both the theoretical probability distribution and independent ray-tracing simulations. Both the new scattering physics and the Coulomb scattering already built into the official Geant4 distribution are used to reproduce the latest experimental results on grazing angle proton scattering. At 250 keV, multiple scattering delivers large proton angles and is not consistent with the observations. Among the tested models, single scattering seems to better reproduce the scattering efficiency at the three energies, but the energy loss obtained at small scattering angles is significantly lower than the experimental values. In general, the energy losses obtained in the experiment are higher than those obtained by the simulation. The experimental data are not completely representative of the soft proton scattering experienced by current X-ray telescopes because of the lack of measurements at low energies (< 200 keV) and small reflection angles, so we are not able to single out any of the tested models as the one that can certainly reproduce the scattering behavior of low energy protons expected for the ATHENA mission. We can, however, discard multiple scattering as the model able to reproduce soft proton funnelling, and affirm that Coulomb single scattering can represent, until further measurements at lower energies are available, the best approximation of the proton scattered angular distribution at the exit of X-ray optics.
Holographic elements and curved slit used to enlarge field of view in rocket detection system
NASA Astrophysics Data System (ADS)
Breton, Mélanie; Fortin, Jean; Lessard, Roger A.; Châteauneuf, Marc
2006-09-01
Rocket detection over a wide field of view is an important issue in the protection of light armored vehicles. Traditionally, the detection occurs in the UV band, but recent studies have shown the existence of significant emission peaks in the visible and near infrared at rocket launch time. The use of the visible region is interesting as a way to reduce the weight and cost of systems. Current methods to detect those specific peaks involve the use of interferometric filters; however, they fail to combine a wide angle with wavelength selectivity. A linear array of volume holographic elements combined with a curved exit slit is proposed for the development of a wide field of view sensor for the detection of solid propellant motor launch flash. The sensor is envisaged to trigger an active protection system. On the basis of geometric theory, a system has been designed. It consists of a collector, a linear array of holographic elements, a curved slit, and a detector. The collector is an off-axis parabolic mirror. The holographic elements are recorded by subdividing a hologram film into regions, each individually exposed with a different incidence angle. All regions have a common diffraction angle. The incidence angle determines the instantaneous field of view of each element. The volume hologram performs the function of separating and focusing the diffracted beam on an image plane to achieve wavelength filtering. The conical diffraction property is used to enlarge the field of view in elevation. A curved slit was designed to correspond to the oblique incidence of the holographic linear array; it is situated at the image plane and filters the diffracted spectrum toward the sensor. The field of view of the design was calculated to be 34 degrees. This was validated by a prototype tested during a field trial. Results are presented and analyzed. The system succeeded in detecting the rocket launch flash at the desired fields of view.
Digital mammography: comparative performance of color LCD and monochrome CRT displays.
Samei, Ehsan; Poolla, Ananth; Ulissey, Michael J; Lewin, John M
2007-05-01
To evaluate the comparative performance of high-fidelity liquid crystal display (LCD) and cathode ray tube (CRT) devices for mammography applications, and to assess the impact of LCD viewing angle on detection accuracy. Ninety 1 k x 1 k images were selected from a database of digital mammograms: 30 without any abnormality present, 30 with subtle masses, and 30 with subtle microcalcifications. The images were used with waived informed consent, Health Insurance Portability and Accountability Act compliance, and Institutional Review Board approval. With postprocessing presentation identical to those of the commercial mammography system used, 1 k x 1 k sections of images were viewed on a monochrome CRT and a color LCD in native grayscale, and with a grayscale representative of images viewed from a 30 degrees or 50 degrees off-normal viewing angle. Randomized images were independently scored by four experienced breast radiologists for the presence of lesions using a 0-100 grading scale. To compare diagnostic performance of the display modes, observer scores were analyzed using receiver operating characteristic (ROC) and analysis of variance. For masses and microcalcifications, the detection rate in terms of the area under the ROC curve (A(z)) showed a 2% increase and a 4% decrease from CRT to LCD, respectively. However, differences were not statistically significant (P > .05). The viewing angle data showed better microcalcification detection but lower mass detection at 30 degrees viewing orientation. The overall results varied notably from observer to observer yielding no statistically discernible trends across all observers, suggesting that within the 0-50 degrees viewing angle range and in a controlled observer experiment, the variation in the contrast response of the LCD has little or no impact on the detection of mammographic lesions. Although CRTs and LCDs differ in terms of angular response, resolution, noise, and color, these characteristics seem to have little influence on the detection of mammographic lesions. The results suggest comparable performance in clinical applications of the two devices.
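A small illustration of the statistic used above: the Python sketch below computes the area under the ROC curve (A_z) for two display conditions from synthetic 0-100 observer scores. The score distributions are invented and do not reproduce the study data; the sketch only shows how such a comparison is typically computed.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Illustrative only: ROC-based comparison of two display conditions using
# synthetic observer confidence scores on a 0-100 scale.
rng = np.random.default_rng(3)
truth = np.repeat([0, 1], 30)                    # 30 normal, 30 with lesion

scores_crt = np.clip(np.concatenate([rng.normal(35, 15, 30),
                                     rng.normal(55, 15, 30)]), 0, 100)
scores_lcd = np.clip(np.concatenate([rng.normal(35, 15, 30),
                                     rng.normal(53, 15, 30)]), 0, 100)

print("A_z CRT:", roc_auc_score(truth, scores_crt))
print("A_z LCD:", roc_auc_score(truth, scores_lcd))
```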
Site selection and directional models of deserts used for ERBE validation targets
NASA Technical Reports Server (NTRS)
Staylor, W. F.
1986-01-01
Broadband shortwave and longwave radiance measurements obtained from the Nimbus 7 Earth Radiation Budget scanner were used to develop reflectance and emittance models for the Sahara, Gibson, and Saudi Deserts. These deserts will serve as in-flight validation targets for the Earth Radiation Budget Experiment being flown on the Earth Radiation Budget Satellite and two National Oceanic and Atmospheric Administration polar satellites. The directional reflectance model derived for the deserts was a function of the sum and product of the cosines of the solar and viewing zenith angles, and thus reciprocity existed between these zenith angles. The emittance model was related by a power law of the cosine of the viewing zenith angle.
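One simple functional form consistent with this description, written only as an illustration (the abstract gives neither the coefficients nor the exact expression used by Staylor), is

```latex
\rho(\mu_0,\mu) \;=\; a \;+\; b\,(\mu_0 + \mu) \;+\; c\,\mu_0\,\mu,
\qquad \mu_0 = \cos\theta_s,\quad \mu = \cos\theta_v ,
```

which is symmetric under the exchange of \mu_0 and \mu, so reciprocity between the solar and viewing zenith angles holds by construction.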
2015-08-20
NASA's Cassini spacecraft captured this parting view showing the rough and icy crescent of Saturn's moon Dione following the spacecraft's last close flyby of the moon on Aug. 17, 2015. Cassini obtained a similar crescent view in 2005 (see PIA07745). The earlier view has an image scale about four times higher, but does not show the moon's full crescent as this view does. Five visible light (clear spectral filter), narrow-angle camera images were combined to create this mosaic view. The scene is an orthographic projection centered on terrain at 0.4 degrees north latitude, 30.6 degrees west longitude on Dione. An orthographic view is most like the view seen by a distant observer looking through a telescope. The view was acquired at distances ranging from approximately 37,000 miles (59,000 kilometers) to 47,000 miles (75,000 kilometers) from Dione and at a sun-Dione-spacecraft, or phase, angle of 145 degrees. Image scale is about 1,300 feet (400 meters) per pixel. North on Dione is up and rotated 34 degrees to the right. http://photojournal.jpl.nasa.gov/catalog/PIA19649
Estimation of Finger Joint Angles Based on Electromechanical Sensing of Wrist Shape.
Kawaguchi, Junki; Yoshimoto, Shunsuke; Kuroda, Yoshihiro; Oshiro, Osamu
2017-09-01
An approach to finger motion capture that places fewer restrictions on the usage environment and actions of the user is an important research topic in biomechanics and human-computer interaction. We proposed a system that electrically detects finger motion from the associated deformation of the wrist and estimates the finger joint angles using multiple regression models. A wrist-mounted sensing device with 16 electrodes detects deformation of the wrist from changes in electrical contact resistance at the skin. In this study, we experimentally investigated the accuracy of finger joint angle estimation, the adequacy of two multiple regression models, and the resolution of the estimation of total finger joint angles. In experiments, both the finger joint angles and the system output voltage were recorded as subjects performed flexion/extension of the fingers. These data were used for calibration using the least-squares method. The system was found to be capable of estimating the total finger joint angle with a root-mean-square error of 29-34 degrees. A multiple regression model with a second-order polynomial basis function was shown to be suitable for the estimation of all total finger joint angles, but not those of the thumb.
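To make the regression model above concrete, the Python sketch below fits a multiple regression with a second-order polynomial basis mapping 16 electrode signals to a total joint angle, using ordinary least squares on synthetic data. The data shapes, noise level, and feature construction are assumptions for illustration, not the calibration procedure of the actual device.

```python
import numpy as np

# Sketch with assumed shapes: second-order polynomial regression from 16
# sensor voltages to a total finger joint angle, fit by least squares.
def quad_features(V):
    """Design matrix [1, v_i, v_i*v_j (i<=j)] for each sample of 16 voltages."""
    n, m = V.shape
    cross = np.stack([V[:, i] * V[:, j]
                      for i in range(m) for j in range(i, m)], axis=1)
    return np.hstack([np.ones((n, 1)), V, cross])

rng = np.random.default_rng(4)
V_train = rng.standard_normal((200, 16))          # synthetic calibration data
true_w = rng.standard_normal(quad_features(V_train).shape[1])
angle_train = quad_features(V_train) @ true_w + 2.0 * rng.standard_normal(200)

w, *_ = np.linalg.lstsq(quad_features(V_train), angle_train, rcond=None)

V_test = rng.standard_normal((50, 16))
angle_pred = quad_features(V_test) @ w
print(angle_pred[:3])                             # predicted angles (degrees)
```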
Atmospheric Science Data Center
2013-04-16
... Gujarat), and in areas close to the earthquake epicenter. Research uses the unique capabilities of the Multi-angle Imaging ... Indo-Pakistani border, which were not easily accessible to survey teams on the ground. Changes in reflection at different view angles ...
Airborne Laser Polar Nephelometer
NASA Technical Reports Server (NTRS)
Grams, Gerald W.
1973-01-01
A polar nephelometer has been developed at NCAR to measure the angular variation of the intensity of light scattered by air molecules and particles. The system has been designed for airborne measurements using outside air ducted through a 5-cm diameter airflow tube; the sample volume is that which is common to the intersection of a collimated source beam and the detector field of view within the airflow tube. The source is a linearly polarized helium-neon laser beam. The optical system defines a collimated field of view (0.5 deg half-angle) through a series of diaphragms located behind a 172-mm focal length objective lens. A photomultiplier tube is located immediately behind an aperture in the focal plane of the objective lens. The laser beam is mechanically chopped (on-off) at a rate of 5 Hz; a two-channel pulse counter, synchronized to the laser output, measures the photomultiplier pulse rate with the light beam both on and off. The difference in these measured pulse rates is directly proportional to the intensity of the scattered light from the volume common to the intersection of the laser beam and the detector field of view. Measurements can be made at scattering angles from 15 deg to 165 deg with reference to the direction of propagation of the light beam. Intermediate angles are obtained by selecting the angular increments desired between these extreme angles (any multiple of 0.1 deg can be selected for the angular increment; 5 deg is used in normal operation). Pulses provided by digital circuits control a stepping motor which sequentially rotates the detector by pre-selected angular increments. The synchronous photon-counting system automatically begins measurement of the scattered-light intensity immediately after the rotation to a new angle has been completed. The instrument has been flown on the NASA Convair 990 airborne laboratory to obtain data on the complex index of refraction of atmospheric aerosols. A particle impaction device is operated simultaneously to collect particles from the same airflow tube used to make the scattered-light measurements. A size distribution function is obtained by analysis of the particles collected by the impaction device. Calculated values of the angular variation of the scattered-light intensity are obtained by applying Mie scattering theory to the observed size distribution function and assuming different values of the complex index of refraction of the particles. The calculated values are then compared with data on the actual variation of the scattered-light intensity obtained with the polar nephelometer. The most probable value of the complex refractive index is that which provides the best fit between the experimental light scattering data and the parameters calculated from the observed size distribution function.
2013-12-23
The globe of Saturn, seen here in natural color, is reminiscent of a holiday ornament in this wide-angle view from NASA's Cassini spacecraft. The characteristic hexagonal shape of Saturn's northern jet stream, somewhat yellow here, is visible. At the pole lies a Saturnian version of a high-speed hurricane, eye and all. This view is centered on terrain at 75 degrees north latitude, 120 degrees west longitude. Images taken using red, green and blue spectral filters were combined to create this natural-color view. The images were taken with the Cassini spacecraft wide-angle camera on July 22, 2013. This view was acquired at a distance of approximately 611,000 miles (984,000 kilometers) from Saturn. Image scale is 51 miles (82 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA17175
Smartphone-Guided Needle Angle Selection During CT-Guided Procedures.
Xu, Sheng; Krishnasamy, Venkatesh; Levy, Elliot; Li, Ming; Tse, Zion Tsz Ho; Wood, Bradford John
2018-01-01
In CT-guided intervention, translation from a planned needle insertion angle to the actual insertion angle is estimated only with the physician's visuospatial abilities. An iPhone app was developed to reduce reliance on operator ability to estimate and reproduce angles. The iPhone app overlays the planned angle on the smartphone's camera display in real-time based on the smartphone's orientation. The needle's angle is selected by visually comparing the actual needle with the guideline in the display. If the smartphone's screen is perpendicular to the planned path, the smartphone shows the Bull's-Eye View mode, in which the angle is selected after the needle's hub overlaps the tip in the camera. In phantom studies, we evaluated the accuracies of the hardware, the Guideline mode, and the Bull's-Eye View mode and showed the app's clinical efficacy. A proof-of-concept clinical case was also performed. The hardware accuracy was 0.37° ± 0.27° (mean ± SD). The mean error and navigation time were 1.0° ± 0.9° and 8.7 ± 2.3 seconds for a senior radiologist with 25 years' experience and 1.5° ± 1.3° and 8.0 ± 1.6 seconds for a junior radiologist with 4 years' experience. The accuracy of the Bull's-Eye View mode was 2.9° ± 1.1°. Combined CT and smart-phone guidance was significantly more accurate than CT-only guidance for the first needle pass (p = 0.046), which led to a smaller final targeting error (mean distance from needle tip to target, 2.5 vs 7.9 mm). Mobile devices can be useful for guiding needle-based interventions. The hardware is low cost and widely available. The method is accurate, effective, and easy to implement.
Denize, Erin Stewart; McDonald, Fraser; Sherriff, Martyn
2014-01-01
Objective To evaluate the relative importance of bilabial prominence in relation to other facial profile parameters in a normal population. Methods Profile stimulus images of 38 individuals (28 female and 10 male; ages 19-25 years) were shown to an unrelated group of first-year students (n = 42; ages 18-24 years). The images were individually viewed on a 17-inch monitor. The observers received standardized instructions before viewing. A six-question questionnaire was completed using a Likert-type scale. The responses were analyzed by ordered logistic regression to identify associations between profile characteristics and observer preferences. The Bayesian Information Criterion was used to select variables that explained observer preferences most accurately. Results Nasal, bilabial, and chin prominences; the nasofrontal angle; and lip curls had the greatest effect on overall profile attractiveness perceptions. The lip-chin-throat angle and upper lip curl had the greatest effect on forehead prominence perceptions. The bilabial prominence, nasolabial angle (particularly the lower component), and mentolabial angle had the greatest effect on nasal prominence perceptions. The bilabial prominence, nasolabial angle, chin prominence, and submental length had the greatest effect on lip prominence perceptions. The bilabial prominence, nasolabial angle, mentolabial angle, and submental length had the greatest effect on chin prominence perceptions. Conclusions More prominent lips, within normal limits, may be considered more attractive in the profile view. Profile parameters have a greater influence on their neighboring aesthetic units but indirectly influence related profile parameters, endorsing the importance of achieving an aesthetic balance between relative prominences of all aesthetic units of the facial profile. PMID:25133133
The solid angle hidden in polyhedron gravitation formulations
NASA Astrophysics Data System (ADS)
Werner, Robert A.
2017-03-01
Formulas of a homogeneous polyhedron's gravitational potential typically include two arctangent terms for every edge of every face and a special term to eliminate a possible facial singularity. However, the arctangent and singularity terms are equivalent to the face's solid angle viewed from the field point. A face's solid angle can be evaluated with a single arctangent, saving computation.
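For illustration, the Python sketch below evaluates a face's solid angle by fanning the face into triangles and applying the single-arctangent formula of Van Oosterom and Strackee for the solid angle of a triangle seen from the origin. This is one standard way to realize the idea described above, not necessarily the exact formulation used in the paper.

```python
import numpy as np

# Sketch: solid angle of a triangle subtended at the origin via a single
# arctangent (Van Oosterom & Strackee); a polygonal face is fanned into
# triangles and the contributions summed.
def triangle_solid_angle(r1, r2, r3):
    n1, n2, n3 = (np.linalg.norm(r) for r in (r1, r2, r3))
    numerator = np.dot(r1, np.cross(r2, r3))
    denominator = (n1 * n2 * n3 + np.dot(r1, r2) * n3
                   + np.dot(r1, r3) * n2 + np.dot(r2, r3) * n1)
    return 2.0 * np.arctan2(numerator, denominator)

# Check: one face of a cube centered on the field point subtends 4*pi/6
face = [np.array(v, dtype=float)
        for v in [(1, -1, -1), (1, 1, -1), (1, 1, 1), (1, -1, 1)]]
omega = (triangle_solid_angle(face[0], face[1], face[2])
         + triangle_solid_angle(face[0], face[2], face[3]))
print(omega, 4 * np.pi / 6)   # both ~2.0944 steradians
```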
1986-08-01
Prediction of the Aerodynamic Characteristics of Cruciform Missiles Including Effects of Roll Angle and Control Deflection, by Daniel J. Lesieutre, Michael R. Mendenhall, and Susana M. Nazario. Nielsen Engineering & Research, Inc., Mountain View, CA 94043.
View Angle Effects on MODIS Snow Mapping in Forests
NASA Technical Reports Server (NTRS)
Xin, Qinchuan; Woodcock, Curtis E.; Liu, Jicheng; Tan, Bin; Melloh, Rae A.; Davis, Robert E.
2012-01-01
Binary snow maps and fractional snow cover data are provided routinely from MODIS (Moderate Resolution Imaging Spectroradiometer). This paper investigates how the wide observation angles of MODIS influence the current snow mapping algorithm in forested areas. Theoretical modeling results indicate that large view zenith angles (VZA) can lead to underestimation of fractional snow cover (FSC) by reducing the amount of the ground surface that is viewable through forest canopies, and by increasing uncertainties during the gridding of MODIS data. At the end of the MODIS scan line, the total modeled error can be as much as 50% for FSC. Empirical analysis of MODIS/Terra snow products in four forest sites shows high fluctuation in FSC estimates on consecutive days. In addition, the normalized difference snow index (NDSI) values, which are the primary input to the MODIS snow mapping algorithms, decrease as VZA increases at the site level. At the pixel level, NDSI values have higher variances, and are correlated with the normalized difference vegetation index (NDVI) in snow covered forests. These findings are consistent with our modeled results, and imply that consideration of view angle effects could improve MODIS snow monitoring in forested areas.
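For reference, the quantities discussed above can be written compactly. The Python sketch below computes the normalized difference snow index from MODIS green (band 4) and shortwave-infrared (band 6) reflectance and converts it to fractional snow cover with a linear regression; the coefficients follow the published Salomonson and Appel (2004) relation and are stated here as background, not taken from this abstract.

```python
# Hedged sketch of NDSI and a linear NDSI-to-FSC regression of the form used
# in MODIS snow products (coefficients per Salomonson & Appel, 2004; check
# against the product collection actually in use).
def ndsi(green, swir):
    return (green - swir) / (green + swir)

def fractional_snow_cover(ndsi_value):
    return min(max(-0.01 + 1.45 * ndsi_value, 0.0), 1.0)

print(fractional_snow_cover(ndsi(0.6, 0.1)))   # bright snow, sparse canopy
print(fractional_snow_cover(ndsi(0.3, 0.2)))   # snow partly masked by forest
```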
Analysis of the flight dynamics of the Solar Maximum Mission (SMM) off-sun scientific pointing
NASA Technical Reports Server (NTRS)
Pitone, D. S.; Klein, J. R.
1989-01-01
Algorithms are presented which were created and implemented by the Goddard Space Flight Center's (GSFC's) Solar Maximum Mission (SMM) attitude operations team to support large-angle spacecraft pointing at scientific objectives. The mission objective of the post-repair SMM satellite was to study solar phenomena. However, because the scientific instruments, such as the Coronagraph/Polarimeter (CP) and the Hard X ray Burst Spectrometer (HXRBS), were able to view objects other than the Sun, attitude operations support for attitude pointing at large angles from the nominal solar-pointing attitudes was required. Subsequently, attitude support for SMM was provided for scientific objectives such as Comet Halley, Supernova 1987A, Cygnus X-1, and the Crab Nebula. In addition, the analysis was extended to include the reverse problem, computing the right ascension and declination of a body given the off-Sun angles. This analysis led to the computation of the orbits of seven new solar comets seen in the field-of-view (FOV) of the CP. The activities necessary to meet these large-angle attitude-pointing sequences, such as slew sequence planning, viewing-period prediction, and tracking-bias computation are described. Analysis is presented for the computation of maneuvers and pointing parameters relative to the SMM-unique, Sun-centered reference frame. Finally, science data and independent attitude solutions are used to evaluate the large-angle pointing performance.
Analysis of the flight dynamics of the Solar Maximum Mission (SMM) off-sun scientific pointing
NASA Technical Reports Server (NTRS)
Pitone, D. S.; Klein, J. R.; Twambly, B. J.
1990-01-01
Algorithms are presented which were created and implemented by the Goddard Space Flight Center's (GSFC's) Solar Maximum Mission (SMM) attitude operations team to support large-angle spacecraft pointing at scientific objectives. The mission objective of the post-repair SMM satellite was to study solar phenomena. However, because the scientific instruments, such as the Coronagraph/Polarimeter (CP) and the Hard X-ray Burst Spectrometer (HXRBS), were able to view objects other than the Sun, attitude operations support for attitude pointing at large angles from the nominal solar-pointing attitudes was required. Subsequently, attitude support for SMM was provided for scientific objectives such as Comet Halley, Supernova 1987A, Cygnus X-1, and the Crab Nebula. In addition, the analysis was extended to include the reverse problem, computing the right ascension and declination of a body given the off-Sun angles. This analysis led to the computation of the orbits of seven new solar comets seen in the field-of-view (FOV) of the CP. The activities necessary to meet these large-angle attitude-pointing sequences, such as slew sequence planning, viewing-period prediction, and tracking-bias computation are described. Analysis is presented for the computation of maneuvers and pointing parameters relative to the SMM-unique, Sun-centered reference frame. Finally, science data and independent attitude solutions are used to evaluate the large-angle pointing performance.
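The forward form of the off-Sun pointing geometry discussed in the two records above is simply the angular separation between the Sun direction and the target direction. The Python sketch below computes that separation from right ascension and declination; the target coordinates are approximately those of the Crab Nebula, and the Sun position is made up for the example (it is not mission data).

```python
import numpy as np

# Illustrative geometry only: off-Sun angle as the angular separation between
# two unit vectors built from right ascension and declination (degrees).
def unit_vector(ra_deg, dec_deg):
    ra, dec = np.deg2rad(ra_deg), np.deg2rad(dec_deg)
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def off_sun_angle(ra_target, dec_target, ra_sun, dec_sun):
    cosang = np.clip(unit_vector(ra_target, dec_target)
                     @ unit_vector(ra_sun, dec_sun), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))

# Crab Nebula (~RA 83.6 deg, Dec +22 deg) versus an assumed Sun position
print(off_sun_angle(83.6, 22.0, 120.0, 15.0), "degrees off-Sun")
```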
Multi-Angle View of the Canary Islands
NASA Technical Reports Server (NTRS)
2000-01-01
A multi-angle view of the Canary Islands in a dust storm, 29 February 2000. At left is a true-color image taken by the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. This image was captured by the MISR camera looking at a 70.5-degree angle to the surface, ahead of the spacecraft. The middle image was taken by the MISR downward-looking (nadir) camera, and the right image is from the aftward 70.5-degree camera. The images are reproduced using the same radiometric scale, so variations in brightness, color, and contrast represent true variations in surface and atmospheric reflectance with angle. Windblown dust from the Sahara Desert is apparent in all three images, and is much brighter in the oblique views. This illustrates how MISR's oblique imaging capability makes the instrument a sensitive detector of dust and other particles in the atmosphere. Data for all channels are presented in a Space Oblique Mercator map projection to facilitate their co-registration. The images are about 400 km (250 miles) wide, with a spatial resolution of about 1.1 kilometers (1,200 yards). North is toward the top. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Atmospheric Science Data Center
2014-05-15
Multi-angle views of the Appalachian Mountains, March 6, 2000. Atmospheric Science Data Center, Hampton, VA. Photo credit: NASA/GSFC/LaRC/JPL, MISR Science Team.
Eyjafjallajökull Ash Plume Particle Properties
2010-04-21
As NASA's Terra satellite flew over Iceland's erupting Eyjafjallajökull volcano, its Multi-angle Imaging SpectroRadiometer instrument acquired 36 near-simultaneous images of the ash plume, covering nine view angles in each of four wavelengths.
Three paths toward the quantum angle operator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gazeau, Jean Pierre, E-mail: gazeau@apc.univ-paris7.fr; Szafraniec, Franciszek Hugon, E-mail: franciszek.szafraniec@uj.edu.pl
2016-12-15
We examine mathematical questions around the angle (or phase) operator associated with a number operator through a short list of basic requirements. We implement three methods of construction of the quantum angle. The first one is based on operator theory and parallels the definition of angle for the upper half-circle through its cosine, completed by a sign inversion. The two other methods are integral quantizations generalizing in a certain sense the Berezin–Klauder approaches. One method pertains to Weyl–Heisenberg integral quantization of the plane viewed as the phase space of the motion on the line. It depends on a family of "weight" functions on the plane. The third method rests upon coherent state quantization of the cylinder viewed as the phase space of the motion on the circle. The construction of these coherent states depends on a family of probability distributions on the line.
Reproducing the hierarchy of disorder for Morpho-inspired, broad-angle color reflection
NASA Astrophysics Data System (ADS)
Song, Bokwang; Johansen, Villads Egede; Sigmund, Ole; Shin, Jung H.
2017-04-01
The scales of Morpho butterflies are covered with intricate, hierarchical ridge structures that produce a bright, blue reflection that remains stable across wide viewing angles. This effect has been researched extensively, and much understanding has been achieved using modeling that has focused on the positional disorder among the identical, multilayered ridges as the critical factor for producing angular independent color. Realizing such positional disorder of identical nanostructures is difficult, which in turn has limited experimental verification of different physical mechanisms that have been proposed. In this paper, we suggest an alternative model of inter-structural disorder that can achieve the same broad-angle color reflection, and is applicable to wafer-scale fabrication using conventional thin film technologies. Fabrication of a thin film that produces pure, stable blue across a viewing angle of more than 120 ° is demonstrated, together with a robust, conformal color coating.
Description of a landing site indicator (LASI) for light aircraft operation
NASA Technical Reports Server (NTRS)
Fuller, H. V.; Outlaw, B. K. E.
1976-01-01
An experimental cockpit-mounted head-up type display system was developed and evaluated by LaRC pilots during the landing phase of light aircraft operations. The Landing Site Indicator (LASI) system display consists of angle of attack, angle of sideslip, and indicated airspeed images superimposed on the pilot's view through the windshield. The information is made visible to the pilot by means of a partially reflective viewing screen which is suspended directly in front of the pilot's eyes. Synchro transmitters are operated by vanes, located at the left wing tip, which sense angle of attack and sideslip angle. Information is presented near the center of the display in the form of a moving index on a fixed grid. The airspeed is sensed by a pitot-static pressure transducer and is presented in numerical form at the top center of the display.
Estimation of canopy carotenoid content of winter wheat using multi-angle hyperspectral data
NASA Astrophysics Data System (ADS)
Kong, Weiping; Huang, Wenjiang; Liu, Jiangui; Chen, Pengfei; Qin, Qiming; Ye, Huichun; Peng, Dailiang; Dong, Yingying; Mortimer, A. Hugh
2017-11-01
Precise estimation of carotenoid (Car) content in crops using remote sensing data could be helpful for agricultural resources management. Conventional methods for Car content estimation have mostly been based on reflectance data acquired from the nadir direction. However, reflectance acquired in this direction is highly influenced by canopy structure and soil background reflectance. Off-nadir observation is less affected, and multi-angle viewing data are proven to contain additional information rarely exploited for crop Car content estimation. The objective of this study was to explore the potential of multi-angle observation data for winter wheat canopy Car content estimation. Canopy spectral reflectance was measured from nadir as well as from a series of off-nadir directions during different growing stages of winter wheat, with concurrent canopy Car content measurements. Correlation analyses were performed between Car content and the original and continuum-removed spectral reflectance. Spectral features and previously published indices were derived from data obtained at different viewing angles and were tested for Car content estimation. Results showed that spectral features and indices obtained from backscattering directions between 20° and 40° view zenith angle had a stronger correlation with Car content than those from the nadir direction, and the strongest correlation was observed at about the 30° backscattering direction. The spectral absorption depth at 500 nm derived from spectral data obtained in the 30° backscattering direction was found to greatly reduce the difference induced by plant cultivars. It was the most suitable feature for winter wheat canopy Car estimation, with a coefficient of determination of 0.79 and a root mean square error of 19.03 mg/m2. This work indicates the importance of taking viewing geometry effects into account when using spectral features/indices and provides new insight into the application of multi-angle remote sensing for the estimation of crop physiology.
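The spectral absorption depth used above is typically obtained by continuum removal over an absorption feature. The Python sketch below computes a continuum-removed band depth at 500 nm for a synthetic spectrum; the shoulder wavelengths, the synthetic reflectance curve, and the function name are assumptions for illustration, not the feature definition used in the study.

```python
import numpy as np

# Sketch under assumptions: continuum removal over a blue-green absorption
# feature and the band depth at 500 nm. Shoulder wavelengths are assumed.
def band_depth(wavelength, reflectance, left=470.0, right=560.0, center=500.0):
    wl = np.asarray(wavelength, dtype=float)
    r = np.asarray(reflectance, dtype=float)
    r_left, r_right = np.interp([left, right], wl, r)
    continuum = r_left + (r_right - r_left) * (wl - left) / (right - left)
    removed = r / continuum                 # continuum-removed reflectance
    return 1.0 - np.interp(center, wl, removed)

# Synthetic spectrum with a mild absorption dip centered near 500 nm
wl = np.arange(450.0, 601.0, 10.0)
refl = 0.05 + 0.0008 * (wl - 450.0) - 0.02 * np.exp(-((wl - 500.0) / 25.0) ** 2)
print(band_depth(wl, refl))                 # absorption depth at 500 nm
```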
Kim, Jong-Ahn; Kim, Jae Wan; Kang, Chu-Shik; Jin, Jonghan; Eom, Tae Bong
2011-11-01
We present an angle generator with high resolution and accuracy, which uses multiple ultrasonic motors and a self-calibratable encoder. A cylindrical air bearing guides the rotational motion, and the ultrasonic motors achieve high resolution over the full circle range with a simple configuration. The self-calibratable encoder can compensate for the scale error of the divided circle (signal period: 20") effectively by applying the equal-division-averaged method. The angle generator configures a position feedback control loop using the readout of the encoder. By combining the ac and dc operation modes, the angle generator produced stepwise angular motion with 0.005" resolution. We also evaluated the performance of the angle generator using a precision angle encoder and an autocollimator. The expanded uncertainty (k = 2) in the angle generation was estimated to be less than 0.03", which includes the calibrated scale error and the nonlinearity error. © 2011 American Institute of Physics
Liu, Zhe; Jiang, Liwei; Zheng, Yisong
2015-02-04
By means of an appropriate wave function connection condition, we study the electronic structure of a line defect superlattice of graphene with the Dirac equation method. We obtain the analytical dispersion relation, which reproduces well the tight-binding numerical result for the band structure of the superlattice. We then generalize this theoretical method to study electronic transmission through a potential barrier in which multiple line defects are periodically patterned. We find that there exists a critical incident angle which restricts electronic transmission through multiple line defects to a specific range of incident angles. The critical angle depends sensitively on the potential barrier height, which can be modulated by a gate voltage. As a result, non-trivial transmissions of K and K' valley electrons are restricted to two distinct ranges of the incident angle. Our theoretical result demonstrates that a gate voltage can act as a feasible means to tune the valley polarization when electrons tunnel through multiple line defects.
Minimum viewing angle for visually guided ground speed control in bumblebees.
Baird, Emily; Kornfeldt, Torill; Dacke, Marie
2010-05-01
To control flight, flying insects extract information from the pattern of visual motion generated during flight, known as optic flow. To regulate their ground speed, insects such as honeybees and Drosophila hold the rate of optic flow in the axial direction (front-to-back) constant. A consequence of this strategy is that its performance varies with the minimum viewing angle (the deviation from the frontal direction of the longitudinal axis of the insect) at which changes in axial optic flow are detected. The greater this angle, the later changes in the rate of optic flow, caused by changes in the density of the environment, will be detected. The aim of the present study is to examine the mechanisms of ground speed control in bumblebees and to identify the extent of the visual range over which optic flow for ground speed control is measured. Bumblebees were trained to fly through an experimental tunnel consisting of parallel vertical walls. Flights were recorded when (1) the distance between the tunnel walls was either 15 or 30 cm, (2) the visual texture on the tunnel walls provided either strong or weak optic flow cues and (3) the distance between the walls changed abruptly halfway along the tunnel's length. The results reveal that bumblebees regulate ground speed using optic flow cues and that changes in the rate of optic flow are detected at a minimum viewing angle of 23-30 deg., with a visual field that extends to approximately 155 deg. By measuring optic flow over a visual field that has a low minimum viewing angle, bumblebees are able to detect and respond to changes in the proximity of the environment well before they are encountered.
Characterization Approaches to Place Invariant Sites on SI-Traceable Scales
NASA Technical Reports Server (NTRS)
Thome, Kurtis
2012-01-01
The effort to understand the Earth's climate system requires a complete integration of remote sensing imager data across time and multiple countries. Such an integration necessarily requires ensuring inter-consistency between multiple sensors to create the data sets needed to understand the climate system. Past efforts at inter-consistency have forced agreement between two sensors using sources that are viewed by both sensors at nearly the same time, and thus tend to be near polar regions over snow and ice. The current work describes a method that would provide an absolute radiometric calibration of a sensor rather than an inter-consistency of a sensor relative to another. The approach also relies on defensible error budgets that eventually provide a cross comparison of sensors without systematic errors. The basis of the technique is a model-based, SI-traceable prediction of at-sensor radiance over selected sites. The predicted radiance would be valid for arbitrary view and illumination angles and for any date of interest that is dominated by clear-sky conditions. The effort effectively works to characterize the sites as sources with known top-of-atmosphere radiance, allowing accurate intercomparison of sensor data without the need for coincident views. Data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Enhanced Thematic Mapper Plus (ETM+), and Moderate Resolution Imaging Spectroradiometer (MODIS) are used to demonstrate the difficulties of cross calibration as applied to current sensors. Special attention is given to the differences caused in the cross-comparison of sensors in radiance space as opposed to reflectance space. The radiance comparisons lead to significant differences created by the specific solar model used for each sensor. The paper also proposes methods to mitigate the largest error sources in future systems. The results from these historical intercomparisons provide the basis for a set of recommendations to ensure future SI-traceable cross calibration using future missions such as CLARREO and TRUTHS. The paper describes a proposed approach that relies on model-based, SI-traceable predictions of at-sensor radiance over selected sites. The predicted radiance would be valid for arbitrary view and illumination angles and for any date of interest that is dominated by clear-sky conditions. The basis of the method is highly accurate measurements of at-sensor radiance of sufficient quality to understand the spectral and BRDF characteristics of the site and sufficient historical data to develop an understanding of temporal effects from changing surface and atmospheric conditions.
SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotoku, J; Nakabayashi, S; Kumagai, S
Purpose: To obtain 3D surface data of a patient in a non-invasive way can substantially reduce the effort for patient registration in radiation therapy. To achieve this goal, we introduced the multiple view stereo technique, which is known from 'photo tourism' applications on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D reconstruction was initialized by locating SIFT features, which are robust to rotation and shift variance, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including camera positions, to minimize the reprojection error by use of the bundle adjustment technique (non-linear optimization). As a final step, we performed dense reconstruction and associated a color with each point using the PMVS library. Results: Surface data were reconstructed well by visual inspection. The human skin is reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI (25861128)
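The pairwise step of the pipeline described above (feature detection, matching, and robust fundamental-matrix estimation) can be sketched with standard tools. The snippet below is a minimal illustration using OpenCV on two placeholder image files, standing in for the eight-point-plus-RANSAC step in the abstract; the remaining stages (bundle adjustment, PMVS dense reconstruction) are not reproduced, and the original work's exact parameters are unknown.

```python
# Hedged sketch of one image pair in a multi-view stereo pipeline:
# SIFT matching followed by robust fundamental-matrix estimation.
# File names are placeholders.
import cv2
import numpy as np

img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching to keep distinctive correspondences
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Fundamental matrix with RANSAC; `mask` flags the inlier correspondences
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
print(f"{int(mask.sum())} inliers of {len(good)} matches")
```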
Multiple View Zenith Angle Observations of Reflectance From Ponderosa Pine Stands
NASA Technical Reports Server (NTRS)
Johnson, Lee F.; Lawless, James G. (Technical Monitor)
1994-01-01
Reflectance factors (RF(λ)) from dense and sparse ponderosa pine (Pinus ponderosa) stands, derived from radiance data collected in the solar principal plane by the Advanced Solid-State Array Spectroradiometer (ASAS), were examined as a function of view zenith angle (θv). RF(λ) was maximized with θv nearest the solar retrodirection, and minimized near the specular direction throughout the ASAS spectral region. The dense stand had much higher RF anisotropy (maximum RF / minimum RF) in the red region than did the sparse stand (relative differences of 5.3 vs. 2.75, respectively), as a function of θv, due to the shadow component in the canopy. Anisotropy in the near-infrared (NIR) was more similar between the two stands (2.5 in the dense stand and 2.25 in the sparse stand); the dense stand exhibited a greater hotspot effect than the sparse stand in this spectral region. Two common vegetation transforms, the NIR/red ratio and the normalized difference vegetation index (NDVI), both showed a θv dependence for the dense stand. Minimum values occurred near the retrodirection and maximum values occurred near the specular direction. Greater relative differences were noted for the NIR/red ratio (2.1) than for the NDVI (1.3). The sparse stand showed no obvious dependence on θv for either transform, except for slightly elevated values toward the specular direction.
Shortwave radiation parameterization scheme for subgrid topography
NASA Astrophysics Data System (ADS)
Helbig, N.; LöWe, H.
2012-02-01
Topography is well known to alter the shortwave radiation balance at the surface. A detailed radiation balance is therefore required in mountainous terrain. In order to maintain the computational performance of large-scale models while at the same time increasing grid resolutions, subgrid parameterizations are gaining more importance. A complete radiation parameterization scheme for subgrid topography accounting for shading, limited sky view, and terrain reflections is presented. Each radiative flux is parameterized individually as a function of sky view factor, slope and sun elevation angle, and albedo. We validated the parameterization with domain-averaged values computed from a distributed radiation model which includes a detailed shortwave radiation balance. Furthermore, we quantify the individual topographic impacts on the shortwave radiation balance. Rather than using a limited set of real topographies we used a large ensemble of simulated topographies with a wide range of typical terrain characteristics to study all topographic influences on the radiation balance. To this end slopes and partial derivatives of seven real topographies from Switzerland and the United States were analyzed and Gaussian statistics were found to best approximate real topographies. Parameterized direct beam radiation presented previously compared well with modeled values over the entire range of slope angles. The approximation of multiple, anisotropic terrain reflections with single, isotropic terrain reflections was confirmed as long as domain-averaged values are considered. The validation of all parameterized radiative fluxes showed that it is indeed not necessary to compute subgrid fluxes in order to account for all topographic influences in large grid sizes.
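For orientation, the individual topographic terms that such a scheme aggregates over a grid cell can be written down at a point: a slope-dependent direct beam, diffuse radiation weighted by the sky view factor, and a single isotropic terrain-reflected contribution. The sketch below gives these textbook point formulas under stated assumptions; it is not the paper's fitted subgrid parameterization.

```python
# Hedged sketch of per-pixel topographic shortwave terms: tilted-surface
# direct beam, sky-view-scaled diffuse radiation, and an isotropic
# terrain-reflected term. Inputs are illustrative.
import numpy as np

def sw_components(S_dir_flat, S_dif_flat, albedo, slope, aspect,
                  sun_zenith, sun_azimuth, sky_view):
    """All angles in radians; fluxes in W m-2; sky_view in [0, 1]."""
    # Cosine of the local illumination angle on the tilted surface
    cos_theta = (np.cos(slope) * np.cos(sun_zenith) +
                 np.sin(slope) * np.sin(sun_zenith) * np.cos(sun_azimuth - aspect))
    direct = S_dir_flat * np.maximum(cos_theta, 0.0) / np.cos(sun_zenith)
    diffuse = S_dif_flat * sky_view
    # Isotropic terrain reflection from the surrounding (1 - sky_view) hemisphere
    terrain = albedo * (S_dir_flat + S_dif_flat) * (1.0 - sky_view)
    return direct, diffuse, terrain

d, f, t = sw_components(S_dir_flat=600.0, S_dif_flat=120.0, albedo=0.6,
                        slope=np.radians(30), aspect=np.radians(180),
                        sun_zenith=np.radians(40), sun_azimuth=np.radians(170),
                        sky_view=0.9)
print(f"direct={d:.0f}  diffuse={f:.0f}  terrain-reflected={t:.0f} W/m2")
```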
NASA Astrophysics Data System (ADS)
Garay, Michael J.; Kalashnikova, Olga V.; Bull, Michael A.
2017-04-01
Since early 2000, the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite has been acquiring data that have been used to produce aerosol optical depth (AOD) and particle property retrievals at 17.6 km spatial resolution. Capitalizing on the capabilities provided by multi-angle viewing, the current operational (Version 22) MISR algorithm performs well, with about 75 % of MISR AOD retrievals globally falling within 0.05 or 20 % × AOD of paired validation data from the ground-based Aerosol Robotic Network (AERONET). This paper describes the development and assessment of a prototype version of a higher-spatial-resolution 4.4 km MISR aerosol optical depth product compared against multiple AERONET Distributed Regional Aerosol Gridded Observations Network (DRAGON) deployments around the globe. In comparisons with AERONET-DRAGON AODs, the 4.4 km resolution retrievals show improved correlation (r = 0.9595), smaller RMSE (0.0768), reduced bias (-0.0208), and a larger fraction within the expected error envelope (80.92 %) relative to the Version 22 MISR retrievals.
SAR (Synthetic Aperture Radar). Earth observing system. Volume 2F: Instrument panel report
NASA Technical Reports Server (NTRS)
1987-01-01
The scientific and engineering requirements for the Earth Observing System (EOS) imaging radar are provided. The radar is based on Shuttle Imaging Radar-C (SIR-C), and would include three frequencies: 1.25 GHz, 5.3 GHz, and 9.6 GHz; selectable polarizations for both transmit and receive channels; and selectable incidence angles from 15 to 55 deg. There would be three main viewing modes: a local high-resolution mode with typically 25 m resolution and 50 km swath width; a regional mapping mode with 100 m resolution and up to 200 km swath width; and a global mapping mode with typically 500 m resolution and up to 700 km swath width. The last mode allows global coverage in three days. The EOS SAR will be the first orbital imaging radar to provide multifrequency, multipolarization, multiple incidence angle observations of the entire Earth. Combined with Canadian and Japanese satellites, continuous radar observation capability will be possible. Major applications in the areas of glaciology, hydrology, vegetation science, oceanography, geology, and data and information systems are described.
Development of the Multi-Angle Stratospheric Aerosol Radiometer (MASTAR) Instrument
NASA Astrophysics Data System (ADS)
DeLand, M. T.; Colarco, P. R.; Kowalewski, M. G.; Gorkavyi, N.; Ramos-Izquierdo, L.
2017-12-01
Aerosol particles in the stratosphere (15-25 km altitude), both produced naturally and perturbed by volcanic eruptions and anthropogenic emissions, continue to be a source of significant uncertainty in the Earth's energy budget. Stratospheric aerosols can offset some of the warming effects caused by greenhouse gases. These aerosols are currently monitored using measurements from the Ozone Mapping and Profiling Suite (OMPS) Limb Profiler (LP) instrument on the Suomi NPP satellite. In order to improve the sensitivity and spatial coverage of these aerosol data, we are developing an aerosol-focused compact version of the OMPS LP sensor called the Multi-Angle Stratospheric Aerosol Radiometer (MASTAR) to fly on a 3U CubeSat satellite, using a NASA Instrument Incubator Program (IIP) grant. This instrument will make limb viewing measurements of the atmosphere in multiple directions simultaneously, and uses only a few selected wavelengths to reduce size and cost. An initial prototype version has been constructed using NASA GSFC internal funding and tested in the laboratory. Current design work is targeted towards a preliminary field test in Spring 2018. We will discuss the scientific benefits of MASTAR and the status of the project.
Aspects of Voyager photogrammetry
NASA Technical Reports Server (NTRS)
Wu, Sherman S. C.; Schafer, Francis J.; Jordan, Raymond; Howington, Annie-Elpis
1987-01-01
In January 1986, Voyager 2 took a series of pictures of Uranus and its satellites with the Imaging Science System (ISS) on board the spacecraft. Based on six stereo images from the ISS narrow-angle camera, a topographic map was compiled of the Southern Hemisphere of Miranda, one of Uranus' moons. Assuming a spherical figure, a 20-km surface relief is shown on the map. With three additional images from the ISS wide-angle camera, a control network of Miranda's Southern Hemisphere was established by analytical photogrammetry, producing 88 ground points for the control of multiple-model compilation on the AS-11AM analytical stereoplotter. Digital terrain data from the topographic map of Miranda have also been produced. By combining these data and the image data from the Voyager 2 mission, perspective views or even a movie of the mapped area can be made. The application of these newly developed techniques to Voyager 1 imagery, which includes a few overlapping pictures of Io and Ganymede, permits the compilation of contour maps or topographic profiles of these bodies on the analytical stereoplotters.
Efficient fabrication method of nano-grating for 3D holographic display with full parallax views.
Wan, Wenqiang; Qiao, Wen; Huang, Wenbin; Zhu, Ming; Fang, Zongbao; Pu, Donglin; Ye, Yan; Liu, Yanhua; Chen, Linsen
2016-03-21
Without any special glasses, multiview 3D displays based on diffractive optics can present high-resolution, full-parallax 3D images over an ultra-wide viewing angle. The enabling optical component, namely the phase plate, can produce arbitrarily distributed view zones by carefully designing the orientation and the period of each nano-grating pixel. However, such 3D display screens have been restricted to a limited size due to the time-consuming process of fabricating nano-gratings on the phase plate. In this paper, we proposed and developed a lithography system that can fabricate the phase plate efficiently. Here we made two phase plates with full nano-grating pixel coverage at a speed of 20 mm2/min, a 500-fold increase in efficiency compared to E-beam lithography. One 2.5-inch phase plate generated 9-view 3D images with horizontal parallax, while the other 6-inch phase plate produced 64-view 3D images with full parallax. The angular divergence in the horizontal and vertical axes was 1.5 degrees and 1.25 degrees, respectively, slightly larger than the value of 1.2 degrees simulated by the Finite Difference Time Domain (FDTD) method. The intensity variation was less than 10% for each viewpoint, consistent with the simulation results. On top of each phase plate, a high-resolution binary masking pattern containing the amplitude information of all viewing zones was well aligned. We achieved a resolution of 400 pixels/inch and a viewing angle of 40 degrees for 9-view 3D images with horizontal parallax. In another prototype, the resolution of each view was 160 pixels/inch and the viewing angle was 50 degrees for 64-view 3D images with full parallax. As demonstrated in the experiments, the homemade lithography system provides the key fabrication technology for multiview 3D holographic displays.
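The per-pixel design rule alluded to above (each nano-grating's period and in-plane orientation steer its first diffraction order toward an assigned view zone) can be sketched with the scalar grating equation. The snippet below assumes normal-incidence, single-wavelength illumination and first-order diffraction; the function name and geometry are illustrative, and the authors' actual design procedure may differ.

```python
# Hedged sketch: choose a nano-grating period and groove orientation so
# that its first diffraction order points from a screen pixel toward an
# assigned viewpoint. Uses the grating equation (period * sin(theta) =
# m * lambda) for normal-incidence illumination; numbers are placeholders.
import numpy as np

def grating_for_viewpoint(pixel_xy, view_xyz, wavelength_nm=532.0, order=1):
    """Return (period_nm, orientation_deg) steering the first order from a
    screen pixel at `pixel_xy` (mm, z = 0) toward `view_xyz` (mm)."""
    dx = view_xyz[0] - pixel_xy[0]
    dy = view_xyz[1] - pixel_xy[1]
    dz = view_xyz[2]
    theta = np.arctan2(np.hypot(dx, dy), dz)        # polar angle of the outgoing ray
    azimuth = np.degrees(np.arctan2(dy, dx))        # direction toward the viewer
    period = order * wavelength_nm / np.sin(theta)  # grating equation, normal incidence
    # Grating grooves run perpendicular to the steering azimuth
    orientation = (azimuth + 90.0) % 180.0
    return period, orientation

p, o = grating_for_viewpoint(pixel_xy=(10.0, -5.0), view_xyz=(0.0, 0.0, 300.0))
print(f"period = {p:.0f} nm, groove orientation = {o:.1f} deg")
```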
Optimization of spine surgery planning with 3D image templating tools
NASA Astrophysics Data System (ADS)
Augustine, Kurt E.; Huddleston, Paul M.; Holmes, David R., III; Shridharani, Shyam M.; Robb, Richard A.
2008-03-01
The current standard of care for patients with spinal disorders involves a thorough clinical history, physical exam, and imaging studies. Simple radiographs provide a valuable assessment but prove inadequate for surgery planning because of the complex 3-dimensional anatomy of the spinal column and the close proximity of the neural elements, large blood vessels, and viscera. Currently, clinicians still use primitive techniques such as paper cutouts, pencils, and markers in an attempt to analyze and plan surgical procedures. 3D imaging studies are routinely ordered prior to spine surgeries but are currently limited to generating simple, linear and angular measurements from 2D views orthogonal to the central axis of the patient. Complex spinal corrections require more accurate and precise calculation of 3D parameters such as oblique lengths, angles, levers, and pivot points within individual vertebra. We have developed a clinician friendly spine surgery planning tool which incorporates rapid oblique reformatting of each individual vertebra, followed by interactive templating for 3D placement of implants. The template placement is guided by the simultaneous representation of multiple 2D section views from reformatted orthogonal views and a 3D rendering of individual or multiple vertebrae enabling superimposition of virtual implants. These tools run efficiently on desktop PCs typically found in clinician offices or workrooms. A preliminary study conducted with Mayo Clinic spine surgeons using several actual cases suggests significantly improved accuracy of pre-operative measurements and implant localization, which is expected to increase spinal procedure efficiency and safety, and reduce time and cost of the operation.
Space for Audio-Visual Large Group Instruction.
ERIC Educational Resources Information Center
Gausewitz, Carl H.
With an increasing interest in and utilization of audio-visual media in education facilities, it is important that standards are established for estimating the space required for viewing these various media. This monograph suggests such standards for viewing areas, viewing angles, seating patterns, screen characteristics and equipment performances…
Novel angle estimation for bistatic MIMO radar using an improved MUSIC
NASA Astrophysics Data System (ADS)
Li, Jianfeng; Zhang, Xiaofei; Chen, Han
2014-09-01
In this article, we study the problem of angle estimation for bistatic multiple-input multiple-output (MIMO) radar and propose an improved multiple signal classification (MUSIC) algorithm for joint direction of departure (DOD) and direction of arrival (DOA) estimation. The proposed algorithm obtains initial angle estimates from the signal subspace and uses local one-dimensional peak searches to achieve the joint estimation of DOD and DOA. The angle estimation performance of the proposed algorithm is better than that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and is almost the same as that of two-dimensional MUSIC. Furthermore, the proposed algorithm is suitable for irregular array geometries, obtains automatically paired DOD and DOA estimates, and avoids two-dimensional peak searching. The simulation results verify the effectiveness and improvement of the algorithm.
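To make the subspace machinery concrete, the sketch below evaluates a plain 2-D MUSIC spectrum for joint DOD/DOA on simulated data from half-wavelength transmit and receive ULAs. It is a baseline illustration only: the article's improved algorithm replaces the exhaustive 2-D search with initial subspace-based estimates refined by local 1-D searches, and the array sizes, SNR, and grid used here are arbitrary assumptions.

```python
# Hedged sketch of 2-D MUSIC for joint DOD/DOA in a bistatic MIMO radar
# with half-wavelength transmit/receive ULAs, on simulated data.
import numpy as np

M, N, K, snaps = 4, 6, 1, 200           # Tx elements, Rx elements, targets, snapshots
rng = np.random.default_rng(0)
dod_true, doa_true = np.radians(10.0), np.radians(-20.0)

def steer(n_elem, angle):
    return np.exp(1j * np.pi * np.arange(n_elem) * np.sin(angle))

a = np.kron(steer(M, dod_true), steer(N, doa_true))          # virtual array response
s = (rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M * N, snaps)) + 1j * rng.standard_normal((M * N, snaps)))
X = np.outer(a, s) + noise

R = X @ X.conj().T / snaps                                    # sample covariance
w, V = np.linalg.eigh(R)
En = V[:, : M * N - K]                                        # noise subspace (smallest eigenvalues)
P_noise = En @ En.conj().T

grid = np.radians(np.arange(-60, 60.5, 0.5))
spec = np.empty((grid.size, grid.size))
for ii, dod in enumerate(grid):
    at = steer(M, dod)
    for jj, doa in enumerate(grid):
        v = np.kron(at, steer(N, doa))
        spec[ii, jj] = 1.0 / np.real(v.conj() @ P_noise @ v)  # MUSIC pseudo-spectrum

ii, jj = np.unravel_index(np.argmax(spec), spec.shape)
print(f"DOD ~ {np.degrees(grid[ii]):.1f} deg, DOA ~ {np.degrees(grid[jj]):.1f} deg")
```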
The moon illusion: a different view through the legs.
Coren, S
1992-12-01
The fact that the overestimation of the horizon moon is reduced when individuals bend over and view it through their legs has been used as support for theories of the moon illusion based upon angle of regard and vestibular inputs. Inversion of the visual scene, however, can also reduce the salience of depth cues, so illusion reduction might be consistent with size-constancy explanations. A sample of 70 subjects viewed normal and inverted pictorial arrays. The moon illusion was reduced in the inverted arrays, suggesting that the "through the legs" reduction of the moon illusion may reflect the alteration in perceived depth associated with scene inversion rather than angle of regard or vestibular effects.
2015-10-15
NASA's Cassini spacecraft zoomed by Saturn's icy moon Enceladus on Oct. 14, 2015, capturing this stunning image of the moon's north pole. A companion view from the wide-angle camera (PIA20010) shows a zoomed out view of the same region for context. Scientists expected the north polar region of Enceladus to be heavily cratered, based on low-resolution images from the Voyager mission, but high-resolution Cassini images show a landscape of stark contrasts. Thin cracks cross over the pole -- the northernmost extent of a global system of such fractures. Before this Cassini flyby, scientists did not know if the fractures extended so far north on Enceladus. North on Enceladus is up. The image was taken in visible green light with the Cassini spacecraft narrow-angle camera. The view was acquired at a distance of approximately 4,000 miles (6,000 kilometers) from Enceladus and at a Sun-Enceladus-spacecraft, or phase, angle of 9 degrees. Image scale is 115 feet (35 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA19660
NASA Technical Reports Server (NTRS)
Simard, M.; Riel, Bryan; Hensley, S.; Lavalle, Marco
2011-01-01
Radar backscatter data contain both geometric and radiometric distortions due to underlying topography and the radar viewing geometry. Our objective is to develop a radiometric correction algorithm specific to the UAVSAR system configuration that would improve retrieval of forest structure parameters. UAVSAR is an airborne L-band radar capable of repeat-pass interferometry, producing images with a spatial resolution of 5 m. It is characterized by an electronically steerable antenna to compensate for aircraft attitude. Thus, the computation of viewing angles (i.e. look, incidence and projection) must include aircraft attitude angles (i.e. yaw, pitch and roll) in addition to the antenna steering angle. In this presentation, we address two components of radiometric correction: area projection and vegetation reflectivity. The first correction is applied by normalization of the radar backscatter by the local ground area illuminated by the radar beam. The second is a correction due to changes in vegetation reflectivity with viewing geometry.
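As a rough illustration of the area-normalization step, the sketch below computes a local incidence angle from the radar look direction and a slope/aspect terrain normal, then rescales backscatter by a first-order 1/sin(incidence) illuminated-area model. The vectors, angles, and flat-terrain reference are illustrative assumptions; the UAVSAR processing additionally folds aircraft yaw, pitch, roll and the electronic antenna steering angle into the look-vector computation, which is omitted here.

```python
# Hedged sketch of terrain area normalization for radar backscatter.
# Geometry and the 1/sin(incidence) area model are simplifying assumptions.
import numpy as np

def area_normalize(sigma0_db, look_vec_enu, slope, aspect):
    """Backscatter corrected from a flat-Earth to a local-area normalization.
    `look_vec_enu` points from the aircraft toward the pixel (east, north, up)."""
    # Unit terrain normal from slope/aspect (radians; aspect clockwise from north)
    n = np.array([np.sin(slope) * np.sin(aspect),
                  np.sin(slope) * np.cos(aspect),
                  np.cos(slope)])
    to_radar = -np.asarray(look_vec_enu, dtype=float)
    to_radar /= np.linalg.norm(to_radar)
    theta_local = np.arccos(np.clip(np.dot(n, to_radar), -1.0, 1.0))
    theta_flat = np.arccos(np.clip(to_radar[2], -1.0, 1.0))
    # Illuminated ground area per resolution cell ~ 1/sin(incidence)
    return sigma0_db + 10.0 * np.log10(np.sin(theta_local) / np.sin(theta_flat))

corrected = area_normalize(sigma0_db=-8.5,
                           look_vec_enu=(0.0, 0.7, -0.7),   # looking north, 45 deg down
                           slope=np.radians(15), aspect=np.radians(0))
print(f"area-normalized backscatter: {corrected:.2f} dB")
```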
2015-11-09
Although Epimetheus appears to be lurking above the rings here, it's actually just an illusion resulting from the viewing angle. In reality, Epimetheus and the rings both orbit in Saturn's equatorial plane. Inner moons and rings orbit very near the equatorial plane of each of the four giant planets in our solar system, but more distant moons can have orbits wildly out of the equatorial plane. It has been theorized that the highly inclined orbits of the outer, distant moons are remnants of the random directions from which they approached the planets they orbit. This view looks toward the unilluminated side of the rings from about 0.3 degrees below the ringplane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on July 26, 2015. The view was obtained at a distance of approximately 500,000 miles (800,000 kilometers) from Epimetheus and at a Sun-Epimetheus-spacecraft, or phase, angle of 62 degrees. Image scale is 3 miles (5 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA18342
Emissive and reflective properties of curved displays in relation to image quality
NASA Astrophysics Data System (ADS)
Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique; Blanc, Pierre; Sandré-Chardonnal, Etienne
2016-03-01
Different aspects of the characterization of curved displays are presented. The limit of validity of viewing angle measurements without angular distortion on such displays using a goniometer or Fourier-optics viewing-angle instrument is given. If the condition cannot be fulfilled, the measurement can be corrected using a general angular distortion formula, as demonstrated experimentally using a Samsung Galaxy S6 edge phone display. The reflective properties of the display are characterized by measuring the spectral BRDF using a multispectral Fourier-optics viewing-angle system. The surface of a curved OLED TV has been measured. The BRDF patterns show a mirror-like behavior with an additional strong diffraction along the pixel lines and columns that affects the quality of the display when observed under parasitic lighting. These diffraction effects are very common on OLED surfaces. We finally introduce a commercial ray-tracing software package that can directly use the measured emissive and reflective properties of the display to make realistic simulations under any lighting environment.
Objective lens simultaneously optimized for pupil ghosting, wavefront delivery and pupil imaging
NASA Technical Reports Server (NTRS)
Olczak, Eugene G (Inventor)
2011-01-01
An objective lens includes multiple optical elements disposed between a first end and a second end, each optical element oriented along an optical axis. Each optical surface of the multiple optical elements provides an angle of incidence to a marginal ray that is above a minimum threshold angle. This threshold angle minimizes pupil ghosts that may enter an interferometer. The objective lens also optimizes wavefront delivery and pupil imaging onto an optical surface under test.
Atmospheric Science Data Center
2013-04-17
article title: Coccoliths in the Celtic Sea. This image is a natural-color view of the Celtic Sea and English Channel regions, and was acquired by the Multi-angle Imaging ...
Active Planning, Sensing and Recognition Using a Resource-Constrained Discriminant POMDP
2014-06-28
classes of military vehicles, with sample images shown in Fig. 1. The vehicles were captured from various angles. 4785 images with depression angles 17 ... and 30° are used for training, and 4351 images with depression angles 15° and 45° are used for testing. The azimuth angles are quantized into 12 ... selection by collecting the engine sounds for the 8 vehicle classes from YouTube. The sounds are attenuated differently in 6 view directions
Geometry of the Large Magellanic Cloud Using Multi- wavelength Photometry of Classical Cepheids
NASA Astrophysics Data System (ADS)
Deb, Sukanta; Ngeow, Chow-Choong; Kanbur, Shashi M.; Singh, Harinder P.; Wysocki, Daniel; Kumar, Subhash
2018-05-01
We determine the geometrical and viewing angle parameters of the Large Magellanic Cloud (LMC) using the Leavitt law based on a sample of more than 3500 common classical Cepheids (FU and FO) in optical (V, I), near-infrared (JHKs) and mid-infrared ([3.6] μm and [4.5] μm) photometric bands. Statistical reddening and distance modulus free from the effect of reddening to each of the individual Cepheids are obtained using the simultaneous multi-band fit to the apparent distance moduli from the analysis of the resulting Leavitt laws in these seven photometric bands. A reddening map of the LMC obtained from the analysis shows good agreement with the other maps available in the literature. Extinction free distance measurements along with the information of the equatorial coordinates (α, δ) for individual stars are used to obtain the corresponding Cartesian coordinates with respect to the plane of the sky. By fitting a plane solution of the form z = f(x, y) to the observed three dimensional distribution, the following viewing angle parameters of the LMC are obtained: inclination angle i = 25°.110 ± 0°.365, position angle of line of nodes θlon = 154°.702 ± 1°.378. On the other hand, modelling the observed three dimensional distribution of the Cepheids as a triaxial ellipsoid, the following values of the geometrical axes ratios of the LMC are obtained: 1.000 ± 0.003: 1.151 ± 0.003: 1.890 ± 0.014 with the viewing angle parameters: inclination angle of i = 11°.920 ± 0°.315 with respect to the longest axis from the line of sight and position angle of line of nodes θlon = 128°.871 ± 0°.569. The position angles are measured eastwards from north.
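To illustrate the plane-fit step that yields the first set of viewing-angle parameters above, the sketch below performs a least-squares fit of z = ax + by + c to Cartesian star coordinates and converts the coefficients into an inclination and a line-of-nodes position angle. The axis conventions (x toward east, y toward north, z toward the observer, position angle measured from north through east) are assumptions of this sketch and may differ from the paper's, which would flip signs but not magnitudes.

```python
# Hedged sketch: viewing-angle parameters from a planar fit to 3-D star
# positions, under assumed axis conventions. Data below are synthetic.
import numpy as np

def plane_viewing_angles(x, y, z):
    """Return (inclination_deg, pa_line_of_nodes_deg) from a planar fit."""
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    inclination = np.degrees(np.arctan(np.hypot(a, b)))
    # Line of nodes = intersection of the fitted plane with the sky plane z = 0;
    # its direction (b, -a) gives the position angle under the assumed axes.
    pa_lon = np.degrees(np.arctan2(b, -a)) % 180.0
    return inclination, pa_lon

# Synthetic check: a plane inclined by 25 deg recovers its input inclination
rng = np.random.default_rng(1)
x, y = rng.uniform(-2, 2, 3000), rng.uniform(-2, 2, 3000)
z = np.tan(np.radians(25.0)) * x + 0.05 * rng.standard_normal(x.size)
print(plane_viewing_angles(x, y, z))
```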
Kanamori, Yoshiaki; Ozaki, Toshikazu; Hane, Kazuhiro
2014-10-20
We fabricated reflection color filters of the three primary colors with wide viewing angles using silicon two-dimensional subwavelength gratings on the same quartz substrate. The grating periods were 400, 340, and 300 nm for red, green, and blue filters, respectively. All of the color filters had the same grating thickness of 100 nm, which enabled simple fabrication of a color filter array. Reflected colors from the red, green, and blue filters under s-polarized white-light irradiation appeared in the respective colors at incident angles from 0 to 50°. By rigorous coupled-wave analysis, the dimensions of each color filter were designed, and the calculated reflectivity was compared with the measured reflectivity.
The influence of radiographic viewing perspective and demographics on the Critical Shoulder Angle
Suter, Thomas; Popp, Ariane Gerber; Zhang, Yue; Zhang, Chong; Tashjian, Robert Z.; Henninger, Heath B.
2014-01-01
Background Accurate assessment of the critical shoulder angle (CSA) is important in clinical evaluation of degenerative rotator cuff tears. This study analyzed the influence of radiographic viewing perspective on the CSA, developed a classification system to identify malpositioned radiographs, and assessed the relationship between the CSA and demographic factors. Methods Glenoid height, width and retroversion were measured on 3D CT reconstructions of 68 cadaver scapulae. A digitally reconstructed radiograph was aligned perpendicular to the scapular plane, and retroversion was corrected to obtain a true antero-posterior (AP) view. In 10 scapulae, incremental anteversion/retroversion and flexion/extension views were generated. The CSA was measured and a clinically applicable classification system was developed to detect views with >2° change in CSA versus true AP. Results The average CSA was 33±4°. Intra- and inter-observer reliability was high (ICC≥0.81) but decreased with increasing viewing angle. Views beyond 5° anteversion, 8° retroversion, 15° flexion and 26° extension resulted in >2° deviation of the CSA compared to true AP. The classification system was capable of detecting aberrant viewing perspectives with sensitivity of 95% and specificity of 53%. Correlations between glenoid size and CSA were small (R≤0.3), and CSA did not vary by gender (p=0.426) or side (p=0.821). Conclusions The CSA was most susceptible to malposition in ante/retroversion. Deviations as little as 5° in anteversion resulted in a CSA >2° from true AP. A new classification system refines the ability to collect true AP radiographs of the scapula. The CSA was unaffected by demographic factors. PMID:25591458
Microwave Brightness Temperatures of Tilted Convective Systems
NASA Technical Reports Server (NTRS)
Hong, Ye; Haferman, Jeffrey L.; Olson, William S.; Kummerow, Christian D.
1998-01-01
Aircraft and ground-based radar data from the Tropical Ocean and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA COARE) show that convective systems are not always vertical. Instead, many are tilted from vertical. Satellite passive microwave radiometers observe the atmosphere at a viewing angle. For example, the Special Sensor Microwave/Imager (SSM/I) on Defense Meteorological Satellite Program (DMSP) satellites and the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) on the TRMM satellite have an incident angle of about 50 deg. Thus, the brightness temperature measured from one direction of tilt may be different from that viewed from the opposite direction due to the different optical depth. This paper presents an investigation of passive microwave brightness temperatures of tilted convective systems. To account for the effect of tilt, a 3-D backward Monte Carlo radiative transfer model has been applied to a simple tilted cloud model and a dynamically evolving cloud model to derive the brightness temperature. The radiative transfer results indicate that brightness temperature varies when the viewing angle changes because of the different optical depth. The tilt increases the displacement between the high 19 GHz brightness temperature (Tb19), due to liquid emission from lower levels of the cloud, and the low 85 GHz brightness temperature (Tb85), due to ice scattering from upper levels of the cloud. As the resolution degrades, the difference in brightness temperature due to the change of viewing angle decreases dramatically. The dislocation between Tb19 and Tb85, however, remains prominent.
Ozmeric, A; Yucens, M; Gultaç, E; Açar, H I; Aydogan, N H; Gül, D; Alemdaroglu, K B
2015-05-01
We hypothesised that the anterior and posterior walls of the body of the first sacral vertebra could be visualised with two different angles of inlet view, owing to the conical shape of the sacrum. Six dry male cadavers with complete pelvic rings and eight dry sacrums with K-wires were used to study the effect of canting (angling) the C-arm of the fluoroscope towards the head in 5° increments from 10° to 55°. Fluoroscopic images were taken in each position. Anterior and posterior angles of inclination were measured between the upper sacrum and the vertical line on the lateral view. Three authors separately selected the clearest image for overlapping anterior cortices and the upper sacral canal in the cadaveric models. The dry bone and K-wire models were scored by the authors, who checked whether each K-wire was inside or outside the bone. In the dry bone models, the mean score for the inlet position matching the relevant anterior or posterior inclination was 8.875 (standard deviation (sd) 0.35), compared with -5.75 (sd 4.59) for the inlet position of the opposite inclination. We found that two different inlet views, based on the anterior and posterior inclination angles of the sacrum, should be used separately to evaluate the borders of the body of the sacrum during placement of iliosacral screws. ©2015 The British Editorial Society of Bone & Joint Surgery.
2010-05-26
NASA's Cassini spacecraft looks toward the limb of Saturn and, on the right of this image, views part of the rings through the planet's atmosphere. Saturn's atmosphere can distort the view of the rings from some angles.
Atmospheric Science Data Center
2014-05-15
article title: Los Alamos, New Mexico. Multi-angle views of the Fire in Los Alamos, New Mexico, May 9, 2000. These true-color images covering north-central New Mexico ...
A Low-Cost PC-Based Image Workstation for Dynamic Interactive Display of Three-Dimensional Anatomy
NASA Astrophysics Data System (ADS)
Barrett, William A.; Raya, Sai P.; Udupa, Jayaram K.
1989-05-01
A system for interactive definition, automated extraction, and dynamic interactive display of three-dimensional anatomy has been developed and implemented on a low-cost PC-based image workstation. An iconic display is used for staging predefined image sequences through specified increments of tilt and rotation over a solid viewing angle. Use of a fast processor facilitates rapid extraction and rendering of the anatomy into predefined image views. These views are formatted into a display matrix in a large image memory for rapid interactive selection and display of arbitrary spatially adjacent images within the viewing angle, thereby providing motion parallax depth cueing for efficient and accurate perception of true three-dimensional shape, size, structure, and spatial interrelationships of the imaged anatomy. The visual effect is that of holding and rotating the anatomy in the hand.
Challenging Popular Media's Control by Teaching Critical Viewing.
ERIC Educational Resources Information Center
Couch, Richard A.
The purpose of this paper is to express the importance of visual/media literacy and the teaching of critical television viewing. An awareness of the properties and characteristics of television--including camera angles and placement, editing, and emotionally involving subject matter--aids viewers in the critical viewing process. The knowledge of…
Multiple Emission Angle Surface-Atmosphere Separations of MGS Thermal Emission Spectrometer Data
NASA Technical Reports Server (NTRS)
Bandfield, J. L.; Smith, M. D.
2001-01-01
Multiple emission angle observations taken by MGS-TES have been used to derive atmospheric opacities and surface temperatures and emissivities with increased accuracy and wavelength coverage. Martian high albedo region surface spectra have now been isolated. Additional information is contained in the original extended abstract.
High-resolution AM LCD development for avionic applications
NASA Astrophysics Data System (ADS)
Lamberth, Larry S.; Laddu, Ravindra R.; Harris, Doug; Sarma, Kalluri R.; Li, Wang-Yang; Chien, C. C.; Chu, C. Y.; Lee, C. S.; Kuo, Chen-Lung
2003-09-01
For the first time, an avionic grade MVA AM LCD with wide viewing angle has been developed for use in either landscape or portrait mode. The development of a high resolution Multi-domain Vertical Alignment (MVA) Active Matrix Liquid Crystal Display (AM LCD) is described. Challenges met in this development include achieving the required performance with high luminance and sunlight readability while meeting stringent optical (image quality) and environmental performance requirements of avionics displays. In this paper the optical and environmental performance of this high resolution 14.1" MVA-AM-LCD are discussed and some performance comparisons to conventional AM-LCDs are documented. This AM LCD has found multiple Business Aviation and Military display applications and cockpit pictures are presented.
High-efficiency directional backlight design for an automotive display.
Chen, Bo-Tsuen; Pan, Jui-Wen
2018-06-01
We propose a high-efficiency directional backlight module (DBM) for automotive display applications. The DBM is composed of light sources, a light guide plate (LGP), and an optically patterned plate (OPP). The LGP has a collimator on the input surface that serves to control the angle of the emitted light in the horizontal direction. The OPP has an inverse prism to adjust the light emission angle in the vertical direction. The DBM has a simple structure and high optical efficiency. Compared with conventional backlight systems, the DBM has higher optical efficiency and a suitable viewing angle: normalized on-axis luminous intensity improves by a factor of 2.6 and optical efficiency by a factor of two. The viewing angles are 100° in the horizontal direction and 35° in the vertical direction. The angle of half-luminous intensity is 72° in the horizontal direction and 20° in the vertical direction. The uniformity of the illuminance reaches 82%. The DBM is suitable for use in the center information displays of automobiles.
NASA Astrophysics Data System (ADS)
Yi, Bo; Shen, Huifang
2018-01-01
Non-iridescent structural colors and the lotus effect, both ubiquitous in nature, provide great inspiration for artificially developing angle-independent, highly hydrophobic structurally colored films. To this end, a facile strategy is put forward for achieving superhydrophobic structurally colored films with wide viewing angles and high visibility based on bumpy melanin-like polydopamine-coated polystyrene particles. Here, hierarchical and amorphous structures are assembled in a self-driven manner owing to the particles' protrusive surfaces. The superhydrophobicity of the structurally colored films, with a water contact angle up to 151°, is realized by combining the hierarchical surface roughness with a dip-coating process in polydimethylsiloxane-hexane solution, while the angle independence of the films is ascribed to the amorphous arrays. In addition, benefiting from the intrinsic light-absorbing property and high refractive index of polydopamine, the visibility of the as-prepared colored films is fundamentally enhanced. Moreover, the mechanical robustness of the films is considerably boosted by incorporating 3-aminopropyltriethoxysilane. This fabrication strategy might provide an opportunity for promoting the open-air application of structurally colored coatings.
Impact of basic angle variations on the parallax zero point for a scanning astrometric satellite
NASA Astrophysics Data System (ADS)
Butkevich, Alexey G.; Klioner, Sergei A.; Lindegren, Lennart; Hobbs, David; van Leeuwen, Floor
2017-07-01
Context. Determination of absolute parallaxes by means of a scanning astrometric satellite such as Hipparcos or Gaia relies on the short-term stability of the so-called basic angle between the two viewing directions. Uncalibrated variations of the basic angle may produce systematic errors in the computed parallaxes. Aims: We examine the coupling between a global parallax shift and specific variations of the basic angle, namely those related to the satellite attitude with respect to the Sun. Methods: The changes in observables produced by small perturbations of the basic angle, attitude, and parallaxes were calculated analytically. We then looked for a combination of perturbations that had no net effect on the observables. Results: In the approximation of infinitely small fields of view, it is shown that certain perturbations of the basic angle are observationally indistinguishable from a global shift of the parallaxes. If these kinds of perturbations exist, they cannot be calibrated from the astrometric observations but will produce a global parallax bias. Numerical simulations of the astrometric solution, using both direct and iterative methods, confirm this theoretical result. For a given amplitude of the basic angle perturbation, the parallax bias is smaller for a larger basic angle and a larger solar aspect angle. In both these respects Gaia has a more favourable geometry than Hipparcos. In the case of Gaia, internal metrology is used to monitor basic angle variations. Additionally, Gaia has the advantage of detecting numerous quasars, which can be used to verify the parallax zero point.
MODIS Solar Diffuser On-Orbit Degradation Characterization Using Improved SDSM Screen Modeling
NASA Technical Reports Server (NTRS)
Chen, H.; Xiong, Xiaoxiong; Angal, Amit Avinash; Wang, Z.; Wu, A.
2016-01-01
The Solar Diffuser (SD) is used for the MODIS reflective solar bands (RSB) calibration. An on-board Solar Diffuser Stability Monitor (SDSM) tracks the degradation of its on-orbit bi-directional reflectance factor (BRF). To best match the SDSM detector signals from its Sun view and SD view, a fixed attenuation screen is placed in its Sun-view path, where the responses show ripples of up to 10%, much larger than the design expectation. Algorithms have been developed since the beginning of the mission to mitigate the impacts of these ripples. In recent years, a look-up-table (LUT) based approach has been implemented to account for them. The LUT modeling over elevation and azimuth angles is constructed from detector 9 (D9) SDSM observations from the early MODIS mission. The responses of the other detectors are normalized to D9 to reduce the ripples observed in the Sun-view data. The accuracy of the degradation estimate for all detectors therefore depends on how well the D9 response is approximated. After multiple years of operation (Terra: 16 years; Aqua: 14 years), the degradation behavior of all detectors can be monitored on their own. This paper revisits the LUT modeling and proposes a dynamic scheme to build a LUT independently for each detector. Further refinement in the Sun-view screen characterization will be highlighted to ensure the accuracy of the degradation estimates. Results of both Terra and Aqua SD on-orbit degradation are derived from the improved modeling and curve-fitting strategy.
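For intuition, the sketch below tabulates a Sun-view screen ripple on an (elevation, azimuth) grid for a single detector and divides it out of a later observation by interpolation. The grid ranges, spacing, and the synthetic ripple are placeholders standing in for the early-mission measurements used operationally; none of the numbers are MODIS calibration values.

```python
# Hedged sketch of a per-detector screen-transmission lookup table:
# tabulate the ripple over solar (elevation, azimuth) and divide it out
# of a later Sun-view signal. All numbers are synthetic placeholders.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

elev = np.linspace(14.0, 24.0, 51)           # solar elevation grid (deg)
azim = np.linspace(10.0, 40.0, 121)          # solar azimuth grid (deg)
E, A = np.meshgrid(elev, azim, indexing="ij")
# Synthetic stand-in for the measured screen transmission with ~10% ripples
ripple_lut = 1.0 + 0.05 * np.sin(2 * np.pi * A / 1.5) * np.cos(2 * np.pi * E / 2.0)

screen = RegularGridInterpolator((elev, azim), ripple_lut)

def corrected_sun_signal(raw_signal, sol_elev, sol_azim):
    """Divide out the tabulated screen ripple at the observation geometry."""
    return raw_signal / screen([[sol_elev, sol_azim]])[0]

print(corrected_sun_signal(raw_signal=0.98, sol_elev=18.3, sol_azim=22.7))
```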
The roles of perceptual and conceptual information in face recognition.
Schwartz, Linoy; Yovel, Galit
2016-11-01
The representation of familiar objects is comprised of perceptual information about their visual properties as well as the conceptual knowledge that we have about them. What is the relative contribution of perceptual and conceptual information to object recognition? Here, we examined this question by designing a face familiarization protocol during which participants were either exposed to rich perceptual information (viewing each face in different angles and illuminations) or with conceptual information (associating each face with a different name). Both conditions were compared with single-view faces presented with no labels. Recognition was tested on new images of the same identities to assess whether learning generated a view-invariant representation. Results showed better recognition of novel images of the learned identities following association of a face with a name label, but no enhancement following exposure to multiple face views. Whereas these findings may be consistent with the role of category learning in object recognition, face recognition was better for labeled faces only when faces were associated with person-related labels (name, occupation), but not with person-unrelated labels (object names or symbols). These findings suggest that association of meaningful conceptual information with an image shifts its representation from an image-based percept to a view-invariant concept. They further indicate that the role of conceptual information should be considered to account for the superior recognition that we have for familiar faces and objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Modeling digital breast tomosynthesis imaging systems for optimization studies
NASA Astrophysics Data System (ADS)
Lau, Beverly Amy
Digital breast tomosynthesis (DBT) is a new imaging modality for breast imaging. In tomosynthesis, multiple images of the compressed breast are acquired at different angles, and the projection view images are reconstructed to yield images of slices through the breast. One of the main problems to be addressed in the development of DBT is the optimal parameter settings to obtain images ideal for detection of cancer. Since it would be unethical to irradiate women multiple times to explore potentially optimum geometries for tomosynthesis, it is ideal to use a computer simulation to generate projection images. Existing tomosynthesis models have modeled scatter and detector response without accounting for the oblique angles of incidence that tomosynthesis introduces. Moreover, these models frequently use geometry-specific physical factors measured from real systems, which severely limits the robustness of their algorithms for optimization. The goal of this dissertation was to design the framework for a computer simulation of tomosynthesis that would produce images that are sensitive to changes in acquisition parameters, so that an optimization study would be feasible. A computer physics simulation of the tomosynthesis system was developed. The x-ray source was modeled as a polychromatic spectrum based on published spectral data, and the inverse-square law was applied. Scatter was applied using a convolution method with angle-dependent scatter point spread functions (sPSFs), followed by scaling using an angle-dependent scatter-to-primary ratio (SPR). Monte Carlo simulations were used to generate sPSFs for a 5-cm breast with a 1-cm air gap. Detector effects were included through geometric propagation of the image onto layers of the detector, which were blurred using depth-dependent detector point-spread functions (PRFs). Depth-dependent PRFs were calculated every 5 microns through a 200-micron-thick CsI detector using Monte Carlo simulations. Electronic noise was added as Gaussian noise as a last step of the model. The sPSFs and detector PRFs were verified to match published data, and the noise power spectrum (NPS) from simulated flat-field images was shown to match empirically measured data from a digital mammography unit. A novel anthropomorphic software breast phantom was developed for 3D imaging simulation. Projection view images of the phantom were shown to have similar structure to real breasts in the spatial frequency domain, using the power-law exponent beta to quantify tissue complexity. The physics simulation and computer breast phantom were used together, following methods from a published study with real tomosynthesis images of real breasts. The simulation model and 3D numerical breast phantoms were able to reproduce the trends in the experimental data. This result demonstrates the ability of the tomosynthesis physics model to generate images sensitive to changes in acquisition parameters.
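A minimal sketch of the scatter-and-noise stages described above is given below: convolve a primary-only projection with a scatter PSF, rescale the result to a target scatter-to-primary ratio, and add Gaussian electronic noise. The Gaussian kernel, SPR value, and noise level are placeholder assumptions standing in for the Monte-Carlo-derived, angle-dependent quantities used in the dissertation.

```python
# Hedged sketch of scatter convolution, SPR scaling, and electronic noise
# for a simulated projection. Kernel, SPR, and noise level are placeholders.
import numpy as np
from scipy.signal import fftconvolve

def add_scatter_and_noise(primary, spsf, spr, electronic_sigma, rng):
    """primary: 2-D primary-only projection; spsf: normalized scatter PSF."""
    scatter_shape = fftconvolve(primary, spsf, mode="same")
    # Scale the convolved estimate so that total scatter / total primary = SPR
    scatter = scatter_shape * (spr * primary.sum() / scatter_shape.sum())
    return primary + scatter + rng.normal(0.0, electronic_sigma, primary.shape)

rng = np.random.default_rng(42)
primary = np.full((256, 256), 1000.0)
primary[96:160, 96:160] = 400.0                      # a simple attenuating insert

yy, xx = np.mgrid[-32:33, -32:33]
spsf = np.exp(-(xx**2 + yy**2) / (2 * 12.0**2))      # stand-in scatter PSF
spsf /= spsf.sum()

proj = add_scatter_and_noise(primary, spsf, spr=0.4, electronic_sigma=5.0, rng=rng)
print(proj.mean(), proj.std())
```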
Multiple-Fiber-Optic Probe For Light-Scattering Measurements
NASA Technical Reports Server (NTRS)
Dhadwal, Harbans Singh; Ansari, Rafat R.
1996-01-01
Multiple-fiber-optical probe developed for use in measuring light scattered at various angles from specimens of materials. Designed for both static and dynamic light-scattering measurements of colloidal dispersions. Probe compact, rugged unit containing no moving parts and remains stationary during operation. Not restricted to operation in controlled, research-laboratory environment. Positioned inside or outside light-scattering chamber. Provides simultaneous measurements at small angular intervals over range of angles, made to include small scattering angles by orienting probe in appropriate direction.
Solar axion search technique with correlated signals from multiple detectors
Xu, Wenqin; Elliott, Steven R.
2017-01-25
The coherent Bragg scattering of photons converted from solar axions inside crystals would boost the signal for axion-photon coupling, enhancing experimental sensitivity for these hypothetical particles. Knowledge of the scattering angle of solar axions with respect to the crystal lattice is required to make theoretical predictions of signal strength. Hence, both the lattice axis angle within a crystal and the absolute angle between the crystal and the Sun must be known. In this paper, we examine how the experimental sensitivity changes with respect to various experimental parameters. We also demonstrate that, in a multiple-crystal setup, knowledge of the relative axis orientation between multiple crystals can improve the experimental sensitivity, or equivalently, relax the precision required on the absolute solar angle measurement. However, if absolute angles of all crystal axes are measured, we find that a precision of 2°-4° will suffice for an energy resolution of σ_E = 0.04E and a flat background. Lastly, we also show that, given a minimum number of detectors, a signal model averaged over angles can substitute for precise crystal angular measurements, with some loss of sensitivity.
McNabb, Ryan P.; Challa, Pratap; Kuo, Anthony N.; Izatt, Joseph A.
2015-01-01
Clinically, gonioscopy is used to provide en face views of the ocular angle. The angle has been imaged with optical coherence tomography (OCT) through the corneoscleral limbus, but OCT has not previously been able to image the angle from within the ocular anterior chamber. We developed a novel gonioscopic OCT system that images the angle circumferentially from inside the eye through a custom, radially symmetric, gonioscopic contact lens. We present, to our knowledge, the first 360° circumferential volumes (two normal subjects, two subjects with pathology) of peripheral iris and iridocorneal angle structures obtained via an internal approach not typically available in the clinic. PMID:25909021
Distinguishing Clouds from Ice over the East Siberian Sea, Russia
NASA Technical Reports Server (NTRS)
2002-01-01
As a consequence of its capability to retrieve cloud-top elevations, stereoscopic observations from the Multi-angle Imaging SpectroRadiometer (MISR) can discriminate clouds from snow and ice. The central portion of Russia's East Siberian Sea, including one of the New Siberian Islands, Novaya Sibir, are portrayed in these views from data acquired on May 28, 2002. The left-hand image is a natural color view from MISR's nadir camera. On the right is a height field retrieved using automated computer processing of data from multiple MISR cameras. Although both clouds and ice appear white in the natural color view, the stereoscopic retrievals are able to identify elevated clouds based on the geometric parallax which results when they are observed from different angles. Owing to their elevation above sea level, clouds are mapped as green and yellow areas, whereas land, sea ice, and very low clouds appear blue and purple. Purple, in particular, denotes elevations very close to sea level. The island of Novaya Sibir is located in the lower left of the images. It can be identified in the natural color view as the dark area surrounded by an expanse of fast ice. In the stereo map the island appears as a blue region indicating its elevation of less than 100 meters above sea level. Areas where the automated stereo processing failed due to lack of sufficient spatial contrast are shown in dark gray. The northern edge of the Siberian mainland can be found at the very bottom of the panels, and is located a little over 250 kilometers south of Novaya Sibir. Pack ice containing numerous fragmented ice floes surrounds the fast ice, and narrow areas of open ocean are visible. The East Siberian Sea is part of the Arctic Ocean and is ice-covered most of the year. The New Siberian Islands are almost always covered by snow and ice, and tundra vegetation is very scant. Despite continuous sunlight from the end of April until the middle of August, the ice between the island and the mainland typically remains until August or September. The Multi-angle Imaging SpectroRadiometer views almost the entire Earth every 9 days. These images were acquired during Terra orbit 12986 and cover an area of about 380 kilometers x 1117 kilometers. They utilize data from blocks 24 to 32 within World Reference System-2 path 117. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION ...
7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION EQUIPMENT AND STORAGE CABINET. - Variable Angle Launcher Complex, Camera Stations, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA
3. VAL CONTROL STATION, VIEW OF CONTROL PANELS SHOWING MAIN ...
3. VAL CONTROL STATION, VIEW OF CONTROL PANELS SHOWING MAIN PRESSURE GAUGES, LOOKING NORTH. - Variable Angle Launcher Complex, Control Station, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA
Regional Changes in Earth's Color and Texture as Observed From Space Over a 15-Year Period
NASA Technical Reports Server (NTRS)
Zhao, Guangyu; Di Girolamo, Larry; Diner, David J.; Bruegge, Carol J.; Mueller, Kevin J.; Wu, Dong L.
2016-01-01
Earth-observing satellites provide global observations of many geophysical variables. As these variables are derived from measured radiances, the underlying radiance data are the most reliable sources of information for change detection. Here, we identify statistically significant trends in the color and spatial texture of the Earth as viewed from multiple directions from the Multi-angle Imaging SpectroRadiometer (MISR), which has been sampling the angular distribution of scattered sunlight since 2000. Globally, our results show that the Earth has been appearing relatively bluer (up to 1.6 % per decade from both nadir and oblique views) and smoother (up to 1.5 % per decade only from oblique views) over the past 15 years. The magnitude of the global blueing trends is comparable to that of uncertainties in radiometric calibration stability. Regional shifts in color and texture, which are significantly larger than global means, are observed, particularly over polar regions, along the boundaries of the subtropical highs, the tropical western Pacific, Southwestern Asia, and Australia. We demonstrate that the large regional trends cannot be explained either by uncertainties in radiometric calibration or variability in total or spectral solar irradiance; hence, they reflect changes internal to the Earth's climate system. The 15-year-mean true color composites and texture images of the Earth at both nadir and oblique views are also presented for the first time.
Arabi, Hossein; Kamali Asl, Ali Reza; Ay, Mohammad Reza; Zaidi, Habib
2015-07-01
The purpose of this work is to evaluate the impact of optimization of magnification on the performance parameters of the variable resolution X-ray (VRX) CT scanner. A realistic model based on an actual VRX CT scanner was implemented in the GATE Monte Carlo simulation platform. To evaluate the influence of system magnification, the spatial resolution, field-of-view (FOV), and scatter-to-primary ratio of the scanner were estimated for both fixed and optimum object magnification at each detector rotation angle. These performance parameters were compared angle by angle to determine the appropriate object position at each opening half angle. Optimization of magnification resulted in a trade-off between spatial resolution and FOV of the scanner at opening half angles of 90°-12°, where the spatial resolution increased by up to 50% and the scatter-to-primary ratio decreased from 4.8% to 3.8% at a detector angle of about 90° for the same FOV and X-ray energy spectrum. The disadvantage of magnification optimization at these angles is the significant reduction of the FOV (up to 50%). Moreover, magnification optimization was clearly beneficial for opening half angles below 12°, improving the spatial resolution from 7.5 cy/mm to 20 cy/mm. Meanwhile, the FOV increased by more than 50% at these angles. It can be concluded that optimization of magnification is essential for opening half angles below 12°. For opening half angles between 90° and 12°, the VRX CT scanner magnification should be set according to the desired spatial resolution and FOV. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Measuring the Radius of the Earth from a Mountain Top Overlooking the Ocean
ERIC Educational Resources Information Center
Gangadharan, Dhevan
2009-01-01
A clear view of the ocean may be used to measure the radius of the Earth. To an observer looking out at the ocean, the horizon will always form some angle θ with the local horizontal plane. As the observer's elevation "h" increases, so does the angle θ. From measurements of the elevation "h" and the angle θ,…
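The geometry implied here is a line of sight tangent to a sphere, which makes the dip angle θ equal to the central angle between the observer and the horizon point. A minimal worked derivation (standard geometry, not quoted from the article) gives

\[
\cos\theta = \frac{R}{R+h} \quad\Longrightarrow\quad R = \frac{h\,\cos\theta}{1-\cos\theta} \approx \frac{2h}{\theta^{2}} \ \text{ for small } \theta .
\]

For example, an observer at h = 1000 m who measures a dip of about 1° (0.0175 rad) infers R ≈ 2(1000)/(0.0175)² ≈ 6.5 × 10⁶ m, close to the accepted value.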
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Gregg, Watson W.
1992-01-01
Due to range safety considerations, the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) ocean color instrument may be required to be launched into a near-noon descending node, as opposed to the ascending node used by the predecessor sensor, the Coastal Zone Color Scanner (CZCS). The relative importance of ascending versus descending near-noon orbits was assessed here to determine if descending node will meet the scientific requirements of SeaWiFS. Analyses focused on ground coverage, local times of coverage, solar and viewing geometries (zenith and azimuth angles), and sun glint. Differences were found in the areas covered by individual orbits, but were not important when taken over a 16-day repeat time. Local time of coverage was also different: for ascending node orbits the Northern Hemisphere was observed in the morning and the Southern Hemisphere in the afternoon, while for descending node orbits the Northern Hemisphere was observed in the afternoon and the Southern in the morning. There were substantial differences in solar azimuth and spacecraft azimuth angles both at equinox and at the Northern Hemisphere summer solstice. Negligible differences in solar and spacecraft zenith angles, relative azimuth angles, and sun glint were obtained at the equinox. However, large differences were found in solar zenith angles, relative azimuths, and sun glint for the solstice. These differences appeared to compensate across the scan: an increase in sun glint in descending node relative to ascending node on the western part of the scan was offset by a decrease on the eastern part of the scan. Thus, no advantage or disadvantage could be conferred upon either ascending node or descending node for noon orbits. Analyses were also performed for ascending and descending node orbits that deviated from a noon equator crossing time. For ascending node, afternoon orbits produced the lowest mean solar zenith angles in the Northern Hemisphere, and morning orbits produced the lowest angles for the Southern Hemisphere. For descending node, morning orbits produced the lowest mean solar zenith angles for the Northern Hemisphere; afternoon orbits produced the lowest angles for the Southern Hemisphere.
NASA Technical Reports Server (NTRS)
2002-01-01
One of the more destructive cyclones to emerge from the northern hemisphere 2002 summer storm season was Typhoon Sinlaku. Several attributes of this storm event are portrayed in these data products from the Multi-angle Imaging SpectroRadiometer. The images were acquired on September 5, when the western portion of the storm was situated over the Okinawan island chain. Over the next few days it moved west-northwest, sweeping over Taiwan before making landfall along China's Zhejiang province on the 7th. The typhoon forced hundreds of thousands of people from their homes, caused major power outages, and at least 26 people were reported dead or missing before the storm weakened as it moved inland. While the nature and formation of individual storm events are relatively well understood, the influence of clouds on climate is difficult to assess due to the variable nature of cloud cover at various altitudes. MISR's data products are designed to help understand these influences. Typhoon Sinlaku is shown at left as a natural-color view observed by MISR's vertical-viewing (nadir) camera. The center panel shows the cloud-top height field derived using automated stereoscopic processing of data from multiple MISR cameras. Relative height variations, such as the clearing within the storm's eye, are well represented. Areas where heights could not be retrieved are shown in dark gray. Clouds have a significant influence on the global radiation balance of the Earth's atmosphere, and the improvement of climate models requires more accurate information on how different types of clouds influence Earth's energy budget. One measure of this influence is albedo, which is the amount of sunlight reflected back to space divided by the amount of incident sunlight. Bright objects have high albedo. Retrieved local albedo values for Typhoon Sinlaku are shown at right. Generation of this product is dependent on observed cloud radiances as a function of viewing angle and the cloud height field. Over the short distances (2.2 kilometers) that MISR's local albedo product is generated, values can be greater than 1.0 due to the contributions from the sides of the clouds. Areas where albedo could not be retrieved are shown in dark gray. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and views almost the entire globe every 9 days. This image is a portion of the data acquired during Terra orbit 14442, and covers an area of about 380 kilometers x 1408 kilometers. It utilizes data from blocks 65 to 74 within World Reference System-2 path 113.
High angle view of Apollo 14 space vehicle on way to Pad A
1970-11-09
S70-54127 (9 Nov. 1970) --- A high-angle view at Launch Complex 39, Kennedy Space Center (KSC), showing the Apollo 14 (Spacecraft 110/Lunar Module 8/Saturn 509) space vehicle on the way from the Vehicle Assembly Building (VAB) to Pad A. The Saturn V stack and its mobile launch tower sit atop a huge crawler-transporter. The Apollo 14 crewmen will be astronauts Alan B. Shepard Jr., commander; Stuart A. Roosa, command module pilot; and Edgar D. Mitchell, lunar module pilot.
High angle view of Apollo 14 space vehicle on way to Pad A
1970-11-09
S70-54119 (9 Nov. 1970) --- A high-angle view at Launch Complex 39, Kennedy Space Center (KSC), showing the Apollo 14 (Spacecraft 110/Lunar Module 8/Saturn 509) space vehicle on the way from the Vehicle Assembly Building (VAB) to Pad A. The Saturn V stack and its mobile launch tower sit atop a huge crawler-transporter. The Apollo 14 crewmen will be astronauts Alan B. Shepard Jr., commander; Stuart A. Roosa, command module pilot; and Edgar D. Mitchell, lunar module pilot.
Inventory and monitoring of natural vegetation and related resources in an arid environment
NASA Technical Reports Server (NTRS)
Schrumpf, B. J. (Principal Investigator); Johnson, J. R.; Mouat, D. A.
1973-01-01
The author has identified the following significant results. A vegetation classification has been established for the test site (approx. 8300 sq km); 31 types are recognized. Some relationships existing among vegetation types and associated terrain features have been characterized. Terrain features can be used to discriminate vegetation types. Macrorelief interpretations on ERTS-1 imagery can be performed with greater accuracy when using high sun angle stereoscopic viewing rather than low sun angle monoscopic viewing. Some plant phenological changes are being recorded by the MSS system.
Scheduling Randomly-Deployed Heterogeneous Video Sensor Nodes for Reduced Intrusion Detection Time
NASA Astrophysics Data System (ADS)
Pham, Congduc
This paper proposes to use video sensor nodes to provide an efficient intrusion detection system. We use a scheduling mechanism that takes into account the criticality of the surveillance application and present a performance study of various cover-set construction strategies that account for cameras with heterogeneous angles of view, including those with very small angles of view. We show by simulation how a dynamic criticality management scheme can provide fast event detection for mission-critical surveillance applications by increasing the network lifetime and providing a low stealth time of intrusions.
Leaf bidirectional reflectance and transmittance in corn and soybean
NASA Technical Reports Server (NTRS)
Walter-Shea, E. A.; Norman, J. M.; Blad, B. L.
1989-01-01
Bidirectional optical properties of leaves must be adequately characterized to develop comprehensive and reliably predictive canopy radiative-transfer models. Directional reflectance and transmittance factors of individual corn and soybean leaves were measured at source incidence angles (SIAs) 20, 45, and 70 deg and numerous view angles in the visible and NIR. Bidirectional reflectance distributions changed with increasing SIA, with forward scattering most pronounced at 70 deg. Directional-hemispherical reflectance generally increased and transmittance decreased with increased SIA. Directional-hemispherical reflectance factors were higher and transmittances were lower than the nadir-viewed reflectance component.
Chen, Juan; Sperandio, Irene; Goodale, Melvyn Alan
2018-03-19
Our brain integrates information from multiple modalities in the control of behavior. When information from one sensory source is compromised, information from another source can compensate for the loss. What is not clear is whether the nature of this multisensory integration and the re-weighting of different sources of sensory information are the same across different control systems. Here, we investigated whether proprioceptive distance information (position sense of body parts) can compensate for the loss of visual distance cues that support size constancy in perception (mediated by the ventral visual stream) [1, 2] versus size constancy in grasping (mediated by the dorsal visual stream) [3-6], in which the real-world size of an object is computed despite changes in viewing distance. We found that there was perfect size constancy in both perception and grasping in a full-viewing condition (lights on, binocular viewing) and that size constancy in both tasks was dramatically disrupted in the restricted-viewing condition (lights off; monocular viewing of the same but luminescent object through a 1-mm pinhole). Importantly, in the restricted-viewing condition, proprioceptive cues about viewing distance originating from the non-grasping limb (experiment 1) or the inclination of the torso and/or the elbow angle of the grasping limb (experiment 2) compensated for the loss of visual distance cues to enable a complete restoration of size constancy in grasping but only a modest improvement of size constancy in perception. This suggests that the weighting of different sources of sensory information varies as a function of the control system being used. Copyright © 2018 Elsevier Ltd. All rights reserved.
Polarized bow shocks reveal features of the winds and environments of massive stars
NASA Astrophysics Data System (ADS)
Shrestha, Manisha
2018-01-01
Massive stars strongly affect their surroundings through their energetic stellar winds and deaths as supernovae. The bow shock structures created by fast-moving massive stars contain important information about the winds and ultimate fates of these stars as well as their local interstellar medium (ISM). Since bow shocks are aspherical, the light scattered in the dense shock material becomes polarized. Analyzing this polarization reveals details of the bow shock geometry as well as the composition, velocity, density, and albedo of the scattering material. With these quantities, we can constrain the properties of the stellar wind and thus the evolutionary state of the star, as well as the dust composition of the local ISM. In my dissertation research, I use a Monte Carlo radiative transfer code that I optimized to simulate the polarization signatures produced by both resolved and unresolved stellar wind bow shocks (SWBS) illuminated by a central star and by shock emission. I derive bow shock shapes and densities from published analytical calculations and smooth particle hydrodynamic (SPH) models. In the case of the analytical SWBS and electron scattering, I find that higher optical depths produce higher polarization and position angle rotations at specific viewing angles compared to theoretical predictions for low optical depths. This is due to the geometrical properties of the bow shock combined with multiple scattering effects. For dust scattering, the polarization signature is strongly affected by wavelength, dust grain properties, and viewing angle. The behavior of the polarization as a function of wavelength in these cases can distinguish among different dust models for the local ISM. In the case of SPH density structures, I investigate how the polarization changes as a function of the evolutionary phase of the SWBS. My dissertation compares these simulations with polarization data from Betelgeuse and other massive stars with bow shocks. I discuss the implications of these models for the stellar winds and interstellar environments of these influential objects.
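For orientation only (the dissertation's results come from full Monte Carlo radiative transfer), the optically thin, single-scattering electron-scattering limit has the textbook form

\[
P(\Theta)=\frac{1-\cos^{2}\Theta}{1+\cos^{2}\Theta},
\]

where Θ is the angle between the incident and scattered directions. Polarization is maximal at Θ = 90° and vanishes in the forward and backward directions, which is why the viewing angle of an aspherical bow shock leaves a strong imprint on the net polarization and position angle once geometry and multiple scattering are accounted for.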
Tropical Cyclone Monty Strikes Western Australia
NASA Technical Reports Server (NTRS)
2004-01-01
The Multi-angle Imaging SpectroRadiometer (MISR) acquired these natural color images and cloud top height measurements for Monty before and after the storm made landfall over the remote Pilbara region of Western Australia, on February 29 and March 2, 2004 (shown as the left and right-hand image sets, respectively). On February 29, Monty was upgraded to category 4 cyclone status. After traveling inland about 300 kilometers to the south, the cyclonic circulation had decayed considerably, although category 3 force winds were reported on the ground. Some parts of the drought-affected Pilbara region received more than 300 millimeters of rainfall, and serious and extensive flooding has occurred. The natural color images cover much of the same area, although the right-hand panels are offset slightly to the east. Automated stereoscopic processing of data from multiple MISR cameras was utilized to produce the cloud-top height fields. The distinctive spatial patterns of the clouds provide the necessary contrast to enable automated feature matching between images acquired at different view angles. The height retrievals are at this stage uncorrected for the effects of the high winds associated with cyclone rotation. Areas where heights could not be retrieved are shown in dark gray. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbits 22335 and 22364. The panels cover an area of about 380 kilometers x 985 kilometers, and utilize data from blocks 105 to 111 within World Reference System-2 paths 115 and 113. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
NASA Astrophysics Data System (ADS)
Xu, F.; Dubovik, O.; Zhai, P.; Kalashnikova, O. V.; Diner, D. J.
2015-12-01
The Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) [1] has been flying aboard the NASA ER-2 high altitude aircraft since October 2010. In step-and-stare operation mode, AirMSPI typically acquires observations of a target area at 9 view angles between ±67° off the nadir. Its spectral channels are centered at 355, 380, 445, 470*, 555, 660*, and 865* nm, where the asterisk denotes the polarimetric bands. In order to retrieve information from the AirMSPI observations, we developed an efficient and flexible retrieval code that can jointly retrieve aerosol properties and water-leaving radiance. The forward model employs a coupled Markov Chain (MC) [2] and adding/doubling [3] radiative transfer method which is fully linearized and integrated with a multi-patch retrieval algorithm to obtain aerosol and water-leaving radiance/Chl-a information. Various constraints are imposed to improve convergence and retrieval stability. We tested the aerosol and water-leaving radiance retrievals using the AirMSPI radiance and polarization measurements by comparing the retrieved aerosol concentration, size distribution, water-leaving radiance, and chlorophyll concentration to the values reported by the USC SeaPRISM AERONET-OC site off the coast of Southern California. In addition, the MC-based retrievals of aerosol properties were compared with GRASP ([4-5]) retrievals for selected cases. The MC-based retrieval approach was then used to systematically explore the benefits of AirMSPI's ultraviolet and polarimetric channels, the use of multiple view angles, and constraints provided by inclusion of bio-optical models of the water-leaving radiance. References [1]. D. J. Diner, et al. Atmos. Meas. Tech. 6, 1717 (2013). [2]. F. Xu et al. Opt. Lett. 36, 2083 (2011). [3]. J. E. Hansen and L.D. Travis. Space Sci. Rev. 16, 527 (1974). [4]. O. Dubovik et al. Atmos. Meas. Tech., 4, 975 (2011). [5]. O. Dubovik et al. SPIE: Newsroom, DOI:10.1117/2.1201408.005558 (2014).
Multi-target detection and positioning in crowds using multiple camera surveillance
NASA Astrophysics Data System (ADS)
Huang, Jiahu; Zhu, Qiuyu; Xing, Yufeng
2018-04-01
In this study, we propose a pixel correspondence algorithm for positioning in crowds based on constraints on the distance between lines of sight, grayscale differences, and height in a world coordinate system. First, a Gaussian mixture model is used to obtain the background and foreground from multi-camera videos. Second, the hair and skin regions are extracted as regions of interest. Finally, correspondences between pixels in the regions of interest are found under multiple constraints and the targets are positioned by pixel clustering. The algorithm can provide appropriate redundancy information for each target, which decreases the risk of losing targets due to a large viewing angle and wide baseline. To address the correspondence problem for multiple pixels, we construct a pixel-based correspondence model based on a similar permutation matrix, which converts the correspondence problem into a linear programming problem where a similar permutation matrix is found by minimizing an objective function. The correct pixel correspondences can be obtained by determining the optimal solution of this linear programming problem, and the three-dimensional positions of the targets can also be obtained by pixel clustering. Finally, we verified the algorithm with multiple cameras in experiments, which showed that the algorithm has high accuracy and robustness.
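As a simplified illustration of the matching step (not the authors' similar-permutation-matrix linear program), the sketch below solves a one-to-one assignment between candidate pixels from two views using the Hungarian algorithm, with a cost that combines stand-ins for the three constraints named above; all function names, weights, and inputs are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_pixels(ray_dist, gray_a, gray_b, height_a, height_b,
                 w_geom=1.0, w_gray=0.5, w_height=0.5):
    """Illustrative one-to-one pixel matching between two camera views.

    ray_dist : (N, M) distances between back-projected lines of sight
               for pixel i in view A and pixel j in view B.
    gray_*   : (N,), (M,) grayscale values of the candidate pixels.
    height_* : (N,), (M,) estimated heights of the back-projected points.
    The weights are made-up tuning parameters.
    """
    cost = (w_geom * ray_dist
            + w_gray * np.abs(gray_a[:, None] - gray_b[None, :])
            + w_height * np.abs(height_a[:, None] - height_b[None, :]))
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
    return list(zip(rows.tolist(), cols.tolist()))

# Toy usage with random data (4 candidates in view A, 5 in view B)
rng = np.random.default_rng(0)
pairs = match_pixels(rng.random((4, 5)), rng.random(4), rng.random(5),
                     rng.random(4), rng.random(5))
```

The paper relaxes the problem to a linear program over a permutation-like matrix; the assignment formulation above captures the same one-to-one matching intent for small candidate sets.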
2. OBLIQUE VIEW OF WEST FRONT. The frames on an ...
2. OBLIQUE VIEW OF WEST FRONT. The frames on an angle originally held mirrors for viewing the tests from inside the building. Vertical frame originally held bullet glass. - Edwards Air Force Base, South Base Sled Track, Firing Control Blockhouse, South of Sled Track at east end, Lancaster, Los Angeles County, CA
Reflection and emission models for deserts derived from Nimbus-7 ERB scanner measurements
NASA Technical Reports Server (NTRS)
Staylor, W. F.; Suttles, J. T.
1986-01-01
Broadband shortwave and longwave radiance measurements obtained from the Nimbus-7 Earth Radiation Budget scanner were used to develop reflectance and emittance models for the Sahara-Arabian, Gibson, and Saudi Deserts. The models were established by fitting the satellite measurements to analytic functions. For the shortwave, the model function is based on an approximate solution to the radiative transfer equation. The bidirectional-reflectance function was obtained from a single-scattering approximation with a Rayleigh-like phase function. The directional-reflectance model followed from integration of the bidirectional model and is a function of the sum and product of the cosines of the solar and viewing zenith angles, thus satisfying reciprocity between these angles. The emittance model was based on a simple power law of the cosine of the viewing zenith angle.
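The reciprocity property is easy to state explicitly. Writing μ0 = cos(solar zenith angle) and μ = cos(viewing zenith angle), any directional-reflectance model that depends only on the sum and product of the two cosines, for example the illustrative form below (the coefficients a, b, c are placeholders, not the paper's fitted values), automatically satisfies r(μ0, μ) = r(μ, μ0):

\[
r(\mu_0,\mu) = a + b\,(\mu_0+\mu) + c\,\mu_0\mu .
\]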
Color image generation for screen-scanning holographic display.
Takaki, Yasuhiro; Matsumoto, Yuji; Nakajima, Tatsumi
2015-10-19
Horizontally scanning holography using a microelectromechanical system spatial light modulator (MEMS-SLM) can provide reconstructed images with an enlarged screen size and an increased viewing zone angle. Herein, we propose techniques to enable color image generation for a screen-scanning display system employing a single MEMS-SLM. Higher-order diffraction components generated by the MEMS-SLM for R, G, and B laser lights were coupled by providing proper illumination angles on the MEMS-SLM for each color. An error diffusion technique to binarize the hologram patterns was developed, in which the error diffusion directions were determined for each color. Color reconstructed images with a screen size of 6.2 in. and a viewing zone angle of 10.2° were generated at a frame rate of 30 Hz.
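As a generic illustration of the hologram binarization step, the sketch below applies the standard Floyd-Steinberg error-diffusion kernel with a fixed left-to-right scan; the paper's contribution is choosing the diffusion directions per color channel, which is not reproduced here.

```python
import numpy as np

def error_diffusion_binarize(hologram, threshold=0.5):
    """Binarize a normalized (0..1) hologram pattern by error diffusion.

    Uses the classic Floyd-Steinberg kernel; the cited work instead
    selects the diffusion directions separately for each color.
    """
    h = hologram.astype(float).copy()
    rows, cols = h.shape
    out = np.zeros_like(h)
    for y in range(rows):
        for x in range(cols):
            new = 1.0 if h[y, x] >= threshold else 0.0
            err = h[y, x] - new          # quantization error to diffuse
            out[y, x] = new
            if x + 1 < cols:
                h[y, x + 1] += err * 7 / 16
            if y + 1 < rows:
                if x > 0:
                    h[y + 1, x - 1] += err * 3 / 16
                h[y + 1, x] += err * 5 / 16
                if x + 1 < cols:
                    h[y + 1, x + 1] += err * 1 / 16
    return out

binary_hologram = error_diffusion_binarize(np.random.rand(64, 64))
```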
8. VAL CAMERA CAR, CLOSEUP VIEW OF 'FLARE' OR TRAJECTORY ...
8. VAL CAMERA CAR, CLOSE-UP VIEW OF 'FLARE' OR TRAJECTORY CAMERA ON SLIDING MOUNT. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA
18. DETAIL VIEW OF DEVICE ON OUTSIDE OF COFFEE HUSKER ...
18. DETAIL VIEW OF DEVICE ON OUTSIDE OF COFFEE HUSKER THAT ADJUSTED ANGLE OF HUSKER VAT WALLS - Hacienda Cafetalera Santa Clara, Coffee Mill, KM 19, PR Route 372, Hacienda La Juanita, Yauco Municipio, PR
2. VAL CONTROL STATION, VIEW OF INTERIOR SHOWING EXTERIOR DOOR, ...
2. VAL CONTROL STATION, VIEW OF INTERIOR SHOWING EXTERIOR DOOR, WINDOWS AND CONTROL PANELS, LOOKING SOUTHEAST. - Variable Angle Launcher Complex, Control Station, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA
A radiosity model for heterogeneous canopies in remote sensing
NASA Astrophysics Data System (ADS)
GarcíA-Haro, F. J.; Gilabert, M. A.; Meliá, J.
1999-05-01
A radiosity model has been developed to compute bidirectional reflectance from a heterogeneous canopy approximated by an arbitrary configuration of plants or clumps of vegetation, placed on the ground surface in a prescribed manner. Plants are treated as porous cylinders formed by aggregations of layers of leaves. This model explicitly computes solar radiation leaving each individual surface, taking into account multiple scattering processes between leaves and soil, and occlusion by neighboring plants. Canopy structural parameters adopted in this study have served to simplify the computation of the geometric factors of the radiosity equation, and thus this model has enabled us to simulate multispectral images of vegetation scenes. Simulated images have been shown to be valuable approximations of satellite data, and a sensitivity analysis to the dominant parameters of discontinuous canopies (plant density, leaf area index (LAI), leaf angle distribution (LAD), plant dimensions, soil optical properties, etc.) and scene (sun/view angles and atmospheric conditions) has been undertaken. The radiosity model has allowed us to gain deep insight into the radiative regime inside the canopy, showing it to be governed by occlusion of incoming irradiance, multiple scattering of radiation between canopy elements, and interception of upward radiance by leaves. Results have indicated that, unlike leaf distribution, other structural parameters such as LAI, LAD, and plant dimensions have a strong influence on canopy reflectance. In addition, concepts have been developed that are useful to understand the reflectance behavior of the canopy, such as an effective LAI related to leaf inclination.
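The equation solved per surface element can be written compactly. In the sketch below, B_i is the radiosity leaving element i, E_i its directly intercepted (unscattered) source term, ρ_i its reflectance (or transmittance for leaves), and F_ij the geometric form factor between elements i and j; this is the generic radiosity formulation, with the symbols chosen here for illustration rather than taken from the paper:

\[
B_i = E_i + \rho_i \sum_{j=1}^{N} F_{ij}\,B_j
\qquad\Longleftrightarrow\qquad
(\mathbf{I} - \operatorname{diag}(\boldsymbol{\rho})\,\mathbf{F})\,\mathbf{B} = \mathbf{E},
\]

a linear system whose solution gives the multiply scattered radiation exchanged among leaves and soil.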
Portable LED-induced autofluorescence imager with a probe of L shape for oral cancer diagnosis
NASA Astrophysics Data System (ADS)
Huang, Ting-Wei; Lee, Yu-Cheng; Cheng, Nai-Lun; Yan, Yung-Jhe; Chiang, Hou-Chi; Chiou, Jin-Chern; Mang, Ou-Yang
2015-08-01
The difference in the spectral distribution of excited fluorescence between epithelial lesions and normal cells is one method for cancer diagnosis. In our previous work, we developed a portable LED-induced autofluorescence (LIAF) imager containing multiple wavelengths of LED excitation light and multiple filters to capture ex-vivo oral tissue autofluorescence images. Our portable system for detection of oral cancer has a probe in front of the lens for fixing the object distance. The original probe is cone-shaped, which makes it inconvenient for the doctor to capture the oral image at an appropriate viewing angle in front of the probe. Therefore, an L-shaped probe containing a mirror is proposed so that doctors can capture images at the right angles and subjects do not need to forcibly open their mouths. In addition, a glass plate is placed in the probe to prevent liquid from entering the device, but light reflected directly from the glass plate causes light spots in the images. We set the glass plate in front of the LED to avoid the light spots. When the distance between the glass plate and the LED module plane is less than a critical value, the light spots caused by the glass plate can be prevented. The experiments show that images captured with the new probe, with the glass plate placed at the back end of the probe, contain no light spots.
Ground-based full-sky imaging polarimeter based on liquid crystal variable retarders.
Zhang, Ying; Zhao, Huijie; Song, Ping; Shi, Shaoguang; Xu, Wujian; Liang, Xiao
2014-04-07
A ground-based full-sky imaging polarimeter based on liquid crystal variable retarders (LCVRs) is proposed in this paper. Our proposed method can be used to realize rapid detection of skylight polarization information with a hemispherical field of view in the visible band. The characteristics of the incidence angle of light on the LCVR are investigated, based on the electrically controlled birefringence. Then, the imaging polarimeter with a hemispherical field of view is designed. Furthermore, a polarization calibration method with field-of-view multiplexing and piecewise linear fitting is proposed, based on the rotational symmetry of the polarimeter. The polarization calibration of the polarimeter is implemented over the hemispherical field of view. The imaging polarimeter is evaluated in an experiment detecting the skylight image. The consistency between the experimentally obtained distribution of polarization angle and that predicted by the Rayleigh scattering model is 90%, which confirms the effectiveness of our proposed imaging polarimeter.
Flow visualization and characterization of evaporating liquid drops
NASA Technical Reports Server (NTRS)
Chao, David F. (Inventor); Zhang, Nengli (Inventor)
2004-01-01
An optical system consisting of drop-reflection imaging, reflection-refracted shadowgraphy, and top-view photography is used to measure the spreading and instantaneous dynamic contact angle of a volatile-liquid drop on a non-transparent substrate. The drop-reflection image and the shadowgraph are produced by projecting the images of a collimated laser beam, partially reflected by the drop and partially passing through the drop, onto a screen, while the top-view photograph is separately viewed by use of a camera, video recorder, and monitor. For a transparent liquid on a reflective solid surface, thermocapillary convection in the drop, induced by evaporation, can be viewed nonintrusively, and the drop's real-time profile data are synchronously recorded by video recording systems. Experimental results obtained from this technique clearly reveal that evaporation and thermocapillary convection greatly affect the spreading process and the characteristics of the dynamic contact angle of the drop.
NASA Astrophysics Data System (ADS)
Penning de Vries, Marloes; Beirle, Steffen; Sihler, Holger; Wagner, Thomas
2017-04-01
The UV Aerosol Index (UVAI) is a simple measure of aerosols from satellite that is particularly sensitive to elevated layers of absorbing particles. It has been determined from a range of instruments, including TOMS, GOME-2, and OMI, for almost four decades and will be continued in the upcoming Sentinel missions S5-precursor, S4, and S5. Despite its apparent simplicity, the interpretation of UVAI is not straightforward, as it depends on aerosol abundance, absorption, and altitude in a non-linear way. In addition, UVAI depends on the geometry of the measurement (viewing angle, solar zenith and relative azimuth angles), particularly if viewing angles exceed 45 degrees, as is the case for OMI and TROPOMI (on S5-precursor). The dependence on scattering angle complicates the interpretation and further processing (e.g., averaging) of UVAI. In certain favorable cases, however, independent information on aerosol altitude and absorption may become available. We present a detailed study of the scattering-angle dependence using SCIATRAN radiative transfer calculations. The model results were compared to observations of an extensive Siberian smoke plume, parts of which reached 10-12 km altitude. Due to its large extent and the high latitude, OMI observed the complete plume in five consecutive orbits under a wide range of scattering angles. This allowed us to deduce aerosol characteristics (absorption and layer height) that were compared with collocated CALIOP lidar measurements.
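For reference, the residue-type definition of the UV Aerosol Index commonly used for TOMS/OMI-class instruments compares a measured and a modeled (pure-Rayleigh) radiance ratio at two UV wavelengths; the exact wavelength pair varies by instrument, so the 340/380 nm pair below is only indicative:

\[
\mathrm{UVAI} = -100\left[\log_{10}\!\left(\frac{I_{340}}{I_{380}}\right)_{\!\mathrm{meas}} - \log_{10}\!\left(\frac{I_{340}}{I_{380}}\right)_{\!\mathrm{calc}}\right].
\]

Because the calculated term assumes an aerosol-free atmosphere, the index grows with aerosol absorption and layer altitude, which is the non-linear dependence the study quantifies as a function of scattering angle.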
A multi-directional backlight for a wide-angle, glasses-free three-dimensional display.
Fattal, David; Peng, Zhen; Tran, Tho; Vo, Sonny; Fiorentino, Marco; Brug, Jim; Beausoleil, Raymond G
2013-03-21
Multiview three-dimensional (3D) displays can project the correct perspectives of a 3D image in many spatial directions simultaneously. They provide a 3D stereoscopic experience to many viewers at the same time with full motion parallax and do not require special glasses or eye tracking. None of the leading multiview 3D solutions is particularly well suited to mobile devices (watches, mobile phones or tablets), which require the combination of a thin, portable form factor, a high spatial resolution and a wide full-parallax view zone (for short viewing distance from potentially steep angles). Here we introduce a multi-directional diffractive backlight technology that permits the rendering of high-resolution, full-parallax 3D images in a very wide view zone (up to 180 degrees in principle) at an observation distance of up to a metre. The key to our design is a guided-wave illumination technique based on light-emitting diodes that produces wide-angle multiview images in colour from a thin planar transparent lightguide. Pixels associated with different views or colours are spatially multiplexed and can be independently addressed and modulated at video rate using an external shutter plane. To illustrate the capabilities of this technology, we use simple ink masks or a high-resolution commercial liquid-crystal display unit to demonstrate passive and active (30 frames per second) modulation of a 64-view backlight, producing 3D images with a spatial resolution of 88 pixels per inch and full-motion parallax in an unprecedented view zone of 90 degrees. We also present several transparent hand-held prototypes showing animated sequences of up to six different 200-view images at a resolution of 127 pixels per inch.
Flow Behavior in Side-View Plane of Pitching Delta Wing
NASA Astrophysics Data System (ADS)
Pektas, Mehmet Can; Tasci, Mehmet Oguz; Karasu, Ilyas; Sahin, Besir; Akilli, Huseyin
2018-06-01
In the present investigation, a delta wing with a 70° sweep angle, Λ, was oscillated about its midchord according to α(t) = αm + α0 sin(ωe t). This study focused on understanding the effect of pitching and characterizing the interaction of vortex breakdown with the oscillating leading edges under different yaw angles, β, over a slender delta wing. The mean angle of attack, αm, was taken as 25°. The yaw angle, β, was varied in 4° intervals over the range 0° ≤ β ≤ 16°. The delta wing was sinusoidally pitched with periods in the range 5 s ≤ Te ≤ 60 s, the reduced frequency was set as K = 0.16, 0.25, 0.49, and 1.96, and the amplitude of the pitching motion was α0 = ±5°. Formations and locations of vortex breakdown were investigated using the dye visualization technique in the side-view plane.
Multi-Angle Snowflake Camera Instrument Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuefer, Martin; Bailey, J.
2016-07-01
The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36º. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10º angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.
Pair Production and Gamma-Ray Emission in the Outer Magnetospheres of Rapidly Spinning Young Pulsars
NASA Technical Reports Server (NTRS)
Ruderman, Malvin; Chen, Kaiyou
1997-01-01
Electron-positron pair production and acceleration in the outer magnetosphere may be crucial for a young, rapidly spinning canonical pulsar to be a strong Gamma-ray emitter. Collisions between curvature-radiated GeV photons and soft X-ray photons seem to be the only efficient pair-production mechanism. For Crab-like pulsars, the magnetic field near the light cylinder is so strong that the synchrotron radiation of secondary pairs will be in the needed X-ray range. However, for the majority of the known Gamma-ray pulsars, surface-emitted X-rays seem to work as the matches and fuel for a Gamma-ray-generating fireball in the outer magnetosphere. The needed X-rays could come from thermal emission of a cooling neutron star or could be the heat generated by bombardment of the polar cap by energetic particles produced in the outer magnetosphere. With detection of more Gamma-ray pulsars, it is becoming evident that the neutron star's intrinsic geometry (the inclination angle between the rotation and magnetic axes) and observational geometry (the viewing angle with respect to the rotation axis) are crucial to understanding the variety of observational properties exhibited by these pulsars. Inclination angles for many known high-energy Gamma-ray pulsars appear to be large, and the distribution seems to be consistent with random orientation. However, all of them except Geminga were pre-selected from known radio pulsars. The viewing angles are thus limited to be around the respective inclination angles for beamed radio emission, which may induce a strong selection effect. The viewing angles as well as the inclination angles of PSR 1509-58 and PSR 0656+14 may be small, such that most of the high-energy Gamma-rays produced in the outer accelerators may not reach the observer's direction. The observed Gamma-rays below 5 MeV from this pulsar may be synchrotron radiation of secondary electron-positron pairs produced outside the accelerating regions.
Wang, Shijun; McKenna, Matthew T; Nguyen, Tan B; Burns, Joseph E; Petrick, Nicholas; Sahiner, Berkman; Summers, Ronald M
2012-05-01
In this paper, we present development and testing results for a novel colonic polyp classification method for use as part of a computed tomographic colonography (CTC) computer-aided detection (CAD) system. Inspired by the interpretative methodology of radiologists using 3-D fly-through mode in CTC reading, we have developed an algorithm which utilizes sequences of images (referred to here as videos) for classification of CAD marks. For each CAD mark, we created a video composed of a series of intraluminal, volume-rendered images visualizing the detection from multiple viewpoints. We then framed the video classification question as a multiple-instance learning (MIL) problem. Since a positive (negative) bag may contain negative (positive) instances, which in our case depends on the viewing angles and camera distance to the target, we developed a novel MIL paradigm to accommodate this class of problems. We solved the new MIL problem by maximizing a L2-norm soft margin using semidefinite programming, which can optimize relevant parameters automatically. We tested our method by analyzing a CTC data set obtained from 50 patients from three medical centers. Our proposed method showed significantly better performance compared with several traditional MIL methods.
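The paper's classifier maximizes an L2-norm soft margin with semidefinite programming, which is beyond a short sketch. As a much simpler stand-in that illustrates only the multiple-instance idea (a bag is scored by its best-scoring instance, here with a linear model trained by subgradient descent on a hinge loss), with all names and hyperparameters assumed:

```python
import numpy as np

def train_mil_linear(bags, labels, epochs=200, lr=0.01, reg=1e-3):
    """bags: list of (n_i, d) arrays of instance features; labels: +1/-1 per bag.
    A bag's score is the maximum instance score w.x + b (max-pooling MIL)."""
    d = bags[0].shape[1]
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for X, y in zip(bags, labels):
            scores = X @ w + b
            i = int(np.argmax(scores))          # witness instance of the bag
            if y * scores[i] < 1.0:             # hinge loss is active
                w += lr * (y * X[i] - reg * w)  # subgradient step
                b += lr * y
            else:
                w -= lr * reg * w               # only regularization
    return w, b

def predict_bag(w, b, X):
    return 1 if np.max(X @ w + b) > 0 else -1

# Toy usage with random bags
rng = np.random.default_rng(0)
bags = [rng.normal(size=(5, 3)) + (1.0 if k % 2 == 0 else 0.0) for k in range(10)]
labels = [1 if k % 2 == 0 else -1 for k in range(10)]
w, b = train_mil_linear(bags, labels)
```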
Pre-impact fall detection system using dynamic threshold and 3D bounding box
NASA Astrophysics Data System (ADS)
Otanasap, Nuth; Boonbrahm, Poonpong
2017-02-01
Fall prevention and detection systems must overcome many challenges before an efficient system can be developed. Some of the difficult problems are obtrusion, occlusion, and overlap in vision-based systems. Other associated issues are privacy, cost, noise, computational complexity, and the definition of threshold values. Estimating human motion with vision-based methods usually involves partial overlap, caused by the viewing direction between objects or body parts and the camera, and these issues have to be taken into consideration. This paper proposes the use of a dynamic-threshold and bounding-box posture analysis method with a multiple-Kinect camera setup for human posture analysis and fall detection. The proposed work uses only two Kinect cameras for acquiring distributed values and differentiating between normal activities and falls. If the peak value of the head velocity is greater than the dynamic threshold value, bounding-box posture analysis is used to confirm fall occurrence. Furthermore, information captured by multiple Kinects placed at right angles addresses the skeleton overlap problem of a single Kinect. This work contributes the fusion of multiple Kinect-based skeletons based on dynamic-threshold and bounding-box posture analysis, which to the best of our knowledge has not been reported before.
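A minimal sketch of the decision logic described above, under assumed thresholds and a simple moving-statistics definition of the dynamic threshold (the abstract does not specify the exact threshold update or bounding-box criteria):

```python
import numpy as np

def dynamic_threshold(velocity_history, k=3.0):
    """Threshold adapts to recent head-velocity history: mean + k * std (assumed rule)."""
    v = np.asarray(velocity_history, float)
    return v.mean() + k * v.std()

def bounding_box_says_fall(joints_xyz, height_ratio=0.5):
    """Crude posture check on the 3-D skeleton bounding box:
    a 'lying' box is much wider/deeper than it is tall."""
    j = np.asarray(joints_xyz, float)
    mins, maxs = j.min(axis=0), j.max(axis=0)
    dx, dy, dz = maxs - mins           # assume the y axis is vertical
    return dy < height_ratio * max(dx, dz)

def detect_fall(head_velocity_history, current_head_speed, joints_xyz):
    if current_head_speed > dynamic_threshold(head_velocity_history):
        return bounding_box_says_fall(joints_xyz)   # confirm with posture analysis
    return False
```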
Spectral bidirectional reflectance of Antarctic snow: Measurements and parameterization
NASA Astrophysics Data System (ADS)
Hudson, Stephen R.; Warren, Stephen G.; Brandt, Richard E.; Grenfell, Thomas C.; Six, Delphine
2006-09-01
The bidirectional reflectance distribution function (BRDF) of snow was measured from a 32-m tower at Dome C, at latitude 75°S on the East Antarctic Plateau. These measurements were made at 96 solar zenith angles between 51° and 87° and cover wavelengths 350-2400 nm, with 3- to 30-nm resolution, over the full range of viewing geometry. The BRDF at 900 nm had previously been measured at the South Pole; the Dome C measurement at that wavelength is similar. At both locations the natural roughness of the snow surface causes the anisotropy of the BRDF to be less than that of flat snow. The inherent BRDF of the snow is nearly constant in the high-albedo part of the spectrum (350-900 nm), but the angular distribution of reflected radiance becomes more isotropic at the shorter wavelengths because of atmospheric Rayleigh scattering. Parameterizations were developed for the anisotropic reflectance factor using a small number of empirical orthogonal functions. Because the reflectance is more anisotropic at wavelengths at which ice is more absorptive, albedo rather than wavelength is used as a predictor in the near infrared. The parameterizations cover nearly all viewing angles and are applicable to the high parts of the Antarctic Plateau that have small surface roughness and, at viewing zenith angles less than 55°, elsewhere on the plateau, where larger surface roughness affects the BRDF at larger viewing angles. The root-mean-squared error of the parameterized reflectances is between 2% and 4% at wavelengths less than 1400 nm and between 5% and 8% at longer wavelengths.
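A generic sketch of how an empirical-orthogonal-function parameterization can be extracted from a matrix of anisotropic reflectance factors (rows: cases such as solar zenith angle or albedo bins; columns: viewing-angle bins) via the singular value decomposition; this illustrates the technique only and is not the paper's specific fit or its regression against albedo.

```python
import numpy as np

def fit_eofs(R, n_modes=3):
    """R: (n_cases, n_view_bins) anisotropic reflectance factors.
    Returns the mean angular pattern, the leading EOFs, and per-case coefficients."""
    mean = R.mean(axis=0)
    U, s, Vt = np.linalg.svd(R - mean, full_matrices=False)
    eofs = Vt[:n_modes]                    # (n_modes, n_view_bins)
    coeffs = U[:, :n_modes] * s[:n_modes]  # (n_cases, n_modes)
    return mean, eofs, coeffs

def reconstruct(mean, eofs, coeffs):
    return mean + coeffs @ eofs

R = np.random.rand(20, 50)                 # toy data standing in for measured BRDFs
mean, eofs, coeffs = fit_eofs(R)
R_hat = reconstruct(mean, eofs, coeffs)    # low-order approximation of R
```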
Smoke from Fires in Southern Mexico
NASA Technical Reports Server (NTRS)
2002-01-01
On May 2, 2002, numerous fires in southern Mexico sent smoke drifting northward over the Gulf of Mexico. These views from the Multi-angle Imaging SpectroRadiometer illustrate the smoke extent over parts of the Gulf and the southern Mexican states of Tabasco, Campeche and Chiapas. At the same time, dozens of other fires were also burning in the Yucatan Peninsula and across Central America. A similar situation occurred in May and June of 1998, when Central American fires resulted in air quality warnings for several U.S. States. The image on the left is a natural color view acquired by MISR's vertical-viewing (nadir) camera. Smoke is visible, but sunglint in some ocean areas makes detection difficult. The middle image, on the other hand, is a natural color view acquired by MISR's 70-degree backward-viewing camera; its oblique view angle simultaneously suppresses sunglint and enhances the smoke. A map of aerosol optical depth, a measurement of the abundance of atmospheric particulates, is provided on the right. This quantity is retrieved using an automated computer algorithm that takes advantage of MISR's multi-angle capability. Areas where no retrieval occurred are shown in black. The images each represent an area of about 380 kilometers x 1550 kilometers and were captured during Terra orbit 12616. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Airborne system for multispectral, multiangle polarimetric imaging.
Bowles, Jeffrey H; Korwan, Daniel R; Montes, Marcos J; Gray, Deric J; Gillis, David B; Lamela, Gia M; Miller, W David
2015-11-01
In this paper, we describe the design, fabrication, calibration, and deployment of an airborne multispectral polarimetric imager. The motivation for the development of this instrument was to explore its ability to provide information about water constituents, such as particle size and type. The instrument is based on four 16 MP cameras and uses wire grid polarizers (aligned at 0°, 45°, 90°, and 135°) to provide the separation of the polarization states. A five-position filter wheel provides for four narrow-band spectral filters (435, 550, 625, and 750 nm) and one blocked position for dark-level measurements. When flown, the instrument is mounted on a programmable stage that provides control of the view angles. View angles that range to ±65° from the nadir have been used. Data processing provides a measure of the polarimetric signature as a function of both the view zenith and view azimuth angles. As a validation of our initial results, we compare our measurements, over water, with the output of a Monte Carlo code, both of which show neutral points off the principal plane. The locations of the calculated and measured neutral points are compared. The random error level in the measured degree of linear polarization (8% at 435 nm) is shown to be better than 0.25%.
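With polarizers at 0°, 45°, 90°, and 135°, the linear Stokes parameters and the degree and angle of linear polarization follow from the standard relations; the sketch below omits the radiometric and polarimetric calibration factors that a real instrument would apply.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four co-registered polarized images (ideal polarizers)."""
    I = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (two redundant estimates averaged)
    Q = i0 - i90
    U = i45 - i135
    dolp = np.sqrt(Q**2 + U**2) / I     # degree of linear polarization
    aolp = 0.5 * np.arctan2(U, Q)       # angle of linear polarization, radians
    return I, Q, U, dolp, aolp
```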
Park, Ju Yong; Hwang, Se Won; Hwang, Kun
2013-11-01
The aim of this study was to compare painted portraits of beautiful women, femme fatales, and artists' mothers using anthropometry. Portraits of each theme were selected from modern novels, essays, and picture books, and categorized. A total of 52 samples were collected, including 20 beautiful women, 20 femme fatales, and 12 artists' mothers. In 5 persons, 17 anthropometric ratios including the alae-alae/zygion-zygion ratio were compared between 15-degree oblique view and anteroposterior view photographs, and they were shown not to differ significantly. To identify oblique portraits of less than 15 degrees, we measured the exocanthion-stomion-exocanthion (ESE) angle in photographs of 5 volunteers. The mean ± SD of the ESE angle was 64.52 ± 4.87 in the 15-degree angle view and 57.68 ± 54.09 in the 30-degree angle view. Thereafter, if the ESE angle was greater than 65 degrees, we considered the portrait to have less than a 15-degree angle and included it in the samples. The ratio did not differ significantly for 11 anthropometric proportions. However, the remaining 5 proportions showed statistically significant differences. Beautiful women had wider noses (85% of the endocanthion-endocanthion width) than those of the femme fatale group (77%). Lips in the beautiful woman group were fuller and thicker (36% of the lip's width) compared with the artists' mother group (27%). Femme fatales were relatively similar to beautiful women, for example in having full, thick lips. However, the femme fatale group had an attractive midface ratio (36% of the total face height) that has been mentioned in the older literature, and the noses of the femme fatale group were narrower and sharper (77% of the endocanthion-endocanthion width) than those of the beautiful women (85%). The artists' mother group had a relatively narrower upper face (29% of the total face height) and thinner lips (27% of the lip width) compared with the other 2 groups (36%). Proportions from works of art are more ideal and attractive than clinically measured proportions. The ideal ratios measured from historical portraits might be useful in planning facial surgeries.
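The ESE angle used to screen oblique portraits is the angle at the stomion subtended by the two exocanthi; given 2-D landmark coordinates it is a single arccos of normalized vectors. A sketch with assumed landmark argument names:

```python
import numpy as np

def ese_angle(exocanthion_left, stomion, exocanthion_right):
    """Angle (degrees) at the stomion between the two exocanthion landmarks."""
    a = np.asarray(exocanthion_left, float) - np.asarray(stomion, float)
    b = np.asarray(exocanthion_right, float) - np.asarray(stomion, float)
    cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Portraits with ese_angle(...) > 65 degrees were treated as near-frontal (< 15 degrees oblique).
```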
3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...
3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH THE VAL TO THE RIGHT, LOOKING NORTHEAST. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA
Atmospheric Science Data Center
2014-05-15
... the Multi-angle Imaging SpectroRadiometer (MISR). On the left, a natural-color view acquired by MISR's vertical-viewing (nadir) camera ... Gunnison River at the city of Grand Junction. The striking "L" shaped feature in the lower image center is a sandstone monocline known as ...
A study of X-ray multiple diffraction by means of section topography.
Kohn, V G; Smirnova, I A
2015-09-01
The results of a theoretical and experimental study are presented on the question of how X-ray multiple diffraction in a silicon single crystal influences the interference fringes of section topography for the 400 reflection in the Laue case. Two different cases of multiple diffraction are found for zero and very small values of the azimuthal angle for a sample in the form of a plate with its surface normal to the 001 direction. The cases are seen on the same topogram without rotation of the crystal. Accurate computer simulations of the section topogram for the case of X-ray multiple diffraction are performed for the first time. It is shown that the structure of interference fringes on the section topogram in the region of multiple diffraction becomes more complicated and has a very sharp dependence on the azimuthal angle. The experiment was carried out using a laboratory source under conditions of low resolution over the azimuthal angle. Nevertheless, the characteristic inclination of the interference fringes on the tails of the multiple diffraction region is easily seen. This phenomenon corresponds completely to the computer simulations.
A cylindrical specimen holder for electron cryo-tomography
Palmer, Colin M.; Löwe, Jan
2014-01-01
The use of slab-like flat specimens for electron cryo-tomography restricts the range of viewing angles that can be used. This leads to the “missing wedge” problem, which causes artefacts and anisotropic resolution in reconstructed tomograms. Cylindrical specimens provide a way to eliminate the problem, since they allow imaging from a full range of viewing angles around the tilt axis. Such specimens have been used before for tomography of radiation-insensitive samples at room temperature, but never for frozen-hydrated specimens. Here, we demonstrate the use of thin-walled carbon tubes as specimen holders, allowing the preparation of cylindrical frozen-hydrated samples of ribosomes, liposomes and whole bacterial cells. Images acquired from these cylinders have equal quality at all viewing angles, and the accessible tilt range is restricted only by the physical limits of the microscope. Tomographic reconstructions of these specimens demonstrate that the effects of the missing wedge are substantially reduced, and could be completely eliminated if a full tilt range was used. The overall quality of these tomograms is still lower than that obtained by existing methods, but improvements are likely in future. PMID:24275523
Fougnie, B; Frouin, R; Lecomte, P; Deschamps, P Y
1999-06-20
Reflected skylight in above-water measurements of diffuse marine reflectance can be reduced substantially by viewing the surface through an analyzer transmitting the vertically polarized component of incident radiance. For maximum reduction of effects, radiometric measurements should be made at a viewing zenith angle of approximately 45 degrees (near the Brewster angle) and a relative azimuth angle between solar and viewing directions greater than 90 degrees (backscattering), preferably 135 degrees. In this case the residual reflected skylight in the polarized signal exhibits minimum sensitivity to the sea state and can be corrected to within a few 10⁻⁴ in reflectance units. For most oceanic waters the resulting relative error on the diffuse marine reflectance in the blue and green is less than 1%. Since the water body polarizes incident skylight, the measured polarized reflectance differs from the total reflectance. The difference, however, is small for the considered geometry. Measurements made at the Scripps Institution of Oceanography pier in La Jolla, Calif., with a specifically designed scanning polarization radiometer, confirm the theoretical findings and demonstrate the usefulness of polarization radiometry for measuring diffuse marine reflectance.
He, Xing; Li, Hua; Shao, Yan; Shi, Bing
2015-01-01
The purpose of this study is to ascertain objective nasal measurements from the basal view that are predictive of nasal esthetics in individuals with secondary cleft nasal deformity. Thirty-three patients who had undergone unilateral cleft lip repair were retrospectively reviewed in this study. The degree of nasal deformity was subjectively ranked by seven surgeons using standardized basal-view measurements. Nine objective physical parameters, including angles and ratios, were measured. Correlations and regressions between these objective and subjective measurements were then analyzed. There was high concordance among the subjective rankings by different surgeons (Kendall's coefficient of concordance W = .825, P = .006). The strongest predictive factors for nasal esthetics were the ratio of nasal alar lengths (r = .370, P = .034) and the degree of deviation of the columellar axis (r = .451, P = .008). The columellar angle had a more powerful effect in rating nasal esthetics. There was reliable concordance in the subjective ranking of nasal esthetics by surgeons. Measurement of the columellar angle may serve as an independent, objective predictor of the esthetics of the nose.
View of the launch of STS 51-A shuttle Discovery
NASA Technical Reports Server (NTRS)
1984-01-01
View across the water of the launch of STS 51-A shuttle Discovery. The orbiter is just clearing the launch pad (90032); closer view of the Shuttle Discovery just clearing the launch pad. Photo was taken from across the river, with trees and shrubs forming the bottom edge of the view (90033); Low angle view of the rapidly climbing Discovery, still attached to its two solid rocket boosters and an external fuel tank (90034).
Detection of Multiple Stationary Humans Using UWB MIMO Radar.
Liang, Fulai; Qi, Fugui; An, Qiang; Lv, Hao; Chen, Fuming; Li, Zhao; Wang, Jianqi
2016-11-16
Remarkable progress has been achieved in the detection of a single stationary human. However, restricted by the mutual interference of multiple humans (e.g., strong sidelobes of the torsos and the shadow effect), the detection and localization of multiple stationary humans remain a huge challenge. In this paper, ultra-wideband (UWB) multiple-input and multiple-output (MIMO) radar is exploited to improve the detection performance for multiple stationary humans, owing to its multiple sight angles and high-resolution two-dimensional imaging capacity. A signal model of the vital sign considering both the bi-static angles and the attitude angle of the human body is first developed, and then a novel method is proposed to detect and localize multiple stationary humans. In this method, preprocessing is first implemented to improve the signal-to-noise ratio (SNR) of the vital signs, and then a vital-sign-enhanced imaging algorithm is presented to suppress environmental clutter and the mutual effects of multiple humans. Finally, an automatic detection algorithm including constant false alarm rate (CFAR) detection, morphological filtering and clustering is implemented to improve the detection of weak human targets affected by heavy clutter and the shadow effect. The simulation and experimental results show that the proposed method yields a high-quality image of multiple humans that can be used to discriminate and localize multiple adjacent human targets behind brick walls.
The cam impinging femur has multiple morphologic abnormalities.
Ellis, Andrew R; Noble, Philip C; Schroder, Steven J; Thompson, Matthew T; Stocks, Gregory W
2011-09-01
This study was performed to establish whether the "cam" impinging femur has a single deformity of the head-neck junction or multiple abnormalities. Average dimensions (anteversion angle, α angle of Notzli, β angle of Beaulé, normalized anterior head offset) were compared between normal and impinging femora. The results demonstrated that impinging femora had wider necks, larger heads, and decreased head-neck ratios. There was no difference in neck-shaft angle or anteversion angle. Forty-six percent of impinging femora had significant posterior head displacement (>2mm), which averaged 1.93 mm for the cam impinging group, and 0.78 mm for the normal group. In conclusion, surgical treatment limited to localized recontouring of the head-neck profile may fail to address significant components of the underlying abnormality. Copyright © 2011 Elsevier Inc. All rights reserved.
Position Estimation for Switched Reluctance Motor Based on the Single Threshold Angle
NASA Astrophysics Data System (ADS)
Zhang, Lei; Li, Pang; Yu, Yue
2017-05-01
This paper presents a position estimation model for a switched reluctance motor (SRM) based on a single threshold angle. In view of the relationship between inductance and rotor position, the position is estimated by comparing the real-time dynamic flux linkage with the flux linkage at the threshold-angle position (a 7.5° threshold angle for a 12/8 SRM). The sensorless model is built in Matlab/Simulink, and simulations are carried out under both steady-state and transient conditions, verifying the validity and feasibility of the method.
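A minimal sketch of the flux-linkage comparison this estimator relies on, assuming measured phase voltage and current, a known phase resistance, and a pre-stored flux-linkage value at the threshold-angle position as a function of current; the function names, the resistance value, and the lookup interface are illustrative assumptions, not the paper's implementation:

import numpy as np

def threshold_angle_events(v, i, dt, psi_th_of_i, R=0.5):
    """Flag samples where the rotor is taken to pass the threshold angle
    (e.g., 7.5 deg for a 12/8 SRM), by comparing the integrated dynamic
    flux linkage with the stored threshold-angle flux linkage for the
    present phase current.  R is an assumed phase resistance in ohms."""
    psi = np.cumsum((v - R * i) * dt)          # psi(t) = integral of (v - R*i) dt
    events = []
    for k in range(1, len(psi)):
        # rising crossing of the threshold-angle flux-linkage curve
        if psi[k - 1] < psi_th_of_i(i[k - 1]) <= psi[k]:
            events.append(k)
    return events                              # feed these events to the speed/position logic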
Monte Carlo calculation of large and small-angle electron scattering in air
NASA Astrophysics Data System (ADS)
Cohen, B. I.; Higginson, D. P.; Eng, C. D.; Farmer, W. A.; Friedman, A.; Grote, D. P.; Larson, D. J.
2017-11-01
A Monte Carlo method for angle scattering of electrons in air that accommodates the small-angle multiple scattering and larger-angle single scattering limits is introduced. The algorithm is designed for use in a particle-in-cell simulation of electron transport and electromagnetic wave effects in air. The method is illustrated in example calculations.
10. Elevation view of south side of FrankJensen Summer Home. ...
10. Elevation view of south side of Frank-Jensen Summer Home. Note that the steep angle of view gives an illusion of a flat roof. For a more accurate depiction of the roof line, see photos WA-207-4 and WA-207-8. - Frank-Jensen Summer Home, 17423 North Lake Shore Drive, Telma, Chelan County, WA
Multi-Beam Approach for Accelerating Alignment and Calibration of HyspIRI-Like Imaging Spectrometers
NASA Technical Reports Server (NTRS)
Eastwood, Michael L.; Green, Robert O.; Mouroulis, Pantazis; Hochberg, Eric B.; Hein, Randall C.; Kroll, Linley A.; Geier, Sven; Coles, James B.; Meehan, Riley
2012-01-01
A paper describes an optical stimulus that produces more consistent results, and can be automated for unattended, routine generation of data analysis products needed by the integration and testing team assembling a high-fidelity imaging spectrometer system. One key attribute of the system is an arrangement of pick-off mirrors that provides multiple input beams (five in this implementation) to simultaneously provide stimulus light to several field angles along the field of view of the sensor under test, allowing one data set to contain all the information that previously required five data sets to be separately collected. This stimulus can also be fed by quickly reconfigured sources that ultimately provide three data set types that would previously be collected separately using three different setups: Spectral Response Function (SRF), Cross-track Response Function (CRF), and Along-track Response Function (ARF), respectively. This method also lends itself to expansion of the number of field points if less interpolation across the field of view is desirable. An absolute minimum of three is required at the beginning stages of imaging spectrometer alignment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rasmussen, Andrew P.; Hale, Layton; Kim, Peter
Meeting the science goals for the Large Synoptic Survey Telescope (LSST) translates into a demanding set of imaging performance requirements for the optical system over a wide (3.5°) field of view. In turn, meeting those imaging requirements necessitates maintaining precise control of the focal plane surface (10 µm P-V) over the entire field of view (640 mm diameter) at the operating temperature (T ≈ -100 °C) and over the operational elevation angle range. We briefly describe the hierarchical design approach for the LSST Camera focal plane and the baseline design for assembling the flat focal plane at room temperature. Preliminary results of gravity load and thermal distortion calculations are provided, and early metrological verification of candidate materials under cold thermal conditions is presented. A detailed, generalized method for stitching together sparse metrology data originating from differential, non-contact metrological data acquisition spanning multiple (non-continuous) sensor surfaces making up the focal plane is described and demonstrated. Finally, we describe some in situ alignment verification alternatives, some of which may be integrated into the camera's focal plane.
NASA Astrophysics Data System (ADS)
Poudyal, R.; Singh, M. K.; Gatebe, C. K.; Gautam, R.; Varnai, T.
2015-12-01
In this study, airborne Cloud Absorption Radiometer (CAR) reflectance measurements of smoke are used to establish an empirical relationship between reflectances measured at different sun-satellite geometries. It is observed that the reflectance of smoke aerosol at any viewing zenith angle can be computed as a linear combination of the reflectances at two viewing zenith angles, one less than 30° and the other greater than 60°. We found that the coefficients of this linear combination follow a third-order polynomial function of the viewing geometry. Similar relationships were also established for different relative azimuth angles: reflectance at any azimuth angle can be written as a linear combination of measurements at two azimuth angles, one in the forward-scattering direction and the other in backscattering, with both close to the principal plane. These relationships allowed us to create an Angular Distribution Model (ADM) for smoke, which can estimate reflectances in any direction based on measurements taken in four view directions. The model was tested by calculating the ADM parameters using CAR data from the SCAR-B campaign, and applying these parameters to different smoke cases at three spectral channels (340 nm, 380 nm and 470 nm). We also tested our modelled smoke ADM formulas with the Absorbing Aerosol Index (AAI) computed directly from the CAR data at 340 nm and 380 nm, which is probably the first study to analyze the complete multi-angular distribution of AAI for smoke aerosols. The RMSE (and mean error) of predicted reflectance for the SCAR-B and ARCTAS smoke ADMs were found to be 0.002 (1.5%) and 0.047 (6%), respectively. The accuracy of the ADM formulation is also tested through radiative transfer simulations for a wide variety of situations (varying smoke loading, underlying surface types, etc.).
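A rough sketch of the linear-combination construction described above, assuming matched reflectance samples are available at each viewing zenith angle together with the two anchor-angle reflectances; the array layout, names, and the plain least-squares/polynomial fitting calls are illustrative, not the authors' code:

import numpy as np

def fit_adm(theta_v, R_obs, R_low, R_high):
    """For each viewing zenith angle theta_v[k], fit R_obs[k] ~ a*R_low[k] + b*R_high[k]
    over matched samples (R_obs[k], R_low[k], R_high[k] are 1-D arrays), then model
    a(theta_v) and b(theta_v) as third-order polynomials of the view angle."""
    a = np.empty(len(theta_v))
    b = np.empty(len(theta_v))
    for k in range(len(theta_v)):
        A = np.column_stack([R_low[k], R_high[k]])
        coef = np.linalg.lstsq(A, R_obs[k], rcond=None)[0]
        a[k], b[k] = coef
    return np.polyfit(theta_v, a, 3), np.polyfit(theta_v, b, 3)

def predict_reflectance(theta_v, R_low, R_high, pa, pb):
    # reflectance at an arbitrary viewing zenith angle from the two anchor reflectances
    return np.polyval(pa, theta_v) * R_low + np.polyval(pb, theta_v) * R_high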
A three dimensional point cloud registration method based on rotation matrix eigenvalue
NASA Astrophysics Data System (ADS)
Wang, Chao; Zhou, Xiang; Fei, Zixuan; Gao, Xiaofei; Jin, Rui
2017-09-01
In traditional optical three-dimensional measurement, an object usually must be measured from multiple angles because of occlusion, and point cloud registration methods are then used to obtain its complete three-dimensional shape. Point cloud registration based on a turntable essentially requires calculating the coordinate transformation matrix between the camera coordinate system and the turntable coordinate system. The traditional approach calculates this transformation matrix by fitting the rotation center and the rotation axis direction of the turntable, which is limited by the measurement field of view: the exact feature points used for fitting the rotation center and rotation axis are distributed over an arc of less than about 120 degrees, resulting in low fitting accuracy. In this paper, we propose a better method based on the principle that the eigenvalues of the rotation matrix are invariant in the turntable coordinate system, together with the coordinate transformation matrix of the corresponding points. First, we control the rotation angle of a calibration plate with the turntable and calibrate the coordinate transformation matrix of the corresponding points using the least-squares method. Then we use the eigendecomposition to calculate the coordinate transformation matrix between the camera coordinate system and the turntable coordinate system. Compared with the traditional method, the proposed approach has higher accuracy and better robustness and is not affected by the camera field of view. With this method, the coincidence error of the corresponding points on the calibration plate after registration is less than 0.1 mm.
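The linear-algebra fact behind the "invariant eigenvalue" idea, namely that a proper 3×3 rotation matrix always has an eigenvalue equal to 1 whose eigenvector is the rotation axis, can be illustrated with a generic sketch (not the authors' calibration code):

import numpy as np

def rotation_axis_and_angle(R):
    """Extract the rotation axis and angle from a 3x3 rotation matrix.
    The axis is the eigenvector for the eigenvalue closest to 1;
    the angle follows from trace(R) = 1 + 2*cos(angle)."""
    w, v = np.linalg.eig(R)
    k = np.argmin(np.abs(w - 1.0))
    axis = np.real(v[:, k])
    axis /= np.linalg.norm(axis)
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    return axis, angle

# Example: a 30-degree rotation about the z axis
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(rotation_axis_and_angle(Rz))   # axis ~ [0, 0, 1], angle ~ 0.524 rad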
A statistical approach for generating synthetic tip stress data from limited CPT soundings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basalams, M.K.
CPT tip stress data obtained from a Uranium mill tailings impoundment are treated as time series. A statistical class of models that was developed to model time series is explored to investigate its applicability in modeling the tip stress series. These models were developed by Box and Jenkins (1970) and are known as Autoregressive Moving Average (ARMA) models. This research demonstrates how to apply the ARMA models to tip stress series. Generation of synthetic tip stress series that preserve the main statistical characteristics of the measured series is also investigated. Multiple regression analysis is used to model the regional variation of the ARMA model parameters as well as the regional variation of the mean and the standard deviation of the measured tip stress series. The reliability of the generated series is investigated from a geotechnical point of view as well as from a statistical point of view. Estimation of the total settlement using the measured and the generated series subjected to the same loading condition is performed. The variation of friction angle with depth of the impoundment materials is also investigated. This research shows that these series can be modeled by the Box and Jenkins ARMA models. A third degree Autoregressive model AR(3) is selected to represent these series. A theoretical double exponential density function is fitted to the AR(3) model residuals. Synthetic tip stress series are generated at nearby locations. The generated series are shown to be reliable in estimating the total settlement and the friction angle variation with depth for this particular site.
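For readers unfamiliar with the model class, here is a bare-bones sketch of generating a synthetic series from an AR(3) model with double-exponential (Laplace) residuals; the coefficients, residual scale, and the back-transformation to tip-stress units via the regional mean and standard deviation are placeholders, not values from this work:

import numpy as np

def generate_ar3(phi, n, residual_sampler, burn_in=100):
    """Simulate z_t = phi[0]*z_{t-1} + phi[1]*z_{t-2} + phi[2]*z_{t-3} + e_t,
    where each e_t is drawn by residual_sampler()."""
    z = np.zeros(n + burn_in)
    for t in range(3, n + burn_in):
        z[t] = phi[0]*z[t-1] + phi[1]*z[t-2] + phi[2]*z[t-3] + residual_sampler()
    return z[burn_in:]                     # discard the start-up transient

rng = np.random.default_rng(0)
series = generate_ar3([0.6, 0.2, 0.1], 500, lambda: rng.laplace(0.0, 1.0))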
Cooperative angle-only orbit initialization via fusion of admissible areas
NASA Astrophysics Data System (ADS)
Jia, Bin; Pham, Khanh; Blasch, Erik; Chen, Genshe; Shen, Dan; Wang, Zhonghai
2017-05-01
For the short-arc angle only orbit initialization problem, the admissible area is often used. However, the accuracy using a single sensor is often limited. For high value space objects, it is desired to achieve more accurate results. Fortunately, multiple sensors, which are dedicated to space situational awareness, are available. The work in this paper uses multiple sensors' information to cooperatively initialize the orbit based on the fusion of multiple admissible areas. Both the centralized fusion and decentralized fusion are discussed. Simulation results verify the expectation that the orbit initialization accuracy is improved by using information from multiple sensors.
2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...
2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING WEST TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA
Prediction of Viking lander camera image quality
NASA Technical Reports Server (NTRS)
Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.
1976-01-01
Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.
2017-09-12
NASA's Cassini spacecraft gazed toward the northern hemisphere of Saturn to spy subtle, multi-hued bands in the clouds there. This view looks toward the terminator -- the dividing line between night and day -- at lower left. The sun shines at low angles along this boundary, in places highlighting vertical structure in the clouds. Some vertical relief is apparent in this view, with higher clouds casting shadows over those at lower altitude. Images taken with the Cassini spacecraft narrow-angle camera using red, green and blue spectral filters were combined to create this natural-color view. The images were acquired on Aug. 31, 2017, at a distance of approximately 700,000 miles (1.1 million kilometers) from Saturn. Image scale is about 4 miles (6 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21888
Alam, Md Ashraful; Piao, Mei-Lan; Bang, Le Thanh; Kim, Nam
2013-10-01
Viewing-zone control of integral imaging (II) displays using a directional projection and elemental image (EI) resizing method is proposed. Directional projection of EIs with the same size of microlens pitch causes an EI mismatch at the EI plane. In this method, EIs are generated computationally using a newly introduced algorithm: the directional elemental image generation and resizing algorithm considering the directional projection geometry of each pixel as well as an EI resizing method to prevent the EI mismatch. Generated EIs are projected as a collimated projection beam with a predefined directional angle, either horizontally or vertically. The proposed II display system allows reconstruction of a 3D image within a predefined viewing zone that is determined by the directional projection angle.
Rotary acceleration of a subject inhibits choice reaction time to motion in peripheral vision
NASA Technical Reports Server (NTRS)
Borkenhagen, J. M.
1974-01-01
Twelve pilots were tested in a rotation device with visual simulation, alone and in combination with rotary stimulation, in experiments with variable levels of acceleration and variable viewing angles, in a study of the effect of S's rotary acceleration on the choice reaction time for an accelerating target in peripheral vision. The pilots responded to the direction of the visual motion by moving a hand controller to the right or left. Visual-plus-rotary stimulation required a longer choice reaction time, which was inversely related to the level of acceleration and directly proportional to the viewing angle.
Effects of soil and canopy characteristics on microwave backscattering of vegetation
NASA Technical Reports Server (NTRS)
Daughtry, C. S. T.; Ranson, K. J.
1991-01-01
A frequency modulated continuous wave C-band (4.8 GHz) scatterometer was mounted on an aerial lift truck and backscatter coefficients of corn were acquired as functions of polarizations, view angles, and row directions. As phytomass and green leaf area index increased, the backscatter also increased. Near anthesis when the canopies were fully developed, the major scattering elements were located in the upper 1 m of the 2.8 m tall canopy and little backscatter was measured below that level. C-band backscatter data could provide information to monitor vegetation at large view zenith angles.
16. SOUTH TO VIEW OF CIRCA 1900 MICHIGAN MACHINERY MFG. ...
16. SOUTH TO VIEW OF CIRCA 1900 MICHIGAN MACHINERY MFG. CO. PUNCH PRESS WITH WOOD-BURNING HEATING STOVE LOCATED IN THE CENTER OF THE FACTORY BUILDING. BESIDE THE HEATING STOVE, POINTING TOWARD THE PUNCH PRESS, IS A JIG USED TO POSITION ANGLE STEEL COMPONENTS OF STEEL WINDMILL TOWER LEGS FOR PUNCHING BOLT HOLES. THE SUPPORT FOR THE BRICK FLUE OF THE HEATING STOVE IS CONSTRUCTED FROM SALVAGED GALVANIZED ANGLE STEEL OF THE TYPE USED IN FABRICATING WINDMILL TOWERS MANUFACTURED IN THE FACTORY. - Kregel Windmill Company Factory, 1416 Central Avenue, Nebraska City, Otoe County, NE
A Summer View of Russia's Lena Delta and Olenek
NASA Technical Reports Server (NTRS)
2004-01-01
These views of the Russian Arctic were acquired by NASA's Multi-angle Imaging SpectroRadiometer (MISR) instrument on July 11, 2004, when the brief arctic summer had transformed the frozen tundra and the thousands of lakes, channels, and rivers of the Lena Delta into a fertile wetland, and when the usual blanket of thick snow had melted from the vast plains and taiga forests. This set of three images covers an area in the northern part of the Eastern Siberian Sakha Republic. The Olenek River wends northeast from the bottom of the images to the upper left, and the top portions of the images are dominated by the delta into which the mighty Lena River empties when it reaches the Laptev Sea. At left is a natural color image from MISR's nadir (vertical-viewing) camera, in which the rivers appear murky due to the presence of sediment, and photosynthetically-active vegetation appears green. The center image is also from MISR's nadir camera, but is a false color view in which the predominant red color is due to the brightness of vegetation at near-infrared wavelengths. The most photosynthetically active parts of this area are the Lena Delta, in the lower half of the image, and throughout the great stretch of land that curves across the Olenek River and extends northeast beyond the relatively barren ranges of the Volyoi mountains (the pale tan-colored area to the right of image center). The right-hand image is a multi-angle false-color view made from the red band data of the 60° backward, nadir, and 60° forward cameras, displayed as red, green and blue, respectively. Water appears blue in this image because sun glitter makes smooth, wet surfaces look brighter at the forward camera's view angle. Much of the landscape and many low clouds appear purple since these surfaces are both forward and backward scattering, and clouds that are further from the surface appear in a different spot for each view angle, creating a rainbow-like appearance. However, the vegetated region that is darker green in the natural color nadir image also appears to exhibit a faint greenish hue in the multi-angle composite. A possible explanation for this subtle green effect is that the taiga forest trees (or dwarf-shrubs) are not too dense here. Since the nadir camera is more likely to observe any gaps between the trees or shrubs, and since the vegetation is not as bright (in the red band) as the underlying soil or surface, the brighter underlying surface results in an area that is relatively brighter at the nadir view angle. Accurate maps of vegetation structural units are an essential part of understanding the seasonal exchanges of energy and water at the Earth's surface, and of preserving the biodiversity in these regions. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82° north and 82° south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 24273. The panels cover an area of about 230 kilometers x 420 kilometers, and utilize data from blocks 30 to 34 within World Reference System-2 path 134. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Impact Angle and Time Control Guidance Under Field-of-View Constraints and Maneuver Limits
NASA Astrophysics Data System (ADS)
Shim, Sang-Wook; Hong, Seong-Min; Moon, Gun-Hee; Tahk, Min-Jea
2018-04-01
This paper proposes a guidance law which considers the constraints of seeker field-of-view (FOV) as well as the requirements on impact angle and time. The proposed guidance law is designed for a constant speed missile against a stationary target. The guidance law consists of two terms of acceleration commands. The first one is to achieve zero-miss distance and the desired impact angle, while the second is to meet the desired impact time. To consider the limits of FOV and lateral maneuver capability, a varying-gain approach is applied on the second term. Reduction of realizable impact times due to these limits is then analyzed by finding the longest course among the feasible ones. The performance of the proposed guidance law is demonstrated by numerical simulation for various engagement conditions.
Analysis of the restricting factors of laser countermeasure active detection technology
NASA Astrophysics Data System (ADS)
Zhang, Yufa; Sun, Xiaoquan
2016-07-01
The detection performance of a laser active detection system is affected by various factors. In view of the application requirements of laser active detection, the factors influencing it are analyzed. A mathematical model of the cat-eye target detection distance is built, and the influence of the laser detection system parameters and the environment on the detection range and detection efficiency is analyzed. The way various parameters constrain detection performance is simulated. The results show that the detection distance of laser active detection is affected by the laser divergence angle, the incident angle and the atmospheric visibility. For a given detection range, the laser divergence angle and the detection efficiency are mutually constrained. Therefore, for a specific application environment, it is necessary to select appropriate laser detection parameters to achieve the optimal detection effect.
Prospects for altimetry and scatterometry in the 90's. [satellite oceanography
NASA Technical Reports Server (NTRS)
Townsend, W. F.
1985-01-01
Current NASA plans for altimetry and scatterometry of the oceans using spaceborne instrumentation are outlined. The data of interest covers geostrophic and wind-driven circulation, heat content, the horizontal heat flux of the ocean, and the interactions between atmosphere and ocean and ocean and climate. A proposed TOPEX satellite is to be launched in 1991, carrying a radar altimeter to measure the ocean surface topography. Employing dual-wavelength operation would furnish ionospheric correction data. Multibeam instruments could also be flown on the multiple-instrument polar orbiting platforms comprising the Earth Observation System. A microwave radar scatterometer, which functions on the basis of Bragg scattering of microwave energy off of wavelets, would operate at various view angles and furnish wind speeds accurate to 1.5 m/sec and directions accurate to 20 deg.
Southern California Wildfires Observed by NASA MISR
2016-06-24
The Los Angeles area is currently suffering the effects of three major wildfires that are blanketing the area with smoke. Over the past few days, Southern California has experienced record-breaking temperatures, topping 110 degrees Fahrenheit in some cities. The heat, in combination with offshore winds, helped to stoke the Sherpa Fire west of Santa Barbara, which has been burning since June 15, 2016. Over the weekend of June 18-19, this fire rapidly expanded in size, forcing freeway closures and evacuations of campgrounds and state beaches. On Monday, June 20, two new fires ignited in the San Gabriel Mountains north of Azusa and Duarte, together dubbed the San Gabriel Complex Fire. They have burned more than 4,900 acres since June 20, sending up plumes of smoke visible to many in the Los Angeles basin and triggering air quality warnings. More than 1,400 personnel have been battling the blazes in the scorching heat, and evacuations were ordered for neighborhoods in the foothills. On June 21, the Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra satellite captured this view of the San Gabriel Mountains and Los Angeles Basin from its 46-degree forward-viewing camera, which enhances the visibility of the smoke compared to the more conventional nadir (vertical) view. The width of this image is about 75 miles (120 kilometers) across. Smoke from the San Gabriel Complex Fire is visible at the very right of the image. Stereoscopic analysis of MISR's multiple camera angles is used to compute the height of the smoke plume from the San Gabriel Complex Fire. In the right-hand image, these heights are superimposed on the underlying image. The color scale shows that the plume is not much higher than the surrounding mountains. As a result, much of the smoke is confined to the local area. http://photojournal.jpl.nasa.gov/catalog/PIA20718
Zhang, Zhengyan; Zhang, Jianyun; Zhou, Qingsong; Li, Xiaobo
2018-01-01
In this paper, we consider the problem of tracking the direction of arrival (DOA) and the direction of departure (DOD) of multiple targets for bistatic multiple-input multiple-output (MIMO) radar. A high-precision target-angle tracking algorithm is proposed. First, the linear relationship between the covariance-matrix difference and the angle difference at adjacent time instants was obtained through three approximate relations. Then, the proposed algorithm obtained the relationship between the elements of the covariance-matrix difference. On this basis, the performance of the algorithm was improved by averaging the covariance-matrix elements. Finally, the least-squares method was used to estimate the DOD and DOA. The algorithm realizes automatic association of the angles and provides better performance than the adaptive asymmetric joint diagonalization (AAJD) algorithm. The simulation results demonstrated the effectiveness of the proposed algorithm, which provides technical support for the practical application of MIMO radar. PMID:29518957
New ways in creating pixelgram images
NASA Astrophysics Data System (ADS)
Malureanu, Radu; Di Fabrizio, Enzo
2006-09-01
Since diffraction gratings were invented, their use in various security systems has been exploited. Their big advantage is the low production cost and, at the same time, the difficulty of replicating them. Most present-day security systems use such gratings to prove authenticity; they can be seen on CDs, DVDs, most major credit cards and even on wine bottles. In this article we present a new way of making such gratings without changing the production steps while generating an item that is even more difficult to replicate. This new approach consists not only in changing the grating period so that various false colours can be seen, but also in changing the grating orientation, so that a complete check of the grating requires viewing it under a certain solid angle. At the same time, one can still vary the grating period so that different colours appear at each viewing angle. By combining these two techniques (changing the period and changing the orientation) one can indeed create different images for each view angle and thus increase the security of the object. From the fabrication point of view, no further complications appear: the production steps are identical, the only difference being the pattern. The resolution of the grating is not necessarily increased, so no complications arise from this point of view either.
Ren, Huazhong; Yan, Guangjian; Liu, Rongyuan; Li, Zhao-Liang; Qin, Qiming; Nerry, Françoise; Liu, Qiang
2015-03-27
Multi-angular observation of land surface thermal radiation is considered to be a promising method of performing the angular normalization of land surface temperature (LST) retrieved from remote sensing data. This paper focuses on an investigation of the minimum requirements of viewing angles to perform such normalizations on LST. The normally kernel-driven bi-directional reflectance distribution function (BRDF) is first extended to the thermal infrared (TIR) domain as TIR-BRDF model, and its uncertainty is shown to be less than 0.3 K when used to fit the hemispheric directional thermal radiation. A local optimum three-angle combination is found and verified using the TIR-BRDF model based on two patterns: the single-point pattern and the linear-array pattern. The TIR-BRDF is applied to an airborne multi-angular dataset to retrieve LST at nadir (Te-nadir) from different viewing directions, and the results show that this model can obtain reliable Te-nadir from 3 to 4 directional observations with large angle intervals, thus corresponding to large temperature angular variations. The Te-nadir is generally larger than temperature of the slant direction, with a difference of approximately 0.5~2.0 K for vegetated pixels and up to several Kelvins for non-vegetated pixels. The findings of this paper will facilitate the future development of multi-angular thermal infrared sensors.
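A schematic of the kernel-driven angular normalization described above: fit the directional brightness temperature as an isotropic term plus volumetric and geometric kernel terms from three or more viewing directions, then evaluate the fit at nadir. The kernels are passed in as callables because the paper's specific TIR kernels are not reproduced here; all names are illustrative:

import numpy as np

def fit_kernel_driven(T_obs, geoms, k_vol, k_geo, nadir_geom):
    """Least-squares fit of T ~ f_iso + f_vol*K_vol(geom) + f_geo*K_geo(geom)
    from directional observations, then evaluation at the nadir geometry."""
    A = np.column_stack([np.ones(len(geoms)),
                         [k_vol(g) for g in geoms],
                         [k_geo(g) for g in geoms]])
    f = np.linalg.lstsq(A, np.asarray(T_obs), rcond=None)[0]
    T_nadir = f[0] + f[1] * k_vol(nadir_geom) + f[2] * k_geo(nadir_geom)
    return f, T_nadir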
General view of the flight deck of the Orbiter Discovery ...
General view of the flight deck of the Orbiter Discovery looking from a low angle up and aft from approximately behind the commander's station. In the view you can see the overhead aft observation windows, the payload operations work area and in this view the payload bay observation windows have protective covers on them. This view was taken at Kennedy Space Center. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
NASA Astrophysics Data System (ADS)
Rengarajan, Rajagopalan
Moderate resolution remote sensing data offers the potential to monitor the long and short term trends in the condition of the Earth's resources at finer spatial scales and over longer time periods. While improved calibration (radiometric and geometric), free access (Landsat, Sentinel, CBERS), and higher level products in reflectance units have made it easier for the science community to derive the biophysical parameters from these remotely sensed data, a number of issues still affect the analysis of multi-temporal datasets. These are primarily due to sources that are inherent in the process of imaging from single or multiple sensors. Some of these undesired or uncompensated sources of variation include variation in the view angles, illumination angles, atmospheric effects, and sensor effects such as Relative Spectral Response (RSR) variation between different sensors. The complex interaction of these sources of variation would make their study extremely difficult if not impossible with real data, and therefore a simulated analysis approach is used in this study. A synthetic forest canopy is produced using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model and its measured BRDFs are modeled using the RossLi canopy BRDF model. The simulated BRDF matches the real data to within 2% of the reflectance in the red and the NIR spectral bands studied. The BRDF modeling process is extended to model and characterize the defoliation of a forest, which is used in factor sensitivity studies to estimate the effect of each factor for varying environmental and sensor conditions. Finally, a factorial experiment is designed to understand the significance of the sources of variation, and regression-based analyses are performed to understand the relative importance of the factors. The design of experiments and the sensitivity analysis conclude that the atmospheric attenuation and variations due to the illumination angles are the dominant sources impacting the at-sensor radiance.
MISR at 15: Multiple Perspectives on Our Changing Earth
NASA Astrophysics Data System (ADS)
Diner, D. J.; Ackerman, T. P.; Braverman, A. J.; Bruegge, C. J.; Chopping, M. J.; Clothiaux, E. E.; Davies, R.; Di Girolamo, L.; Garay, M. J.; Jovanovic, V. M.; Kahn, R. A.; Kalashnikova, O.; Knyazikhin, Y.; Liu, Y.; Marchand, R.; Martonchik, J. V.; Muller, J. P.; Nolin, A. W.; Pinty, B.; Verstraete, M. M.; Wu, D. L.
2014-12-01
Launched aboard NASA's Terra satellite in December 1999, the Multi-angle Imaging SpectroRadiometer (MISR) instrument has opened new vistas in remote sensing of our home planet. Its 9 pushbroom cameras provide as many view angles ranging from 70 degrees forward to 70 degrees backward along Terra's flight track, in four visible and near-infrared spectral bands. MISR's well-calibrated, accurately co-registered, and moderately high spatial resolution radiance images have been coupled with novel data processing algorithms to mine the information content of angular reflectance anisotropy and multi-camera stereophotogrammetry, enabling new perspectives on the 3-D structure and dynamics of Earth's atmosphere and surface in support of climate and environmental research. Beginning with "first light" in February 2000, the nearly 15-year (and counting) MISR observational record provides an unprecedented data set with applications to multiple disciplines, documenting regional, global, short-term, and long-term changes in aerosol optical depths, aerosol type, near-surface particulate pollution, spectral top-of-atmosphere and surface albedos, aerosol plume-top and cloud-top heights, height-resolved cloud fractions, atmospheric motion vectors, and the structure of vegetated and ice-covered terrains. Recent computational advances include aerosol retrievals at finer spatial resolution than previously possible, and production of near-real time tropospheric winds with a latency of less than 3 hours, making possible for the first time the assimilation of MISR data into weather forecast models. In addition, recent algorithmic and technological developments provide the means of using and acquiring multi-angular data in new ways, such as the application of optical tomography to map 3-D atmospheric structure; building smaller multi-angle instruments in the future; and extending the multi-angular imaging methodology to the ultraviolet, shortwave infrared, and polarimetric realms. Such advances promise further enhancements to the observational power of the remote sensing approaches that MISR has pioneered.
Monte Carlo calculation of large and small-angle electron scattering in air
Cohen, B. I.; Higginson, D. P.; Eng, C. D.; ...
2017-08-12
A Monte Carlo method for angle scattering of electrons in air that accommodates the small-angle multiple scattering and larger-angle single scattering limits is introduced. In this work, the algorithm is designed for use in a particle-in-cell simulation of electron transport and electromagnetic wave effects in air. The method is illustrated in example calculations.
Jung, Kyunghwa; Choi, Hyunseok; Hong, Hanpyo; Adikrishna, Arnold; Jeon, In-Ho; Hong, Jaesung
2017-02-01
A hands-free region-of-interest (ROI) selection interface is proposed for solo surgery using a wide-angle endoscope. A wide-angle endoscope provides images with a larger field of view than a conventional endoscope. With an appropriate selection interface for a ROI, surgeons can also obtain a detailed local view as if they had moved a conventional endoscope to a specific position and direction. To manipulate the endoscope without releasing the surgical instrument in hand, a mini-camera is attached to the instrument, and the images taken by the attached camera are analyzed. When a surgeon moves the instrument, the instrument orientation is calculated by image processing. Surgeons can select the ROI with this instrument movement after switching from 'task mode' to 'selection mode.' The accelerated KAZE (AKAZE) algorithm is used to track the features of the camera images once the instrument is moved. Both the wide-angle and detailed local views are displayed simultaneously, and a surgeon can move the local view area by moving the mini-camera attached to the surgical instrument. Local view selection for a solo surgery was performed without releasing the instrument. The accuracy of camera pose estimation was not significantly different between camera resolutions, but it was significantly different between background camera images with different numbers of features (P < 0.01). The success rate of ROI selection diminished as the number of separated regions increased. However, separated regions up to 12 with a region size of 160 × 160 pixels were selected with no failure. Surgical tasks on a phantom model and a cadaver were attempted to verify the feasibility in a clinical environment. Hands-free endoscope manipulation without releasing the instruments in hand was achieved. The proposed method requires only a small, low-cost camera and image processing. The technique enables surgeons to perform solo surgeries without a camera assistant.
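A minimal OpenCV sketch of the feature-tracking step (AKAZE detection and Hamming-distance matching between consecutive mini-camera frames); the homography output and the way it would drive the ROI selection are assumptions for illustration, not the authors' implementation:

import cv2
import numpy as np

def estimate_camera_motion(prev_gray, curr_gray):
    """Match AKAZE features between two grayscale frames and return a
    homography describing the apparent image motion of the mini-camera."""
    akaze = cv2.AKAZE_create()
    kp1, des1 = akaze.detectAndCompute(prev_gray, None)
    kp2, des2 = akaze.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H   # e.g., its translation part could drive the ROI selection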
Optical Polarization of Light from a Sorghum Canopy Measured Under Both a Clear and an Overcast Sky
NASA Technical Reports Server (NTRS)
Vanderbilt, Vern; Daughtry, Craig; Biehl, Larry; Dahlgren, Robert
2014-01-01
Introduction: We tested the hypothesis that the optical polarization of the light reflected by a sorghum canopy is due to a Fresnel-type redirection, by sorghum leaf surfaces, of light from an unpolarized light source, the sun or overcast sky, toward the measuring sensor. If it can be shown that the source of the polarization of the light scattered by the sorghum canopy is a first surface, Fresnel-type reflection, then removing this surface reflected light from measurements of canopy reflectance presumably would allow better insight into the biochemical processes such as photosynthesis and metabolism that occur in the interiors of sorghum canopy leaves. Methods: We constructed a tower 5.9m tall in the center of a homogenous sorghum field. We equipped two Barnes MMR radiometers with polarization analyzers on the number 1, 3 and 7 Landsat TM wavelength bands. Positioning the radiometers atop the tower, we collected radiance data in 44 view directions on two days, one day with an overcast sky and the other, clear and sunlit. From the radiance data we calculated the linear polarization of the reflected light for each radiometer wavelength channel and view direction. Results and Discussion: Our experimental results support our hypothesis, showing that the amplitude of the linearly polarized portion of the light reflected by the sorghum canopy varied dramatically with view azimuth direction under a point source, the sun, but the amplitude varied little with view azimuth direction under the hemispherical source, the overcast sky. Under the clear sky, the angle of polarization depended upon the angle of incidence of the sunlight on the leaf, while under the overcast sky the angle of polarization depended upon the zenith view angle. These results support a polarized radiation transport model of the canopy that is based upon a first surface, Fresnel reflection from leaves in the sorghum canopy.
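One standard reduction from analyzer radiances to linear-polarization quantities goes through the linear Stokes parameters; the four analyzer orientations assumed below are for illustration and need not match the orientations actually used with the Barnes MMR radiometers:

import numpy as np

def linear_polarization(I0, I45, I90, I135):
    """Degree and angle of linear polarization from radiances measured
    through an analyzer at 0, 45, 90 and 135 degrees."""
    I = 0.5 * (I0 + I45 + I90 + I135)   # total radiance (Stokes I)
    Q = I0 - I90                        # Stokes Q
    U = I45 - I135                      # Stokes U
    dolp = np.sqrt(Q**2 + U**2) / I
    aop = 0.5 * np.degrees(np.arctan2(U, Q))   # angle of polarization, degrees
    return dolp, aop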
126. AERIAL FORWARD VIEW OF ENCLOSED HURRICANE BOW WITH FLIGHT ...
126. AERIAL FORWARD VIEW OF ENCLOSED HURRICANE BOW WITH FLIGHT DECK GUN MOUNTS REMOVED AND ANGLED FLIGHT DECK. 1 OCTOBER 1956. (NATIONAL ARCHIVES NO. 80-G-1001445) - U.S.S. HORNET, Puget Sound Naval Shipyard, Sinclair Inlet, Bremerton, Kitsap County, WA
10. View northwest Typical panel detail (south chord) of variable ...
10. View northwest Typical panel detail (south chord) of variable section girder showing riveted connections, angle stiffeners for girder web, and nuts securing wind bracing rods. - Walpole-Westminster Bridge, Spanning Connecticut River between Walpole, NH & Westminster, VT, Walpole, Cheshire County, NH
A rocket-borne energy spectrometer using multiple solid-state detectors for particle identification
NASA Technical Reports Server (NTRS)
Fries, K. L.; Smith, L. G.; Voss, H. D.
1979-01-01
A rocket-borne experiment using energy spectrometers that allows particle identification by the use of multiple solid-state detectors is described. The instrumentation provides information regarding the energy spectrum, pitch-angle distribution, and the type of energetic particles present in the ionosphere. Particle identification was accomplished by considering detector loss mechanisms and their effects on various types of particles. Solid state detectors with gold and aluminum surfaces of several thicknesses were used. The ratios of measured energies for the various detectors were compared against known relationships during ground-based analysis. Pitch-angle information was obtained by using detectors with small geometrical factors mounted at several look angles. Particle flux was recorded as a function of rocket azimuth angle. By considering the rocket azimuth, the rocket precession, and the location of the detectors on the rocket, the pitch angle of the incident particles was derived.
Optic for industrial endoscope/borescope with narrow field of view and low distortion
Stone, Gary F.; Trebes, James E.
2005-08-16
An optic for the imaging optics on the distal end of a flexible fiberoptic endoscope or rigid borescope inspection tool. The image coverage is over a narrow (<20 degree) field of view with very low optical distortion (<5% pincushion or barrel distortion), compared to the typical <20% distortion. The optic permits non-contact surface roughness measurements using optical techniques. It also permits simultaneous collection of selected image-plane data, which can then be optically processed. The image analysis yields non-contact surface topology data for inspection where access to the surface does not permit verification of surface topology with a mechanical stylus profilometer. The optic allows a very broad spectral band or range of optical inspection and is capable of spectroscopic imaging and fluorescence-induced imaging when a scanning illumination source is used. The total viewing angle for this optic is a 10 degree full field of view, compared to the 40-70 degree full-angle field of view of conventional gradient index (GRIN) lens systems.
2016-11-21
Surface features are visible on Saturn's moon Prometheus in this view from NASA's Cassini spacecraft. Most of Cassini's images of Prometheus are too distant to resolve individual craters, making views like this a rare treat. Saturn's narrow F ring, which makes a diagonal line beginning at top center, appears bright and bold in some Cassini views, but not here. Since the sun is nearly behind Cassini in this image, most of the light hitting the F ring is being scattered away from the camera, making it appear dim. Light-scattering behavior like this is typical of rings comprised of small particles, such as the F ring. This view looks toward the unilluminated side of the rings from about 14 degrees below the ring plane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on Sept. 24, 2016. The view was acquired at a distance of approximately 226,000 miles (364,000 kilometers) from Prometheus and at a sun-Prometheus-spacecraft, or phase, angle of 51 degrees. Image scale is 1.2 miles (2 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20508
2017-11-27
These two images illustrate just how far Cassini traveled to get to Saturn. On the left is one of the earliest images Cassini took of the ringed planet, captured during the long voyage from the inner solar system. On the right is one of Cassini's final images of Saturn, showing the site where the spacecraft would enter the atmosphere on the following day. In the left image, taken in 2001, about six months after the spacecraft passed Jupiter for a gravity assist flyby, the best view of Saturn using the spacecraft's high-resolution (narrow-angle) camera was on the order of what could be seen using the Earth-orbiting Hubble Space Telescope. At the end of the mission (at right), from close to Saturn, even the lower resolution (wide-angle) camera could capture just a tiny part of the planet. The left image looks toward Saturn from 20 degrees below the ring plane and was taken on July 13, 2001 in wavelengths of infrared light centered at 727 nanometers using the Cassini spacecraft narrow-angle camera. The view at right is centered on a point 6 degrees north of the equator and was taken in visible light using the wide-angle camera on Sept. 14, 2017. The view on the left was acquired at a distance of approximately 317 million miles (510 million kilometers) from Saturn. Image scale is about 1,900 miles (3,100 kilometers) per pixel. The view at right was acquired at a distance of approximately 360,000 miles (579,000 kilometers) from Saturn. Image scale is 22 miles (35 kilometers) per pixel. The Cassini spacecraft ended its mission on Sept. 15, 2017. https://photojournal.jpl.nasa.gov/catalog/PIA21353
NASA Astrophysics Data System (ADS)
Koskelo, Elise Anne C.; Flynn, Eric B.
2017-02-01
Inspection of and around joints, beams, and other three-dimensional structures is integral to practical nondestructive evaluation of large structures. Non-contact, scanning laser ultrasound techniques offer an automated means of physically accessing these regions. However, to realize the benefits of laser-scanning techniques, simultaneous inspection of multiple surfaces at different orientations to the scanner must not significantly degrade the signal level nor diminish the ability to distinguish defects from healthy geometric features. In this study, we evaluated the implementation of acoustic wavenumber spectroscopy for inspecting metal joints and crossbeams from interior angles. With this technique, we used a single-tone, steady-state, ultrasonic excitation to excite the joints via a single transducer attached to one surface. We then measured the full-field velocity responses using a scanning Laser Doppler vibrometer and produced maps of local wavenumber estimates. With the high signal level associated with steady-state excitation, scans could be performed at surface orientations of up to 45 degrees. We applied camera perspective projection transformations to remove the distortion in the scans due to a known projection angle, leading to a significant improvement in the local estimates of wavenumber. Projection leads to asymmetrical distortion in the wavenumber in one direction, making it possible to estimate view angle even when neither it nor the nominal wavenumber is known. Since plate thinning produces a purely symmetric increase in wavenumber, it is also possible to independently estimate the degree of hidden corrosion. With a two-surface joint, using the wavenumber estimate maps, we were able to automatically calculate the orthographic projection component of each angled surface in the scan area.
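A back-of-the-envelope version of the view-angle argument, under the simplifying assumption that the true wavefield is isotropic: foreshortening along the tilt direction inflates the apparent wavenumber in that direction by roughly 1/cos(view angle) while leaving the transverse direction unchanged, so both the nominal wavenumber and the angle can be recovered (a sketch of that reasoning, not the paper's estimator):

import numpy as np

def view_angle_from_wavenumbers(k_parallel, k_perp):
    """Estimate surface tilt from apparent wavenumbers measured along and
    across the tilt direction, assuming an isotropic nominal wavenumber:
    k_parallel ~ k0 / cos(theta), k_perp ~ k0."""
    k0 = k_perp
    theta = np.degrees(np.arccos(np.clip(k_perp / k_parallel, -1.0, 1.0)))
    return k0, theta

print(view_angle_from_wavenumbers(1.414, 1.0))   # ~ (1.0, 45 degrees)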
The utility of multiple synthesized views in the recognition of unfamiliar faces.
Jones, Scott P; Dwyer, Dominic M; Lewis, Michael B
2017-05-01
The ability to recognize an unfamiliar individual on the basis of prior exposure to a photograph is notoriously poor and prone to errors, but recognition accuracy is improved when multiple photographs are available. In applied situations, when only limited real images are available (e.g., from a mugshot or CCTV image), the generation of new images might provide a technological prosthesis for otherwise fallible human recognition. We report two experiments examining the effects of providing computer-generated additional views of a target face. In Experiment 1, provision of computer-generated views supported better target face recognition than exposure to the target image alone and equivalent performance to that for exposure of multiple photograph views. Experiment 2 replicated the advantage of providing generated views, but also indicated an advantage for multiple viewings of the single target photograph. These results strengthen the claim that identifying a target face can be improved by providing multiple synthesized views based on a single target image. In addition, our results suggest that the degree of advantage provided by synthesized views may be affected by the quality of synthesized material.
Ash from Kilauea Eruption Viewed by NASA's MISR
2018-05-09
On May 3, 2018, a new eruption began at a fissure of the Kilauea volcano on the Island of Hawaii. Kilauea is the most active volcano in the world, having erupted almost continuously since 1983. Advancing lava and dangerous sulfur dioxide gas have forced thousands of residents in the neighborhood of Leilani Estates to evacuate. A number of homes have been destroyed, and no one can say how soon the eruption will abate and evacuees can return home. On May 6, 2018, at approximately 11 a.m. local time, the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite captured this view of the island as it passed overhead. Much of the island was shrouded by clouds, including the fissure on its eastern point. However, an eruption plume is visible streaming southwest over the ocean. The MISR instrument is unique in that it has nine cameras that view Earth at different angles: one pointing downward, four at various angles in the forward direction, and four in the backward direction. This image shows the view from one of MISR's forward-pointing cameras (60 degrees), which shows the plume more distinctly than the near-vertical views. The information from the images acquired at different view angles is used to calculate the height of the plume, results of which are superimposed on the right-hand image. The top of the plume near the fissure is at approximately 6,500 feet (2,000 meters) altitude, and the height of the plume decreases as it travels south and west. These relatively low altitudes mean that the ash and sulfur dioxide remained near the ground, which can cause health issues for people on the island downwind of the eruption. The "Ocean View" air quality monitor operated by the Clean Air Branch of the State of Hawaii Department of Health recorded a concentration of 18 μg/m3 of airborne particles less than 2.5 micrometers in diameter at 11 a.m. local time. This amount corresponds to an air quality rating of "moderate" and supports the MISR results indicating that ash was most likely present at ground level on this side of the island. These data were acquired during Terra orbit 97780. An annotated version is available at https://photojournal.jpl.nasa.gov/catalog/PIA22451
A Description of a Family of Heron Quadrilaterals
ERIC Educational Resources Information Center
Sastry, K. R. S.
2005-01-01
Mathematical historians place Heron in the first century. Right-angled triangles with integer sides and area had been determined before Heron, but he discovered such a "non" right-angled triangle, viz 13, 14, 15; 84. In view of this, triangles with integer sides and area are named "Heron triangles." The Indian mathematician Brahmagupta, born in…
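As a quick check of the example above, Heron's formula A = sqrt(s(s-a)(s-b)(s-c)), with s the semi-perimeter, reproduces the integer area 84 for the 13, 14, 15 triangle. The short sketch below is an illustrative aside, not part of the original article.

```python
import math

def heron_area(a, b, c):
    """Area of a triangle from its side lengths via Heron's formula."""
    s = (a + b + c) / 2          # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# The non-right-angled Heron triangle cited above: sides 13, 14, 15, area 84
print(heron_area(13, 14, 15))    # -> 84.0
```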
2017-08-11
NASA's Cassini spacecraft looks toward the night side of Saturn's moon Titan in a view that highlights the extended, hazy nature of the moon's atmosphere. During its long mission at Saturn, Cassini has frequently observed Titan at viewing angles like this, where the atmosphere is backlit by the Sun, in order to make visible the structure of the hazes. Titan's high-altitude haze layer appears blue here, whereas the main atmospheric haze is orange. The difference in color could be due to particle sizes in the haze. The blue haze likely consists of smaller particles than the orange haze. Images taken using red, green and blue spectral filters were combined to create this natural-color view. The image was taken with the Cassini spacecraft narrow-angle camera on May 29, 2017. The view was acquired at a distance of approximately 1.2 million miles (2 million kilometers) from Titan. Image scale is 5 miles (9 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21625
Dual-view-zone tabletop 3D display system based on integral imaging.
He, Min-Yang; Zhang, Han-Le; Deng, Huan; Li, Xiao-Wei; Li, Da-Hai; Wang, Qiong-Hua
2018-02-01
In this paper, we propose a dual-view-zone tabletop 3D display system based on integral imaging by using a multiplexed holographic optical element (MHOE) that has the optical properties of two sets of microlens arrays. The MHOE is recorded by a reference beam using the single-exposure method. The reference beam records the wavefronts of a microlens array from two different directions. Thus, when the display beam is projected on the MHOE, two wavefronts with the different directions will be rebuilt and the 3D virtual images can be reconstructed in two viewing zones. The MHOE has angle and wavelength selectivity. Under the conditions of the matched wavelength and the angle of the display beam, the diffraction efficiency of the MHOE is greatest. Because the unmatched light just passes through the MHOE, the MHOE has the advantage of a see-through display. The experimental results confirm the feasibility of the dual-view-zone tabletop 3D display system.
Wei, Yingying; An, Qinglong; Cai, Xiaojiang; Chen, Ming; Ming, Weiwei
2015-10-02
The purpose of this article is to investigate the fracture mechanism of carbon fibers, in both macroscopic and microscopic views, using the single-point flying cutting method. Cutting tools of three different materials were used in this research: a PCD (polycrystalline diamond) tool, a CVD (chemical vapor deposition) diamond thin-film-coated carbide tool, and an uncoated carbide tool. The influence of fiber orientation on cutting force and fracture topography was analyzed, and the results show that cutting forces are not affected by cutting speed but are significantly influenced by fiber orientation: forces were smallest at fiber orientations of 0/180° and 15/165° and largest at 30/150°. The fracture mechanism of the carbon fibers was studied under different cutting conditions, namely a 0° orientation angle, a 90° orientation angle, and orientation angles along and inverse to the fiber direction. In addition, a prediction model for cutting defects in carbon fiber reinforced plastic was established based on acoustic emission (AE) signals. PMID:28793597
Spinning angle optical calibration apparatus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, S.K.; Pratt, H.R.
1991-02-26
This patent describes an optical calibration apparatus provided for calibrating and reproducing spinning angles in cross-polarization, nuclear magnetic resonance spectroscopy. An illuminated magnifying apparatus enables optical setting and accurate reproducing of spinning magic angles in cross-polarization, nuclear magnetic resonance spectroscopy experiments. A reference mark scribed on an edge of a spinning angle test sample holder is illuminated by a light source and viewed through a magnifying scope. When the magic angle of a sample material used as a standard is attained by varying the angular position of the sample holder, the coordinate position of the reference mark relative to a graduation or graduations on a reticle in the magnifying scope is noted.
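For context, the "magic angle" referred to in the patent is the angle θ satisfying 3cos²θ - 1 = 0, i.e. θ = arccos(1/√3) ≈ 54.74°; the snippet below simply evaluates it (an illustrative aside, not part of the patent).

```python
import math

# Magic angle for magic-angle spinning NMR: the zero of the second
# Legendre polynomial P2(cos theta) = (3*cos(theta)**2 - 1) / 2
theta_magic = math.degrees(math.acos(1.0 / math.sqrt(3.0)))
print(f"magic angle = {theta_magic:.4f} degrees")   # ~54.7356
```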
NASA Astrophysics Data System (ADS)
Melnikova, I.; Mukai, S.; Vasilyev, A.
Data from remote measurements of reflected radiance with the POLDER instrument on board the ADEOS satellite are used to retrieve the optical thickness, single-scattering albedo, and phase-function parameter of the cloudy and clear atmosphere. A perceptron neural network takes multi-angle radiances and the solar incidence angle as input and yields the surface albedo, optical thickness, single-scattering albedo, and phase-function parameter in the clear-sky case; the last two parameters are determined as optical averages over the atmospheric column. Solar radiances were calculated with the MODTRAN-3 code, taking multiple scattering into account, to train the neural network, with all of the mentioned parameters varied randomly on the basis of statistical models of their possible variation. Results of processing one frame of remote observations consisting of 150,000 pixels are presented. The methodology allows operational determination of the optical characteristics of both cloudy and clear atmospheres, and further interpretation of these results makes it possible to extract information about the total content of atmospheric aerosols and absorbing gases and to create models of the real cloudiness. For cloudy pixels, an analytical method of interpretation based on asymptotic formulas of multiple-scattering theory is applied to the observed reflected radiance. Details of the methodology and an error analysis were published and discussed earlier; here we present results of data processing for pixels of size 6x6 km. In many earlier studies the optical thickness was evaluated under the assumption of conservative scattering, but when true absorption is present in clouds, large errors in the retrieved parameter are possible. The simultaneous retrieval of two parameters at every wavelength independently is an advantage over those earlier studies. The analytical methodology is based on inversion of the asymptotic formulas of transfer theory for optically thick stratus clouds. A horizontally infinite layer is considered, and slight horizontal heterogeneity is taken into account approximately. Formulas containing only the measured two-direction radiances and functions of the solar and view angles were derived earlier, and six azimuth harmonics of the reflection function are taken into account. A simple approximation of cloud-top heterogeneity is used: clouds projecting above the cloud-top plane increase the diffuse component of the incident flux, which is essential for the calculation of radiative characteristics that depend on illumination conditions. The escape and reflection functions describe this dependence for the reflected radiance, and the local albedo of a semi-infinite medium describes it for the irradiance; thus the functions that depend on the solar incidence angle are replaced by their modifications. First, the optical thickness of every pixel is obtained with a simple formula assuming conservative scattering for all available view directions; deviations between the values obtained may be taken as a measure of the cloud-top deviation from a plane, and a special parameter is derived that takes the shadowing effect into account. Then the single-scattering albedo and optical thickness (allowing for true absorption) are obtained for pairs of view directions with equal optical thickness. Finally, the values obtained are averaged and the relative error is evaluated over all viewing directions of every pixel. The procedure is repeated independently for all wavelengths and pixels.
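As a rough illustration of the perceptron-style retrieval described above, the sketch below trains a small multilayer perceptron to map multi-angle radiances plus the solar zenith angle to optical thickness, single-scattering albedo, and a phase-function parameter. The random training data stand in for the MODTRAN-3 simulations used by the authors; the network size, feature layout, and parameter ranges are placeholders, not values from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder training set: in the paper this role is played by MODTRAN-3
# multiple-scattering simulations over randomly varied atmospheres.
n_samples, n_angles = 5000, 8
tau = rng.uniform(1.0, 60.0, n_samples)      # optical thickness (assumed range)
ssa = rng.uniform(0.95, 1.0, n_samples)      # single-scattering albedo
g   = rng.uniform(0.7, 0.9, n_samples)       # phase-function asymmetry parameter
sza = rng.uniform(0.0, 70.0, n_samples)      # solar zenith angle, degrees

# Crude synthetic "multi-angle radiance" forward model (placeholder only).
mu0 = np.cos(np.radians(sza))[:, None]
view = np.linspace(0.3, 1.0, n_angles)[None, :]          # cosines of view zenith angles
radiance = ssa[:, None] * mu0 * tau[:, None] / (tau[:, None] + 2.0 / (1 - g[:, None])) * view
radiance += 0.01 * rng.standard_normal(radiance.shape)   # measurement noise

X = np.hstack([radiance, sza[:, None]])      # inputs: radiances + solar angle
Y = np.column_stack([tau, ssa, g])           # targets: tau, ssa, phase parameter

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X, Y)
print("retrieved (tau, ssa, g) for first sample:", net.predict(X[:1]))
```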
Optical parameters of TN display with dichroic dye
NASA Astrophysics Data System (ADS)
Olifierczuk, Marek; Zielinski, Jerzy; Perkowski, Pawel
2000-05-01
The present work contains studies of the optical parameters (contrast ratio, viewing angle, birefringence, and brightness) of a twisted nematic display with a black dichroic dye, designed for application in large-area information and advertising systems. A numerical optimization of the display with the dye has been carried out. The absorption characteristic of the dye has been obtained, and the birefringence Δn of the doped mixtures has been measured. The contrast ratio of the doped mixtures has been measured over a wide temperature range from -25°C to +70°C, and the angular characteristics of the contrast ratio have been obtained at +20°C. Detailed results describing the effect of the dye on the temperature dependence of birefringence and contrast ratio, as well as its effect on the viewing angle for the first and second transmission minimum, will be presented. Additionally, the dielectric characteristics of different mixtures will be shown.
A wide-angle camera module for disposable endoscopy
NASA Astrophysics Data System (ADS)
Shim, Dongha; Yeon, Jesun; Yi, Jason; Park, Jongwon; Park, Soo Nam; Lee, Nanhee
2016-08-01
A wide-angle miniaturized camera module for disposable endoscopy is demonstrated in this paper. A lens module with a 150° angle of view (AOV) is designed and manufactured. All-plastic injection-molded lenses and a commercial CMOS image sensor are employed to reduce the manufacturing cost. The image sensor and an LED illumination unit are assembled with the lens module. The camera module does not include a camera processor, to further reduce its size and cost. The size of the camera module is 5.5 × 5.5 × 22.3 mm³. The diagonal field of view (FOV) of the camera module is measured to be 110°. A prototype of a disposable endoscope is implemented to perform pre-clinical animal testing, in which the esophagus of an adult beagle dog is observed. These results demonstrate the feasibility of a cost-effective and high-performance camera module for disposable endoscopy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomassen, K I
The SSPX Thermistor is a glass-encapsulated bead thermistor made by Thermometrics, a BR 14 P A 103 J. The BR means ruggedized bead structure, 14 is the nominal bead diameter in mils, P refers to opposite end leads, A is the material system code letter, 103 refers to its 10 kΩ zero-power resistance at 25 C, and the tolerance letter J indicates ±5% at 25 C. It is football shaped, with height b, and is viewed through a slot of height h = 0.01 inches. The slot is perpendicular to the long axis of the bead and is a distance s ≈ 0.775 cm in front of the thermistor. So plasma is viewed over a large angle along the slot, but over a small angle α perpendicular to the slot. The angle α is given by 2s tan α = b + h.
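A small numerical illustration of the slot geometry above: with slot height h, slot-to-bead distance s, and bead height b, the half-angle follows from 2s·tan α = b + h. The bead height used below is only a guess based on the 14-mil nominal bead diameter; it is not stated explicitly in the report.

```python
import math

h = 0.01 * 2.54      # slot height, 0.01 inch converted to cm
s = 0.775            # slot-to-thermistor distance, cm (from the report)
b = 14 * 2.54e-3     # assumed bead height ~ 14 mil nominal diameter, cm

alpha = math.degrees(math.atan((b + h) / (2 * s)))
print(f"viewing half-angle alpha ~ {alpha:.2f} degrees")   # roughly 2 degrees
```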
Maneuver Algorithm for Bearings-Only Target Tracking with Acceleration and Field of View Constraints
NASA Astrophysics Data System (ADS)
Roh, Heekun; Shim, Sang-Wook; Tahk, Min-Jea
2018-05-01
This paper proposes a maneuver algorithm for an agent performing target tracking with bearing-angle information only. The goal of the agent is to estimate the target position and velocity based only on the bearing-angle data. The methods of bearings-only target state estimation are outlined, and the nature of the bearings-only target tracking problem is then addressed. Based on the insight from the properties outlined above, a maneuver algorithm for the agent is suggested. The proposed algorithm is composed of a nonlinear, hysteresis guidance law and estimation-accuracy assessment criteria based on the theory of the Cramer-Rao bound. The proposed guidance law generates a lateral acceleration command based on the current field-of-view angle. The accuracy criteria supply the expected estimation variance, which acts as a terminal criterion for the proposed algorithm. The algorithm is verified with a two-dimensional simulation.
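To make the estimation-accuracy criterion concrete, the sketch below accumulates the Fisher information for a stationary target observed with bearing-only measurements from a sequence of agent positions and reports the resulting Cramer-Rao position bound. It is a simplified stand-in for the paper's criterion: the target is assumed fixed, the bearing-noise level and trajectory are made up, and the hysteresis guidance law is not reproduced.

```python
import numpy as np

def bearing_jacobian(agent, target):
    """Gradient of the bearing atan2(dy, dx) with respect to target position."""
    dx, dy = target - agent
    r2 = dx**2 + dy**2
    return np.array([-dy / r2, dx / r2])

sigma = np.radians(1.0)                 # assumed 1-degree bearing noise (1-sigma)
target = np.array([1000.0, 500.0])      # true target position, m (made up)

# Made-up weaving agent trajectory; a real run would come from the guidance law.
ts = np.arange(0.0, 60.0, 1.0)
agents = np.column_stack([20.0 * ts, 100.0 * np.sin(0.1 * ts)])

J = np.zeros((2, 2))                    # Fisher information for target (x, y)
for a in agents:
    H = bearing_jacobian(a, target)[None, :]
    J += H.T @ H / sigma**2

crlb = np.linalg.inv(J)                 # Cramer-Rao lower bound on position covariance
print("1-sigma position bound (m):", np.sqrt(np.diag(crlb)))
```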
Photometric models of disk-integrated observations of the OSIRIS-REx target Asteroid (101955) Bennu
NASA Astrophysics Data System (ADS)
Takir, Driss; Clark, Beth Ellen; Drouet d'Aubigny, Christian; Hergenrother, Carl W.; Li, Jian-Yang; Lauretta, Dante S.; Binzel, Richard P.
2015-05-01
We used ground-based photometric phase curve data of the OSIRIS-REx target Asteroid (101955) Bennu and low phase angle data from Asteroid (253) Mathilde as a proxy to fit Bennu data with Minnaert, Lommel-Seeliger, ROLO (RObotic Lunar Orbiter), Hapke, and McEwen photometric models, which capture the global light scattering properties of the surface and subsequently allow us to calculate the geometric albedo, phase integral, spherical Bond albedo, and the average surface normal albedo for Bennu. We find that Bennu has low reflectance and geometric albedo values, such that multiple scattering is expected to be insignificant. Our photometric models relate the reflectance from Bennu's surface to viewing geometry as functions of the incidence, emission, and phase angles. Radiance Factor functions (RADFs) are used to model the disk-resolved brightness of Bennu. The Minnaert, Lommel-Seeliger, ROLO, and Hapke photometric models work equally well in fitting the best ground-based photometric phase curve data of Bennu. The McEwen model works reasonably well at phase angles from 20° to 70°. Our calculated geometric albedo values of 0.047 (+0.012/-0.014), 0.047 (+0.005/-0.014), and 0.048 (+0.012/-0.022) for the Minnaert, the Lommel-Seeliger, and the ROLO models respectively are consistent with the geometric albedo of 0.045 ± 0.015 computed by Emery et al. (Emery, J.P. et al. [2014]. Icarus 234, 17-35) and Hergenrother et al. (Hergenrother, C.W. et al. [2014].
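For readers unfamiliar with the disk functions named above, the sketch below evaluates the Lommel-Seeliger and Minnaert radiance factors for a given incidence, emission, and phase geometry. The albedo values echo the paper's geometric-albedo range, but the phase-function term f(α) and the Minnaert exponent k are placeholder choices, not the fitted parameters of the study.

```python
import numpy as np

def lommel_seeliger(inc, emi, alpha, A_LS=0.047, beta=0.02):
    """Lommel-Seeliger RADF: A * mu0/(mu0+mu) * f(alpha), with an assumed
    exponential phase term f(alpha) = exp(-beta * alpha_deg)."""
    mu0, mu = np.cos(np.radians(inc)), np.cos(np.radians(emi))
    return A_LS * mu0 / (mu0 + mu) * np.exp(-beta * alpha)

def minnaert(inc, emi, A_M=0.047, k=0.6):
    """Minnaert RADF: A * mu0**k * mu**(k-1); k is an assumed limb-darkening exponent."""
    mu0, mu = np.cos(np.radians(inc)), np.cos(np.radians(emi))
    return A_M * mu0**k * mu**(k - 1)

# Example viewing geometry: 30 deg incidence, 20 deg emission, 40 deg phase angle
print("Lommel-Seeliger RADF:", lommel_seeliger(30.0, 20.0, 40.0))
print("Minnaert RADF:      ", minnaert(30.0, 20.0))
```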
NASA Technical Reports Server (NTRS)
2002-01-01
These views of Hurricane Isidore were acquired by the Multi-angle Imaging SpectroRadiometer (MISR) on September 20, 2002. After bringing large-scale flooding to western Cuba, Isidore was upgraded (on September 21) from a tropical storm to a category 3 hurricane. Sweeping westward to Mexico's Yucatan Peninsula, the hurricane caused major destruction and left hundreds of thousands of people homeless. Although weakened after passing over the Yucatan landmass, Isidore regained strength as it moved northward over the Gulf of Mexico.
At left is a colorful visualization of cloud extent that superimposes MISR's radiometric camera-by-camera cloud mask (RCCM) over natural-color radiance imagery, both derived from data acquired with the instrument's vertical-viewing (nadir) camera. Using brightness and statistical metrics, the RCCM is one of several techniques MISR uses to determine whether an area is clear or cloudy. In this rendition, the RCCM has been color-coded, and purple = cloudy with high confidence, blue = cloudy with low confidence, green = clear with low confidence, and red = clear with high confidence. In addition to providing information on meteorological events, MISR's data products are designed to help improve our understanding of the influences of clouds on climate. Cloud heights and albedos are among the variables that govern these influences. (Albedo is the amount of sunlight reflected back to space divided by the amount of incident sunlight.) The center panel is the cloud-top height field retrieved using automated stereoscopic processing of data from multiple MISR cameras. Areas where heights could not be retrieved are shown in dark gray. In some areas, such as the southern portion of the image, the stereo retrieval was able to detect thin, high clouds that were not picked up by the RCCM's nadir view. Retrieved local albedo values for Isidore are shown at right. Generation of the albedo product is dependent upon observed cloud radiances as a function of viewing angle as well as the height field. Note that over the short distances (2.2 kilometers) that the local albedo product is generated, values can be greater than 1.0 due to contributions from cloud sides. Areas where albedo could not be retrieved are shown in dark gray. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 14669. The panels cover an area of about 380 kilometers x 704 kilometers, and utilize data from blocks 70 to 79 within World Reference System-2 path 17. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Objective for monitoring the corona discharge
NASA Astrophysics Data System (ADS)
Obrezkov, Andrey; Rodionov, Andrey Yu.; Pisarev, Viktor N.; Chivanov, Alexsey N.; Baranov, Yuri P.; Korotaev, Valery V.
2016-04-01
Remote optoelectronic probing is one of the most relevant aspects of overhead power line maintenance. By installing such systems on a helicopter, for example, it becomes possible to monitor the status of overhead transmission lines and to search for damaged sections. Thermal and UV cameras are used for more effective diagnostics. The UV systems are fitted with filters that attenuate the visible spectrum, which is an undesired signal, and they have a wide view angle for better coverage and proper diagnostics. For even greater effectiveness, it is better to use several spectral channels, such as UV and IR; such spectral selection provides good noise reduction. Experimental results on the spectral parameters of a wide-view-angle multispectral objective for such systems are provided in this report, along with data on the point spread function, UV and IR scattering indices, and technical requirements for the detectors.
NASA Astrophysics Data System (ADS)
Chen, D. M.; Clapp, R. G.; Biondi, B.
2006-12-01
Ricksep is a freely available interactive viewer for multi-dimensional data sets. The viewer is very useful for simultaneous display of multiple data sets from different viewing angles, animation of movement along a path through the data space, and selection of local regions for data processing and information extraction. Several new viewing features are added to enhance the program's functionality in the following three aspects. First, two new data synthesis algorithms are created to adaptively combine information from a data set with mostly high-frequency content, such as seismic data, and another data set with mainly low-frequency content, such as velocity data. Using the algorithms, these two data sets can be synthesized into a single data set which resembles the high-frequency data set on a local scale and at the same time resembles the low-frequency data set on a larger scale. As a result, the originally separated high- and low-frequency details can now be more accurately and conveniently studied together. Second, a projection algorithm is developed to display paths through the data space. Paths are geophysically important because they represent wells into the ground. Two difficulties often associated with tracking paths are that they normally cannot be seen clearly inside multi-dimensional spaces and that depth information is lost along the direction of projection when ordinary projection techniques are used. The new algorithm projects samples along the path in three orthogonal directions and effectively restores important depth information by using variable projection parameters which are functions of the distance away from the path. Multiple paths in the data space can be generated using different character symbols as positional markers, and users can easily create, modify, and view paths in real time. Third, a viewing history list is implemented which enables Ricksep's users to create, edit, and save a recipe for the sequence of viewing states. The recipe can then be loaded into an active Ricksep session, after which the user can navigate to any state in the sequence and modify the sequence from that state. Typical uses of this feature are undoing and redoing viewing commands and animating a sequence of viewing states. A theoretical discussion is carried out and several examples using real seismic data are provided to show how these new Ricksep features provide more convenient, accurate ways to manipulate multi-dimensional data sets.
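A rough sketch of the first idea above (combining a high-frequency volume such as seismic data with a low-frequency volume such as velocity): low-pass the low-frequency set, high-pass the other, and sum, so the result follows the detailed set locally and the smooth set at larger scales. The Gaussian-filter split below is only one plausible realization; the actual Ricksep algorithms are not documented here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthesize(high_freq_data, low_freq_data, sigma=8.0):
    """Blend two co-registered volumes: keep fine detail from the first and
    the large-scale trend from the second (simple scale-split sketch)."""
    detail = high_freq_data - gaussian_filter(high_freq_data, sigma)  # high-pass
    trend = gaussian_filter(low_freq_data, sigma)                     # low-pass
    return trend + detail

# Toy example with random stand-ins for seismic and velocity volumes
rng = np.random.default_rng(3)
seismic = rng.standard_normal((64, 64, 64))
velocity = np.linspace(1500.0, 4500.0, 64)[None, None, :] * np.ones((64, 64, 64))
combined = synthesize(seismic, velocity)
print(combined.shape, combined.mean())
```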
Using Lunar Module Shadows To Scale the Effects of Rocket Exhaust Plumes
NASA Technical Reports Server (NTRS)
2008-01-01
Excavating granular materials beneath a vertical jet of gas involves several physical mechanisms. These occur, for example, beneath the exhaust plume of a rocket landing on the soil of the Moon or Mars. We performed a series of experiments and simulations (Figure 1) to provide a detailed view of the complex gas-soil interactions. Measurements taken from the Apollo lunar landing videos (Figure 2) and from photographs of the resulting terrain helped demonstrate how the interactions extrapolate into the lunar environment. It is important to understand these processes at a fundamental level to support the ongoing design of higher fidelity numerical simulations and larger-scale experiments. These are needed to enable future lunar exploration wherein multiple hardware assets will be placed on the Moon within short distances of one another. The high-velocity spray of soil from the landing spacecraft must be accurately predicted and controlled or it could erode the surfaces of nearby hardware. This analysis indicated that the lunar dust is ejected at an angle of less than 3 degrees above the surface, the results of which can be mitigated by a modest berm of lunar soil. These results assume that future lunar landers will use a single engine. The analysis would need to be adjusted for a multiengine lander. Figure 3 is a detailed schematic of the Lunar Module camera calibration math model. In this chart, formulas relating the known quantities, such as sun angle and Lunar Module dimensions, to the unknown quantities are depicted. The camera angle PSI is determined by measurement of the imaged aspect ratio of a crater, where the crater is assumed to be circular. The final solution is the determination of the camera calibration factor, alpha. Figure 4 is a detailed schematic of the dust angle math model, which again relates known to unknown parameters. The known parameters now include the camera calibration factor and Lunar Module dimensions. The final computation is the ejected dust angle, as a function of Lunar Module altitude.
Height and Motion of the Chikurachki Eruption Plume
NASA Technical Reports Server (NTRS)
2003-01-01
The height and motion of the ash and gas plume from the April 22, 2003, eruption of the Chikurachki volcano is portrayed in these views from the Multi-angle Imaging SpectroRadiometer (MISR). Situated within the northern portion of the volcanically active Kuril Island group, the Chikurachki volcano is an active stratovolcano on Russia's Paramushir Island (just south of the Kamchatka Peninsula). In the upper panel of the still image pair, this scene is displayed as a natural-color view from MISR's vertical-viewing (nadir) camera. The white and brownish-grey plume streaks several hundred kilometers from the eastern edge of Paramushir Island toward the southeast. The darker areas of the plume typically indicate volcanic ash, while the white portions of the plume indicate entrained water droplets and ice. According to the Kamchatkan Volcanic Eruptions Response Team (KVERT), the temperature of the plume near the volcano on April 22 was -12° C. The lower panel shows heights derived from automated stereoscopic processing of MISR's multi-angle imagery, in which the plume is determined to reach heights of about 2.5 kilometers above sea level. Heights for clouds above and below the eruption plume were also retrieved, including the high-altitude cirrus clouds in the lower left (orange pixels). The distinctive patterns of these features provide sufficient spatial contrast for MISR's stereo height retrieval to perform automated feature matching between the images acquired at different view angles. Places where clouds or other factors precluded a height retrieval are shown in dark gray. The multi-angle 'fly-over' animation (below) allows the motion of the plume and of the surrounding clouds to be directly observed. The frames of the animation consist of data acquired by the 70-degree, 60-degree, 46-degree and 26-degree forward-viewing cameras in sequence, followed by the images from the nadir camera and each of the four backward-viewing cameras, ending with the view from the 70-degree backward camera. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17776. The panels cover an area of approximately 296 kilometers x 216 kilometers (still images) and 185 kilometers x 154 kilometers (animation), and utilize data from blocks 50 to 51 within World Reference System-2 path 100. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Continuous zoom antenna for mobile visible light communication.
Zhang, Xuebin; Tang, Yi; Cui, Lu; Bai, Tingzhu
2015-11-10
In this paper, we design a continuous zoom antenna for mobile visible light communication (VLC). In the design, a right-angle reflecting prism was adopted to fold the space optical path, thus decreasing the antenna thickness. The surface of each lens in the antenna is spherical, and the system cost is relatively low. Simulation results indicated that the designed system achieved the following performance: zoom ratio of 2.44, field of view (FOV) range of 18°-48°, system gain of 16.8, and system size of 18 mm × 6 mm. Finally, we established an indoor VLC system model in a room the size of 5 m × 5 m × 3 m and compared the detection results of the zoom antenna and fixed-focus antenna obtained in a multisource communication environment, a mobile VLC environment, and a multiple-input multiple-output communication environment. The simulation results indicated that the continuous zoom antenna could realize large FOV and high gain. Moreover, the system showed improved stability, mobility, and environmental applicability.
Unveiling the nature of the γ-ray emitting active galactic nucleus PKS 0521-36
D'Ammando, F.; Orienti, M.; Tavecchio, F.; ...
2015-05-19
PKS 0521-36 is an active galactic nucleus (AGN) with uncertain classification. Here, we investigate the properties of this source from radio to γ-rays. The broad emission lines in the optical and ultraviolet bands and the steep radio spectrum indicate a possible classification as an intermediate object between broad-line radio galaxies (BLRG) and steep spectrum radio quasars (SSRQ). On pc-scales PKS 0521-36 shows a knotty structure similar to misaligned AGN. The core dominance and the γ-ray properties are similar to those estimated for other SSRQ and BLRG detected in γ-rays, suggesting an intermediate viewing angle with respect to the observer. In this context the flaring activity detected from this source by the Fermi Large Area Telescope between 2010 June and 2012 February is very intriguing. We discuss the γ-ray emission of this source in the framework of the structured jet scenario, comparing the spectral energy distribution (SED) of the flaring state in 2010 June with that of a low state. We present three alternative models corresponding to three different choices of the viewing angle: θv = 6°, 15°, and 20°. We obtain a good fit for the first two cases, but the SED obtained with θv = 15°, if observed at a small angle, does not resemble that of a typical blazar, since the synchrotron emission should dominate the inverse Compton component by a large factor (~100). This suggests that a viewing angle between 6° and 15° is preferred, with the rapid variability observed during γ-ray flares favouring a smaller angle. However, we cannot rule out that PKS 0521-36 is the misaligned counterpart of a synchrotron-dominated blazar.
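The sensitivity to viewing angle discussed above is driven largely by the relativistic Doppler factor δ = 1/[Γ(1 - β cos θv)]. The short calculation below evaluates δ for the three viewing angles considered in the paper, using an assumed bulk Lorentz factor Γ = 10 purely for illustration (the paper's jet parameters are not reproduced here).

```python
import math

def doppler_factor(gamma, theta_deg):
    """Relativistic Doppler factor for a jet with bulk Lorentz factor gamma
    viewed at angle theta from the jet axis."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

gamma = 10.0                      # assumed bulk Lorentz factor (illustrative only)
for theta in (6.0, 15.0, 20.0):   # viewing angles explored in the paper
    print(f"theta = {theta:4.1f} deg  ->  delta = {doppler_factor(gamma, theta):.2f}")
```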
NASA Astrophysics Data System (ADS)
Chen, J. M.; He, L.; Chou, S.; Ju, W.; Zhang, Y.; Joiner, J.; Liu, J.; Mo, G.
2017-12-01
Sun-induced chlorophyll fluorescence (SIF) measured from plant canopies originates mostly from sunlit leaves. Observations of SIF by satellite sensors, such as GOME-2 and GOSAT, are often made over large view zenith angle ranges, causing large changes in the viewed sunlit leaf fraction across the scanning swath. Although observations made by OCO-2 are near nadir, the observed sunlit leaf fraction could still vary greatly due to changes in the solar zenith angle with latitude and time of overpass. To demonstrate the importance of considering the satellite-target-view geometry in using SIF for assessing vegetation productivity, we conducted multi-angle measurements of SIF using a hyperspectral sensor mounted on an automated rotating system over a rice field near Nanjing, China. A method is developed to separate SIF measurements at each angle into sunlit and shaded leaf components, and an angularly normalized canopy-level SIF is obtained as the weighted sum of sunlit and shaded SIF. This normalized SIF is shown to be a much better proxy of GPP of the rice field measured by an eddy covariance system than the unnormalized SIF observations. The same normalization scheme is also applied to the far-red GOME-2 SIF observations on sunny days, and we found that the normalized SIF is better correlated with model-simulated GPP than the original SIF observations. The coefficient of determination (R2) is improved by 0.07±0.04 on global average using the normalization scheme. The most significant improvement in R2 by 0.09±0.04 is found in deciduous broadleaf forests, where the observed sunlit leaf fraction is highly sensitive to solar zenith angle.
Jo, Jaehyuck; Moon, Byung Gil; Lee, Joo Yong
2017-12-01
To report the outcome of scleral buckling using a non-contact wide-angle viewing system with a 25-gauge chandelier endoilluminator. Retrospective analyses of medical records were performed for 17 eyes of 16 patients with primary rhegmatogenous retinal detachment (RRD) without proliferative vitreoretinopathy who had undergone conventional scleral buckling with cryoretinopexy using the combination of a non-contact wide-angle viewing system and chandelier endoillumination. The patients were eight males and five females with a mean age of 26.8 ± 10.2 (range, 11 to 47) years. The mean follow-up period was 7.3 ± 3.1 months. Baseline best-corrected visual acuity was 0.23 ± 0.28 logarithm of the minimum angle of resolution units. Best-corrected visual acuity at the final visit showed improvement (0.20 ± 0.25 logarithm of the minimum angle of resolution units), but the improvement was not statistically significant (p = 0.722). As a surgery-related complication, there was vitreous loss at the end of surgery in one eye. As a postoperative complication, increased intraocular pressure (four cases) and herpes simplex epithelial keratitis (one case) were controlled postoperatively with eye drops. One case of persistent RRD after primary surgery needed additional vitrectomy, and the retina was postoperatively attached. Scleral buckling with chandelier illumination as a surgical technique for RRD has the advantages of relieving the surgeon's neck pain from prolonged use of the indirect ophthalmoscope and sharing the surgical procedure with another surgical team member. In addition, fine retinal breaks that are hard to identify using an indirect ophthalmoscope can be easily found under the microscope by direct endoillumination. © 2017 The Korean Ophthalmological Society
Melki, Lea; Costet, Alexandre; Konofagou, Elisa E
2017-10-01
Electromechanical wave imaging (EWI) is an ultrasound-based technique that can non-invasively map the transmural electromechanical activation in all four cardiac chambers in vivo. The objective of this study was to determine the reproducibility and angle independence of EWI for the assessment of electromechanical activation during normal sinus rhythm (NSR) in healthy humans. Acquisitions were performed transthoracically at 2000 frames/s on seven healthy human hearts in parasternal long-axis, apical four- and two-chamber views. EWI data was collected twice successively in each view in all subjects, while four successive acquisitions were obtained in one case. Activation maps were generated and compared (i) within the same acquisition across consecutive cardiac cycles; (ii) within same view across successive acquisitions; and (iii) within equivalent left-ventricular regions across different views. EWI was capable of characterizing electromechanical activation during NSR and of reliably obtaining similar patterns of activation. For consecutive heart cycles, the average 2-D correlation coefficient between the two isochrones across the seven subjects was 0.9893, with a mean average activation time fluctuation in LV wall segments across acquisitions of 6.19%. A mean activation time variability of 12% was obtained across different views with a measurement bias of only 3.2 ms. These findings indicate that EWI can map the electromechanical activation during NSR in human hearts in transthoracic echocardiography in vivo and results in reproducible and angle-independent activation maps. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Bai, Ling; Mai, Van Cuong; Lim, Yun; Hou, Shuai; Möhwald, Helmuth; Duan, Hongwei
2018-03-01
Structural colors originating from interaction of light with intricately arranged micro-/nanostructures have stimulated considerable interest because of their inherent photostability and energy efficiency. In particular, noniridescent structural color with wide viewing angle has been receiving increasing attention recently. However, no method is yet available for rapid and large-scale fabrication of full-spectrum structural color patterns with wide viewing angles. Here, infiltration-driven nonequilibrium assembly of colloidal particles on liquid-permeable and particle-excluding substrates is demonstrated to direct the particles to form amorphous colloidal arrays (ACAs) within milliseconds. The infiltration-assisted (IFAST) colloidal assembly opens new possibilities for rapid manufacture of noniridescent structural colors of ACAs and straightforward structural color mixing. Full-spectrum noniridescent structural colors are successfully produced by mixing primary structural colors of red, blue, and yellow using a commercial office inkjet printer. Rapid fabrication of large-scale structural color patterns with sophisticated color combination/layout by IFAST printing is realized. The IFAST technology is versatile for developing structural color patterns with wide viewing angles, as colloidal particles, inks, and substrates are flexibly designable for diverse applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A cylindrical specimen holder for electron cryo-tomography.
Palmer, Colin M; Löwe, Jan
2014-02-01
The use of slab-like flat specimens for electron cryo-tomography restricts the range of viewing angles that can be used. This leads to the "missing wedge" problem, which causes artefacts and anisotropic resolution in reconstructed tomograms. Cylindrical specimens provide a way to eliminate the problem, since they allow imaging from a full range of viewing angles around the tilt axis. Such specimens have been used before for tomography of radiation-insensitive samples at room temperature, but never for frozen-hydrated specimens. Here, we demonstrate the use of thin-walled carbon tubes as specimen holders, allowing the preparation of cylindrical frozen-hydrated samples of ribosomes, liposomes and whole bacterial cells. Images acquired from these cylinders have equal quality at all viewing angles, and the accessible tilt range is restricted only by the physical limits of the microscope. Tomographic reconstructions of these specimens demonstrate that the effects of the missing wedge are substantially reduced, and could be completely eliminated if a full tilt range was used. The overall quality of these tomograms is still lower than that obtained by existing methods, but improvements are likely in future. © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Bolton, Matthew L.; Bass, Ellen J.; Comstock, James R., Jr.
2006-01-01
Synthetic Vision Systems (SVS) depict computer generated views of terrain surrounding an aircraft. In the assessment of textures and field of view (FOV) for SVS, no studies have directly measured the 3 levels of spatial awareness: identification of terrain, its relative spatial location, and its relative temporal location. This work introduced spatial awareness measures and used them to evaluate texture and FOV in SVS displays. Eighteen pilots made 4 judgments (relative angle, distance, height, and abeam time) regarding the location of terrain points displayed in 112 5-second, non-interactive simulations of a SVS heads down display. Texture produced significant main effects and trends for the magnitude of error in the relative distance, angle, and abeam time judgments. FOV was significant for the directional magnitude of error in the relative distance, angle, and height judgments. Pilots also provided subjective terrain awareness ratings that were compared with the judgment based measures. The study found that elevation fishnet, photo fishnet, and photo elevation fishnet textures best supported spatial awareness for both the judgments and the subjective awareness measures.
Nicaraguan Volcanoes, 26 February 2000
2000-04-19
The true-color image at left is a downward-looking (nadir) view of the area around the San Cristobal volcano, which erupted the previous day. This image is oriented with east at the top and north at the left. The right image is a stereo anaglyph of the same area, created from red band multi-angle data taken by the 45.6-degree aftward and 70.5-degree aftward cameras on the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. View this image through red/blue 3D glasses, with the red filter over the left eye. A plume from San Cristobal (approximately at image center) is much easier to see in the anaglyph, due to 3 effects: the long viewing path through the atmosphere at the oblique angles, the reduced reflection from the underlying water, and the 3D stereoscopic height separation. In this image, the plume floats between the surface and the overlying cumulus clouds. A second plume is also visible in the upper right (southeast of San Cristobal). This very thin plume may originate from the Masaya volcano, which is continually degassing at a low rate. The spatial resolution is 275 meters (300 yards). http://photojournal.jpl.nasa.gov/catalog/PIA02600
Li, Feihu; Tang, Bingtao; Wu, Suli; Zhang, Shufen
2017-01-01
The synthesis and assembly of monodispersed colloidal spheres are currently the subject of extensive investigation to fabricate artificial structural color materials. However, artificial structural colors from general colloidal crystals still suffer from the low color visibility and strong viewing angle dependence which seriously hinder their practical application in paints, colorimetric sensors, and color displays. Herein, monodispersed polysulfide (PSF) spheres with intrinsic high refractive index (as high as 1.858) and light-absorbing characteristics are designed, synthesized through a facile polycondensation and crosslinking process between sodium disulfide and 1,2,3-trichloropropane. Owing to their high monodispersity, sufficient surface charge, and good dispersion stability, the PSF spheres can be assembled into large-scale and high-quality 3D photonic crystals. More importantly, high structural color visibility and broad viewing angle are easily achieved because the unique features of PSF can remarkably enhance the relative reflectivity and eliminate the disturbance of scattering and background light. The results of this study provide a simple and efficient strategy to create structural colors with high color visibility, which is very important for their practical application. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Landing of the Shuttle Discovery and end of STS 51-I mission
NASA Technical Reports Server (NTRS)
1985-01-01
Landing of the Shuttle Discovery and end of STS 51-I mission. Views include photo of Discovery's main landing gear just touching down, a cloud of dirt appearing behind it (225); Side view of the main landing gear touching down, the nose gear still above the runway (226); Aft-angle view of the Space Shuttle Discovery as it makes a successful landing (227).
A new illusion of projected three-dimensional space
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Grunwald, Arthur
1987-01-01
When perspective projections of orbital trajectories plotted in local-vertical local-horizontal coordinates are viewed with certain viewing angles, their appearance becomes perceptually unstable. They often lose their trochoidal appearance and reorganize as helices. This reorganization may be due to the viewer's familiarity with coiled springs.
Atmospheric Science Data Center
2013-04-19
Hurricane Alex Disrupts Gulf Cleanup. This view of Hurricane Alex in the western Gulf of Mexico was acquired by the Multi-angle ... Time on June 30, 2010. Around this time NOAA's National Hurricane Center reported Alex to be a strengthening Category 1 hurricane with ...
Optical Properties of Ice Particles in Young Contrails
NASA Technical Reports Server (NTRS)
Hong, Gang; Feng, Qian; Yang, Ping; Kattawar, George; Minnis, Patrick; Hu, Yong X.
2008-01-01
The single-scattering properties of four types of ice crystals (pure ice crystals, ice crystals with an internal mixture of ice and black carbon, ice crystals coated with black carbon, and soot coated with ice) in young contrails are investigated at wavelengths 0.65 and 2.13 micrometers using Mie codes from coated spheres. The four types of ice crystals have distinct differences in their single-scattering properties because of the embedded black carbon. The bulk scattering properties of young contrails consisting of the four types of ice crystals are further investigated by averaging their single-scattering properties over a typical ice particle size distribution found in young contrails. The effect of the radiative properties of the four types of ice particles on the Stokes parameters I, Q, U, and V is also investigated for different viewing zenith angles and relative azimuth angles with a solar zenith angle of 30 degrees using a vector radiative transfer model based on the adding-doubling technique. The Stokes parameters at a wavelength of 0.65 micrometers show pronounced differences for the four types of ice crystals. Those at a wavelength of 2.13 micrometers show similar variations with the viewing zenith angle and relative azimuth angle, but their values are noticeably different.
Adjustable Bracket For Entry Of Welding Wire
NASA Technical Reports Server (NTRS)
Gilbert, Jeffrey L.; Gutow, David A.
1993-01-01
Wire-entry bracket on welding torch in robotic welding system provides for adjustment of angle of entry of welding wire over range of plus or minus 30 degrees from nominal entry angle. Wire positioned so it does not hide weld joint in view of through-the-torch computer-vision system part of robot-controlling and -monitoring system. Swiveling bracket also used on nonvision torch on which wire-feed-through tube interferes with workpiece. Angle simply changed to one giving sufficient clearance.
Strong Pitch-Angle Diffusion of Ring Current Ions in Geomagnetic Storm-Associated Conditions
NASA Technical Reports Server (NTRS)
Khazanov, G. V.; Gamayunov, K. V.; Gallagher, D. L.; Spann, J. F.
2005-01-01
Do electromagnetic ion cyclotron (EMIC) waves cause strong pitch-angle diffusion of RC ions? This question is the primary motivation of this paper and has been affirmatively answered from the theoretical point of view. The materials that are presented in the Results section show clear evidence that strong pitch-angle diffusion takes place in the inner magnetosphere indicating an important role for the wave-particle interaction mechanism in the formation of RC ions and EMIC waves.
Development of scanning holographic display using MEMS SLM
NASA Astrophysics Data System (ADS)
Takaki, Yasuhiro
2016-10-01
Holography is an ideal three-dimensional (3D) display technique, because it produces 3D images that naturally satisfy human 3D perception including physiological and psychological factors. However, its electronic implementation is quite challenging because ultra-high resolution is required for display devices to provide sufficient screen size and viewing zone. We have developed holographic display techniques to enlarge the screen size and the viewing zone by use of microelectromechanical systems spatial light modulators (MEMS-SLMs). Because MEMS-SLMs can generate hologram patterns at a high frame rate, the time-multiplexing technique is utilized to virtually increase the resolution. Three kinds of scanning systems have been combined with MEMS-SLMs; the screen scanning system, the viewing-zone scanning system, and the 360-degree scanning system. The screen scanning system reduces the hologram size to enlarge the viewing zone and the reduced hologram patterns are scanned on the screen to increase the screen size: the color display system with a screen size of 6.2 in. and a viewing zone angle of 11° was demonstrated. The viewing-zone scanning system increases the screen size and the reduced viewing zone is scanned to enlarge the viewing zone: a screen size of 2.0 in. and a viewing zone angle of 40° were achieved. The two-channel system increased the screen size to 7.4 in. The 360-degree scanning increases the screen size and the reduced viewing zone is scanned circularly: the display system having a flat screen with a diameter of 100 mm was demonstrated, which generates 3D images viewed from any direction around the flat screen.
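The trade-off between SLM resolution and viewing-zone angle mentioned above follows from the grating equation: a pixel pitch p limits the diffraction half-angle to arcsin(λ/2p), giving a viewing-zone angle of roughly 2·arcsin(λ/2p). The snippet below evaluates this for a green wavelength and a few assumed pixel pitches; the pitches are illustrative, not the specifications of the MEMS-SLMs used in the paper.

```python
import math

def viewing_zone_angle_deg(wavelength_m, pitch_m):
    """Diffraction-limited viewing-zone angle of a hologram on an SLM
    with the given pixel pitch: 2 * arcsin(lambda / (2 * pitch))."""
    return 2.0 * math.degrees(math.asin(wavelength_m / (2.0 * pitch_m)))

wavelength = 532e-9                        # green laser wavelength, m
for pitch in (8e-6, 4e-6, 2e-6):           # assumed pixel pitches, m
    print(f"pitch = {pitch*1e6:.0f} um  ->  viewing zone ~ "
          f"{viewing_zone_angle_deg(wavelength, pitch):.1f} deg")
```

Even a 2 µm pitch yields only about a 15° zone, which is why the paper resorts to time-multiplexed scanning to reach wider angles and larger screens.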
NASA Astrophysics Data System (ADS)
Wang, Ling; Hu, Xiuqing; Chen, Lin
2017-09-01
Calibration is a critical step to ensure data quality and to meet the requirements of quantitative remote sensing in a broad range of scientific applications. One of the least expensive and increasingly popular methods of on-orbit calibration is the use of pseudo-invariant calibration sites (PICS). A spatially homogeneous and temporally stable area of 34 km² around the center of Kunlun Mountain (KLM) over the Tibetan Plateau (TP) was identified by our previous study; the spatial and temporal coefficient of variation (CV) of this region was better than 4% for the reflective solar bands. In this study, the BRDF impacts of the KLM glacier on MODIS-observed TOA reflectance in band 1 (659 nm) are examined. The BRDF impact with respect to the view zenith angle is studied using observations at a fixed solar zenith angle, and the effect with respect to the solar zenith angle is studied using observations collected at the same view angle. Two widely used BRDF models are then applied to our test data to simulate the variations of TOA reflectance due to changes in viewing geometry. The first is the Ross-Li model, which has been used to produce the MODIS global BRDF/albedo data product. The second is a snow surface BRDF model, which has been used to characterize the bidirectional reflectance of Antarctic snow. Finally, the accuracy and effectiveness of these two BRDF models are tested by comparing the model-simulated TOA reflectance with the observed one. The results show that variations of the reflectance at a fixed solar zenith angle are close to a Lambertian pattern, while those at a fixed sensor zenith angle are strongly anisotropic. A decrease in solar zenith angle from 50° to 20° causes an increase in reflectance of approximately 50%. The snow surface BRDF model performs much better than the Ross-Li BRDF model in reproducing the bidirectional reflectance of the KLM glacier: the RMSE of the snow surface BRDF model is 3.60%, only half of the RMSE obtained with the Ross-Li model.
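As a pointer to how the kernel-driven Ross-Li approach works, the sketch below implements only the RossThick volumetric kernel and fits a reduced two-term model R ≈ f_iso + f_vol·K_vol to a set of directional reflectances by linear least squares. The geometric (LiSparse-Reciprocal) kernel used in the full MODIS model is omitted for brevity, and the sample reflectances are synthetic, not the MODIS observations of the study.

```python
import numpy as np

def ross_thick_kernel(sza_deg, vza_deg, raa_deg):
    """RossThick volumetric scattering kernel of the Ross-Li BRDF model."""
    ts, tv, phi = np.radians([sza_deg, vza_deg, raa_deg])
    cos_xi = np.cos(ts) * np.cos(tv) + np.sin(ts) * np.sin(tv) * np.cos(phi)
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))      # scattering phase angle
    return ((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi)) / (np.cos(ts) + np.cos(tv)) - np.pi / 4

# Synthetic directional reflectances over a range of view zenith angles
sza, raa = 40.0, 0.0
vza = np.array([-60.0, -40.0, -20.0, 0.0, 20.0, 40.0, 60.0])
k_vol = np.array([ross_thick_kernel(sza, abs(v), raa if v >= 0 else 180.0) for v in vza])
refl = 0.70 + 0.10 * k_vol + 0.005 * np.random.default_rng(1).standard_normal(vza.size)

# Least-squares fit of the reduced kernel model R = f_iso + f_vol * K_vol
A = np.column_stack([np.ones_like(k_vol), k_vol])
(f_iso, f_vol), *_ = np.linalg.lstsq(A, refl, rcond=None)
print(f"f_iso = {f_iso:.3f}, f_vol = {f_vol:.3f}")
```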
NASA Astrophysics Data System (ADS)
Zhang, Qian; Chen, Jing; Zhang, Yongguang; Qiu, Feng; Fan, Weiliang; Ju, Weimin
2017-04-01
The gross primary production (GPP) of terrestrial ecosystems constitutes the largest global land carbon flux and exhibits significant spatial and temporal variations. Due to its wide spatial coverage, remote sensing technology is useful for improving the estimation of GPP in combination with light use efficiency (LUE) models. Accurate estimation of LUE is essential for calculating GPP using remote sensing data and LUE models at regional and global scales. A promising remote sensing method for estimating LUE is the photochemical reflectance index (PRI = (R531 - R570)/(R531 + R570), where R531 and R570 are reflectances at wavelengths 531 and 570 nm). However, it has been documented that there are certain issues with PRI at the canopy scale which need to be considered systematically. For this purpose, an improved tower-based automatic canopy multi-angle hyperspectral observation system has been operating at the Qianyanzhou flux station in China since January 2013. In each 15-minute observation cycle, PRI was observed at four view zenith angles, fixed at the solar zenith angle and at (37°, 47°, 57°) or (42°, 52°, 62°), over the azimuth angle range from 45° to 325° (defined from geodetic north). To improve the ability of directional PRI observations to track canopy LUE, the canopy is treated as two big leaves, i.e. sunlit and shaded leaves. On the basis of a geometrical optical model, the observed canopy reflectance for each view angle is separated into four components, i.e. sunlit and shaded leaves and sunlit and shaded backgrounds. To determine the fractions of these four components at each view angle, three models based on different theories are tested for simulating the fraction of sunlit leaves. Finally, a ratio of canopy reflectance to leaf reflectance is used to represent the fraction of sunlit leaves, and the fraction of shaded leaves is calculated with the four-scale geometrical optical model. Thus, sunlit and shaded PRI are estimated using least squares regression with the multi-angle observations. At both half-hourly and daily time steps, the canopy-level two-leaf PRI (PRIt) effectively enhances (by >50% and >35%, respectively) the correlation between PRI and LUE derived from the tower flux measurements, relative to the big-leaf PRI (PRIb) taken as the arithmetic average of the multi-angle measurements in a given time interval. PRIt is very effective in detecting low-to-moderate drought stress on LUE at half-hourly time steps, while it is ineffective in detecting severe atmospheric water and heat stresses, which is probably due to an alternative radiative energy sink, i.e. photorespiration.
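The separation into sunlit and shaded components described above amounts, per time step, to a small linear system: each view angle j gives PRI_obs(j) ≈ f_sun(j)·PRI_sun + f_shade(j)·PRI_shade, with the fractions supplied by the geometric-optical model. The sketch below solves such a system by least squares; the fractions and observed PRIs are made-up numbers standing in for the tower measurements.

```python
import numpy as np

# Per-view-angle fractions of sunlit and shaded leaves seen by the sensor
# (placeholders for the four-scale geometric-optical model output).
f_sun = np.array([0.78, 0.65, 0.52, 0.40])
f_shade = 1.0 - f_sun

# Multi-angle PRI observations for one 15-minute cycle (made-up values).
pri_obs = np.array([-0.015, -0.022, -0.030, -0.037])

# Least-squares estimate of sunlit and shaded leaf PRI.
A = np.column_stack([f_sun, f_shade])
(pri_sun, pri_shade), *_ = np.linalg.lstsq(A, pri_obs, rcond=None)

# Angularly normalized two-leaf canopy PRI, weighted by canopy-level fractions
# of sunlit and shaded leaves (the weight below is also a placeholder).
w_sun = 0.6
pri_two_leaf = w_sun * pri_sun + (1.0 - w_sun) * pri_shade
print(f"PRI_sun = {pri_sun:.4f}, PRI_shade = {pri_shade:.4f}, two-leaf PRI = {pri_two_leaf:.4f}")
```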
Method and systems for collecting data from multiple fields of view
NASA Technical Reports Server (NTRS)
Schwemmer, Geary K. (Inventor)
2002-01-01
Systems and methods for processing light from multiple fields (48, 54, 55) of view without excessive machinery for scanning optical elements. In an exemplary embodiment of the invention, multiple holographic optical elements (41, 42, 43, 44, 45), integrated on a common film (4), diffract and project light from respective fields of view.
ERIC Educational Resources Information Center
Dreher, Anika; Kuntze, Sebastian; Lerman, Stephen
2016-01-01
Dealing with multiple representations and their connections plays a key role for learners to build up conceptual knowledge in the mathematics classroom. Hence, professional knowledge and views of mathematics teachers regarding the use of multiple representations certainly merit attention. In particular, investigating such views of preservice…
Nguyen, Dorothy; Vedamurthy, Indu; Schor, Clifton
2008-03-01
Accommodation and convergence systems are cross-coupled so that stimulation of one system produces responses by both systems. Ideally, the cross-coupled responses of accommodation and convergence match their respective stimuli. When expressed in diopters and meter angles, respectively, stimuli for accommodation and convergence are equal in the mid-sagittal plane when viewed with symmetrical convergence, where historically, the gains of the cross coupling (AC/A and CA/C ratios) have been quantified. However, targets at non-zero azimuth angles, when viewed with asymmetric convergence, present unequal stimuli for accommodation and convergence. Are the cross-links between the two systems calibrated to compensate for stimulus mismatches that increase with gaze-azimuth? We measured the response AC/A and stimulus CA/C ratios at zero azimuth, 17.5 and 30 deg of rightward gaze eccentricities with a Badal Optometer and Wheatstone-mirror haploscope. AC/A ratios were measured under open-loop convergence conditions along the iso-accommodation circle (locus of points that stimulate approximately equal amounts of accommodation to the two eyes at all azimuth angles). CA/C ratios were measured under open-loop accommodation conditions along the iso-vergence circle (locus of points that stimulate constant convergence at all azimuth angles). Our results show that the gain of accommodative-convergence (AC/A ratio) decreased and the bias of convergence-accommodation increased at the 30 deg gaze eccentricity. These changes are in directions that compensate for stimulus mismatches caused by spatial-viewing geometry during asymmetric convergence.
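To unpack the statement that the two stimuli are equal in the mid-sagittal plane: the accommodative stimulus in diopters and the convergence stimulus in meter angles are both the reciprocal of the viewing distance in meters. The short example below evaluates both, along with the equivalent prism-diopter demand for an assumed 6-cm interpupillary distance (a value chosen for illustration, not taken from the paper).

```python
# Stimuli for a target in the mid-sagittal plane viewed with symmetric convergence.
viewing_distance_m = 0.5          # example viewing distance
ipd_cm = 6.0                      # assumed interpupillary distance, cm

accommodation_D = 1.0 / viewing_distance_m      # diopters
convergence_MA = 1.0 / viewing_distance_m       # meter angles (equal by definition)
convergence_PD = convergence_MA * ipd_cm        # prism diopters = MA x IPD(cm)

print(accommodation_D, convergence_MA, convergence_PD)   # 2.0 D, 2.0 MA, 12.0 prism diopters
```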
Thilak, Vimal; Voelz, David G; Creusere, Charles D
2007-10-20
A passive-polarization-based imaging system records the polarization state of light reflected by objects that are illuminated with an unpolarized and generally uncontrolled source. Such systems can be useful in many remote sensing applications including target detection, object segmentation, and material classification. We present a method to jointly estimate the complex index of refraction and the reflection angle (reflected zenith angle) of a target from multiple measurements collected by a passive polarimeter. An expression for the degree of polarization is derived from the microfacet polarimetric bidirectional reflectance model for the case of scattering in the plane of incidence. Using this expression, we develop a nonlinear least-squares estimation algorithm for extracting an apparent index of refraction and the reflection angle from a set of polarization measurements collected from multiple source positions. Computer simulation results show that the estimation accuracy generally improves with an increasing number of source position measurements. Laboratory results indicate that the proposed method is effective for recovering the reflection angle and that the estimated index of refraction provides a feature vector that is robust to the reflection angle.
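To give a flavor of the estimation step, the sketch below uses the degree of polarization of unpolarized light specularly reflected from a dielectric, computed from the Fresnel equations, as a simplified stand-in for the microfacet pBRDF expression in the paper, and recovers the refractive index and base reflection angle from measurements taken at several known angular offsets (i.e., several source positions). The noise level, offsets, and true parameters are all invented for the demonstration.

```python
import numpy as np
from scipy.optimize import least_squares

def dop_fresnel(theta_deg, n):
    """Degree of polarization of unpolarized light specularly reflected
    from a dielectric of relative refractive index n at incidence theta."""
    ti = np.radians(theta_deg)
    tt = np.arcsin(np.clip(np.sin(ti) / n, -1.0, 1.0))   # Snell's law
    rs = (np.cos(ti) - n * np.cos(tt)) / (np.cos(ti) + n * np.cos(tt))
    rp = (n * np.cos(ti) - np.cos(tt)) / (n * np.cos(ti) + np.cos(tt))
    Rs, Rp = rs**2, rp**2
    return np.abs(Rs - Rp) / (Rs + Rp)

# Synthetic measurements: unknown base reflection angle plus known offsets
true_n, true_theta = 1.55, 48.0
offsets = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])        # known source-position shifts, deg
rng = np.random.default_rng(2)
meas = dop_fresnel(true_theta + offsets, true_n) + 0.005 * rng.standard_normal(offsets.size)

def residuals(params):
    n, theta = params
    return dop_fresnel(theta + offsets, n) - meas

fit = least_squares(residuals, x0=[1.4, 40.0], bounds=([1.1, 10.0], [2.5, 80.0]))
print("estimated (n, reflection angle):", fit.x)
```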
NASA Astrophysics Data System (ADS)
Smith, Brandon M.; Stork, David G.; Zhang, Li
2009-01-01
The problem of reconstructing a three-dimensional scene from single or multiple views has been thoroughly studied in the computer vision literature, and recently has been applied to problems in the history of art. Criminisi pioneered the application of single-view metrology to reconstructing the fictive spaces in Renaissance paintings, such as the vault in Masaccio's Trinità and the plaza in Piero della Francesca's Flagellazione. While the vast majority of realist paintings provide but a single view, some provide multiple views, through mirrors depicted within their tableaus. The contemporary American realist Scott Fraser's Three way vanitas is a highly realistic still-life containing three mirrors; each mirror provides a new view of the objects in the tableau. We applied multiple-view reconstruction methods to the direct image and the images reflected by these mirrors to reconstruct the three-dimensional tableau. Our methods estimate virtual viewpoints for each view using the geometric constraints provided by the direct view of the mirror frames, along with the reflected images themselves. Moreover, our methods automatically discover inconsistencies between the different views, including ones that might elude careful scrutiny by eye, for example the fact that the height of the water in the glass differs between the direct view and that in the mirror at the right. We believe our work provides the first application of multiple-view reconstruction to a single painting and will have application to other paintings and questions in the history of art.
Modeling radiative transfer with the doubling and adding approach in a climate GCM setting
NASA Astrophysics Data System (ADS)
Lacis, A. A.
2017-12-01
The nonlinear dependence of multiply scattered radiation on particle size, optical depth, and solar zenith angle makes accurate treatment of multiple scattering in the climate GCM setting problematic, due primarily to computational cost. The accurate multiple-scattering methods that are available are far too computationally expensive for climate GCM applications, while two-stream-type radiative transfer approximations may be fast enough but at the cost of reduced accuracy. We describe here a parameterization of the doubling/adding method that is being used in the GISS climate GCM: an adaptation of the doubling/adding formalism configured to operate with a look-up table utilizing a single Gauss quadrature point with an extra-angle formulation. It is designed to closely reproduce the accuracy of full-angle doubling and adding for the multiple scattering effects of clouds and aerosols in a realistic atmosphere as a function of particle size, optical depth, and solar zenith angle. With an additional inverse look-up table, this single-Gauss-point doubling/adding approach can be adapted to model fractional cloud cover for any GCM grid box in the independent pixel approximation as a function of the fractional cloud particle sizes, optical depths, and solar zenith angle dependence.
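For readers unfamiliar with the doubling/adding formalism, the scalar sketch below shows the core recursion: a layer's reflection and transmission are built up by repeatedly combining two identical sublayers, with the geometric series of inter-layer reflections summed in closed form. This is a two-stream-like caricature with an assumed thin-layer initialization, not the GISS parameterization itself, which operates on an angular quadrature (a single Gauss point plus the extra-angle formulation) and look-up tables.

```python
import numpy as np

def add_layers(r1, t1, r2, t2):
    """Adding equations (scalar form): combine two layers, summing the
    geometric series of reflections bouncing between them."""
    d = 1.0 / (1.0 - r1 * r2)
    return r1 + t1 * r2 * t1 * d, t1 * t2 * d

def doubling(omega0, g, tau, dtau_init=1e-5):
    """Build a layer of optical depth tau by repeated doubling, starting
    from an optically thin sublayer initialized with single scattering.
    A two-stream-like scalar caricature of the full angular method."""
    n_doubles = max(0, int(np.ceil(np.log2(tau / dtau_init))))
    dtau = tau / 2.0**n_doubles
    b = 0.5 * (1.0 - g)                            # backscattered fraction
    r = omega0 * b * dtau                          # thin-layer reflection
    t = 1.0 - dtau + omega0 * (1.0 - b) * dtau     # direct + forward-scattered
    for _ in range(n_doubles):
        r, t = add_layers(r, t, r, t)
    return r, t

# Illustrative cloud layer: albedo 0.999, asymmetry 0.85, optical depth 10.
print(doubling(omega0=0.999, g=0.85, tau=10.0))
```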
NASA Astrophysics Data System (ADS)
Hsieh, Yi-Kai; Omura, Yoshiharu
2017-10-01
We investigate the properties of whistler mode wave-particle interactions at oblique wave normal angles to the background magnetic field. We find that electromagnetic energy of waves at frequencies below half the electron cyclotron frequency can flow nearly parallel to the ambient magnetic field. We thereby confirm that the gyroaveraging method, which averages the cyclotron motion to the gyrocenter and reduces the simulation from two-dimensional to one-dimensional, is valid for oblique wave-particle interactions. Multiple resonances appear for oblique propagation but not for parallel propagation. We calculate the possible range of resonances with the first-order resonance condition as a function of electron kinetic energy and equatorial pitch angle. To reveal the physical process and the efficiency of electron acceleration by multiple resonances, we assume a simple uniform wave model with constant amplitude and frequency in space and time. We perform test particle simulations with electrons starting at specific equatorial pitch angles and kinetic energies. The simulation results show that multiple resonances contribute to acceleration and pitch angle scattering of energetic electrons. In particular, we find that electrons with energies of a few hundred keV can be accelerated efficiently to a few MeV through the n = 0 Landau resonance.
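The first-order resonance condition mentioned above can be written as ω − k∥v∥ = nΩe/γ. The sketch below solves it numerically for the resonant kinetic energy at a fixed equatorial pitch angle for several harmonics n; the gyrofrequency, wave frequency, and parallel wavenumber are illustrative stand-ins (k∥ would normally come from the cold-plasma dispersion relation), and v∥ is taken positive along the wave vector.

```python
import numpy as np
from scipy.optimize import brentq

C = 2.998e8          # speed of light [m/s]

def resonant_energy_keV(n, w_over_wce, k_par, fce_hz, pitch_deg):
    """Kinetic energy (keV) satisfying the first-order resonance condition
    omega - k_par * v_par = n * Omega_e / gamma at a given equatorial
    pitch angle.  Returns None if no resonance exists for that harmonic."""
    wce = 2 * np.pi * fce_hz                 # electron gyrofrequency [rad/s]
    w = w_over_wce * wce
    cos_a = np.cos(np.radians(pitch_deg))

    def mismatch(v):
        gamma = 1.0 / np.sqrt(1.0 - (v / C)**2)
        return w - k_par * v * cos_a - n * wce / gamma

    # scan speed for a sign change, then refine the root with brentq
    vs = np.linspace(1e-4 * C, 0.999 * C, 2000)
    f = np.array([mismatch(v) for v in vs])
    idx = np.where(np.sign(f[:-1]) != np.sign(f[1:]))[0]
    if len(idx) == 0:
        return None
    v_res = brentq(mismatch, vs[idx[0]], vs[idx[0] + 1])
    gamma = 1.0 / np.sqrt(1.0 - (v_res / C)**2)
    return (gamma - 1.0) * 511.0             # m_e c^2 = 511 keV

# Illustrative numbers only: f_ce = 7 kHz, wave at 0.3 f_ce,
# k_parallel = 1e-4 rad/m, 30 deg equatorial pitch angle.
for n in (-1, 0, 1, 2):
    print(n, resonant_energy_keV(n, 0.3, 1e-4, 7e3, 30.0))
```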
Alward, Wallace L M
2011-01-01
The first view of the iridocorneal angle in a living human occurred accidentally in the late 1800s. Lenses were first used to see the angle in 1914, but practical gonioscopy would not come into existence for many years as the slitlamp and lenses that could be used at the slitlamp were developed. This article reviews the history of gonioscopy.
Master Volunteer Life Cycle: A Wide Angle Lens on the Volunteer Experience
ERIC Educational Resources Information Center
Strauss, Andrea Lorek; Rager, Amy
2017-01-01
Extension master volunteer programs, such as master naturalist and master gardener, often focus heavily on volunteer education. The model presented here describes the full life cycle of a master volunteer's experience in the program, putting education in the context of other essential program components. By zooming out to a wide-angle view of the…
6. Elevation view of east side of southernmost end of ...
6. Elevation view of east side of southernmost end of building. When joined with photo WA-116-A-7, these photos give a virtually complete elevation view of the east side of the 1896 south section of Building 59. Note that the steep angle of view gives the illusion of a flat roof. For a more accurate depiction of the roof slope, see previous photos, including WA-116-5. - Puget Sound Naval Shipyard, Pattern Shop, Farragut Avenue, Bremerton, Kitsap County, WA
Telepractice: A Wide-Angle View for Persons with Hearing Loss
ERIC Educational Resources Information Center
Cohn, Ellen R.; Cason, Jana
2012-01-01
This paper presents the current status of telepractice as a service delivery model for persons with hearing loss. Telepractice can be broadly viewed as the delivery of preventative, habilitation, or rehabilitation services through telecommunications technology. Telemedicine and telehealth are closely aligned to telepractice, often with overlapping…
INTERIOR VIEW, PASSAGE AND DOOR LETTING ONTO THE SOUTHEAST BED ...
INTERIOR VIEW, PASSAGE AND DOOR LETTING ONTO THE SOUTHEAST BED CHAMBER. THE ANGLED PASSAGE RUNS PARALLEL TO WHAT WAS AN EXTERIOR WALL OF THE THREE-SIDED WINDOW BOW PRESENT IN THE HOUSE'S ORIGINAL CA. 1770 STATE - The Woodlands, 4000 Woodlands Avenue, Philadelphia, Philadelphia County, PA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Y; Yin, F; Ren, L
Purpose: To develop an adaptive prior-knowledge-based image estimation method to reduce the scan angle needed in the LIVE system to reconstruct 4D-CBCT for intrafraction verification. Methods: The LIVE system has been previously proposed to reconstruct 4D volumetric images on-the-fly during arc treatment for intrafraction target verification and dose calculation. This system uses limited-angle beam's eye view (BEV) MV cine images acquired from the treatment beam together with orthogonally acquired limited-angle kV projections to reconstruct 4D-CBCT images for target verification during treatment. In this study, we developed an adaptive constrained free-form deformation reconstruction technique in LIVE to further reduce the scanning angle needed to reconstruct the CBCT images. This technique uses free-form deformation with energy minimization to deform prior images to estimate 4D-CBCT based on projections acquired over a limited angle (orthogonal 6°) during the treatment. Note that the prior images are adaptively updated using the latest CBCT images reconstructed by LIVE during treatment to utilize the continuity of patient motion. The 4D digital extended-cardiac-torso (XCAT) phantom was used to evaluate the efficacy of this technique with the LIVE system. A lung patient was simulated with different scenarios, including baseline drift, amplitude change, and phase shift. Limited-angle orthogonal kV and beam's eye view (BEV) MV projections were generated for each scenario. The CBCT images reconstructed from these projections were compared with the ground truth generated in XCAT. Volume-percentage-difference (VPD) and center-of-mass-shift (COMS) were calculated between the reconstructed and ground-truth tumors to evaluate the reconstruction accuracy. Results: Using orthogonal-view 6° kV and BEV MV projections, the VPD/COMS values were 12.7±4.0%/0.7±0.5 mm, 13.0±5.1%/0.8±0.5 mm, and 11.4±5.4%/0.5±0.3 mm for the three scenarios, respectively. Conclusion: The technique enables LIVE to accurately reconstruct 4D-CBCT images using only an orthogonal 6° angle, which greatly improves the efficiency and reduces the imaging dose of LIVE for intrafraction verification.
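The two evaluation metrics named above can be computed from binary tumor masks as in the sketch below. The exact VPD normalization is not spelled out in the abstract, so the snippet assumes the mismatched (XOR) volume expressed as a percentage of the ground-truth volume, with COMS taken as the Euclidean distance between centers of mass in millimeters.

```python
import numpy as np

def vpd_coms(recon_mask, truth_mask, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Volume-percentage-difference (VPD) and center-of-mass shift (COMS)
    between reconstructed and ground-truth tumor masks (3-D boolean arrays).
    VPD is taken here as the mismatched (XOR) volume relative to the
    ground-truth volume, an assumed definition."""
    recon, truth = recon_mask.astype(bool), truth_mask.astype(bool)
    vpd = 100.0 * np.logical_xor(recon, truth).sum() / truth.sum()
    voxel = np.asarray(voxel_size_mm)
    com_r = np.array(np.nonzero(recon)).mean(axis=1) * voxel
    com_t = np.array(np.nonzero(truth)).mean(axis=1) * voxel
    coms_mm = np.linalg.norm(com_r - com_t)
    return vpd, coms_mm

# Toy example: two slightly offset spheres on a 64^3 grid, 2 mm voxels.
z, y, x = np.mgrid[:64, :64, :64]
truth = (z - 32)**2 + (y - 32)**2 + (x - 32)**2 < 10**2
recon = (z - 33)**2 + (y - 32)**2 + (x - 31)**2 < 10**2
print(vpd_coms(recon, truth, voxel_size_mm=(2.0, 2.0, 2.0)))
```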
Bright color optical switching device by polymer network liquid crystal with a specular reflector.
Lee, Gae Hwang; Hwang, Kyu Young; Jang, Jae Eun; Jin, Yong Wan; Lee, Sang Yoon; Jung, Jae Eun
2011-07-04
The color optical switching device based on polymer network liquid crystal (PNLC) with a color filter on a specular reflector shows excellent performance: white reflectance of 22%, color gamut of 32%, and contrast ratio up to 50:1 in reflective-mode measurement. The view-angle dependence of the reflectance can be adjusted by changing the PNLC thickness. The color chromaticity shown by the device is close to the limit value of the color filters, and it remains nearly constant with respect to the operating voltage. These optical properties of the device can be explained by predictions based on multiple interactions between the light and the droplets of liquid crystal. The high reflectance, vivid color image, and moderate response time allow the PNLC device to display good color moving images, which can widely extend the applications of reflective devices.
Multiple magnetic scattering in small-angle neutron scattering of Nd-Fe-B nanocrystalline magnet.
Ueno, Tetsuro; Saito, Kotaro; Yano, Masao; Ito, Masaaki; Shoji, Tetsuya; Sakuma, Noritsugu; Kato, Akira; Manabe, Akira; Hashimoto, Ai; Gilbert, Elliot P; Keiderling, Uwe; Ono, Kanta
2016-06-20
We have investigated the influence of multiple scattering on the magnetic small-angle neutron scattering (SANS) from a Nd-Fe-B nanocrystalline magnet. We performed sample-thickness- and neutron-wavelength-dependent SANS measurements and observed the scattering-vector dependence of the multiple magnetic scattering. It is revealed that significant multiple scattering exists in the magnetic scattering rather than in the nuclear scattering of the Nd-Fe-B nanocrystalline magnet, suggesting that the mean free path of neutrons for magnetic scattering is rather short in Nd-Fe-B magnets. We analysed the SANS data with a phenomenological magnetic correlation model that accounts for the magnetic microstructure and obtained the microstructural parameters.
Harnessing Genetic Variation in Leaf Angle to Increase Productivity of Sorghum bicolor
Truong, Sandra K.; McCormick, Ryan F.; Rooney, William L.; Mullet, John E.
2015-01-01
The efficiency with which a plant intercepts solar radiation is determined primarily by its architecture. Understanding the genetic regulation of plant architecture and how changes in architecture affect performance can be used to improve plant productivity. Leaf inclination angle, the angle at which a leaf emerges with respect to the stem, is a feature of plant architecture that influences how a plant canopy intercepts solar radiation. Here we identify extensive genetic variation for leaf inclination angle in the crop plant Sorghum bicolor, a C4 grass species used for the production of grain, forage, and bioenergy. Multiple genetic loci that regulate leaf inclination angle were identified in recombinant inbred line populations of grain and bioenergy sorghum. Alleles of sorghum dwarf-3, a gene encoding a P-glycoprotein involved in polar auxin transport, are shown to change leaf inclination angle by up to 34° (0.59 rad). The impact of heritable variation in leaf inclination angle on light interception in sorghum canopies was assessed using functional-structural plant models and field experiments. Smaller leaf inclination angles caused solar radiation to penetrate deeper into the canopy, and the resulting redistribution of light is predicted to increase the biomass yield potential of bioenergy sorghum by at least 3%. These results show that sorghum leaf angle is a heritable trait regulated by multiple loci and that genetic variation in leaf angle can be used to modify plant architecture to improve sorghum crop performance. PMID:26323882
Raj, Retheep; Sivanandan, K S
2017-01-01
Estimation of elbow dynamics has been the object of numerous investigations. In this work, a solution is proposed for estimating elbow movement velocity and elbow joint angle from surface electromyography (SEMG) signals. The SEMG signals are acquired from the biceps brachii muscle of the human arm. Two time-domain parameters, Integrated EMG (IEMG) and Zero Crossing (ZC), are extracted from the SEMG signal. The relationships of the time-domain parameters IEMG and ZC with elbow angular displacement and elbow angular velocity during extension and flexion of the elbow are studied. A multiple-input multiple-output model is derived for identifying the kinematics of the elbow. A Nonlinear Auto-Regressive with eXogenous inputs (NARX) structure based on a multilayer perceptron neural network (MLPNN) is proposed for the estimation of elbow joint angle and elbow angular velocity. The proposed NARX MLPNN model is trained using a Levenberg-Marquardt-based algorithm. The model estimates the elbow joint angle and elbow angular velocity with appreciable accuracy and is validated using the regression coefficient value (R). The average regression coefficient obtained for elbow angular displacement prediction is 0.9641, and that for elbow angular velocity prediction is 0.9347. The NARX-structured MLPNN model can thus be used for estimating the angular displacement and angular velocity of the elbow with good accuracy.
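The two time-domain features used as model inputs are simple to compute per analysis window, as the sketch below illustrates; the window length, sampling rate, and zero-crossing amplitude threshold are illustrative choices, not values taken from the paper.

```python
import numpy as np

def iemg_zc(emg_window, zc_threshold=0.01):
    """Two time-domain SEMG features: Integrated EMG (IEMG), the sum of
    absolute amplitudes over the window, and the Zero Crossing (ZC) count
    with a small amplitude threshold to suppress noise-induced crossings."""
    x = np.asarray(emg_window, dtype=float)
    iemg = np.sum(np.abs(x))
    crossings = (x[:-1] * x[1:] < 0) & (np.abs(x[:-1] - x[1:]) >= zc_threshold)
    return iemg, int(np.sum(crossings))

# Example: a 200-sample window of synthetic band-limited noise.
rng = np.random.default_rng(0)
window = np.convolve(rng.standard_normal(200), np.ones(5) / 5, mode="same")
print(iemg_zc(window))
```

Feature pairs computed this way over successive windows would form the exogenous input sequence of the NARX model; the network itself is not reproduced here.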
Multi-angle VECSEL cavities for dispersion control and multi-color operation
NASA Astrophysics Data System (ADS)
Baker, Caleb; Scheller, Maik; Laurain, Alexandre; Yang, Hwang-Jye; Ruiz Perez, Antje; Stolz, Wolfgang; Addamane, Sadhvikas J.; Balakrishnan, Ganesh; Jones, R. Jason; Moloney, Jerome V.
2017-02-01
We present a novel Vertical External Cavity Surface Emitting Laser (VECSEL) cavity design which makes use of multiple interactions with the gain region under different angles of incidence in a single round trip. This design allows for optimization of the net, round-trip Group Delay Dispersion (GDD) by shifting the GDD of the gain via cavity fold angle while still maintaining the high gain of resonant structures. The effectiveness of this scheme is demonstrated with femtosecond-regime pulses from a resonant structure and record pulse energies for the VECSEL gain medium. In addition, we show that the interference pattern of the intracavity mode within the active region, resulting from the double-angle multifold, is advantageous for operating the laser in CW on multiple wavelengths simultaneously. Power, noise, and mode competition characterization is presented.
A beam-splitter-type 3-D endoscope for front view and front-diagonal view images.
Kamiuchi, Hiroki; Masamune, Ken; Kuwana, Kenta; Dohi, Takeyoshi; Kim, Keri; Yamashita, Hiromasa; Chiba, Toshio
2013-01-01
In endoscopic surgery, surgeons must manipulate an endoscope inside the body cavity to observe a large field of view while estimating the distance between surgical instruments and the affected area from the size or motion of the instruments in 2-D endoscopic images on a monitor. There is therefore a risk of the endoscope or surgical instruments physically damaging body tissues. To overcome this problem, we developed a Ø7-mm 3-D endoscope that can switch between front and front-diagonal 3-D views by simply rotating its sleeves. This 3-D endoscope consists of a conventional 3-D endoscope and an outer and inner sleeve with a beam splitter and polarization plates. The beam splitter, used for visualizing both the front and front-diagonal views, was set at 25° to the outer sleeve's distal end in order to eliminate a blind spot common to both views. Polarization plates were used to avoid overlap of the two views. We measured the signal-to-noise ratio (SNR), sharpness, chromatic aberration (CA), and viewing angle of this 3-D endoscope and evaluated its feasibility in vivo. Compared with the conventional 3-D endoscope, the SNR and sharpness decreased by 20% and 7%, respectively, and no significant difference was found in CA. The viewing angle for both the front and front-diagonal views was about 50°. In the in vivo experiment, the endoscope provided clear 3-D images of both views by simply rotating its inner sleeve. Because the developed 3-D endoscope can provide the front and front-diagonal views by simply rotating the inner sleeve, the risk of damage to fragile body tissues can be significantly decreased.
Angular distribution of diffuse reflectance from incoherent multiple scattering in turbid media.
Gao, M; Huang, X; Yang, P; Kattawar, G W
2013-08-20
The angular distribution of diffuse reflection is elucidated by studying a homogeneous turbid medium. We modeled the medium as an infinite slab and studied the dependence of the reflection on three parameters: the incident direction, the optical depth, and the asymmetry factor. The diffuse reflection is produced by incoherent multiple scattering and is solved through radiative transfer theory. At large optical depths, the angular distribution of the diffuse reflection with small incident angles is similar to that of a Lambertian surface, but with incident angles larger than 60° the angular distributions have a prominent reflection peak around the specular reflection angle. These reflection peaks are found to originate from scattering within one transport mean free path of the top layer of the medium. The maximum reflection angles for different incident angles are analyzed and can characterize the structure of the angular distributions for different asymmetry factors and optical depths. The properties of the angular distribution can be applied to more complex systems for a better understanding of diffuse reflection.
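Although the paper solves the radiative transfer equation directly, the same physics can be illustrated with a small Monte Carlo sketch: photons enter a plane-parallel slab at a chosen incidence angle, scatter according to a Henyey-Greenstein phase function, and the exit zenith cosines of photons leaving the top face are histogrammed. Parameter values and the histogram binning are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_hg_cosine(g):
    """Sample the cosine of the scattering angle from the Henyey-Greenstein
    phase function with asymmetry factor g."""
    if abs(g) < 1e-6:
        return 2.0 * rng.random() - 1.0
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

def diffuse_reflection(theta_inc_deg, tau, omega0, g, n_photons=50_000):
    """Monte Carlo estimate of the angular distribution of light diffusely
    reflected by a plane-parallel slab of optical depth tau, single-scattering
    albedo omega0 and HG asymmetry factor g."""
    exit_mu = []
    for _ in range(n_photons):
        u = np.cos(np.radians(theta_inc_deg))    # direction cosine, +z into slab
        z = 0.0                                  # optical depth below the top
        while True:
            z += u * (-np.log(rng.random()))     # optical path to next event
            if z < 0.0:                          # escaped through the top face
                exit_mu.append(-u)
                break
            if z > tau:                          # transmitted through the bottom
                break
            if rng.random() > omega0:            # absorbed
                break
            cos_t = sample_hg_cosine(g)          # scatter into a new direction
            sin_t = np.sqrt(max(0.0, 1.0 - cos_t * cos_t))
            phi = 2.0 * np.pi * rng.random()
            u = u * cos_t + np.sqrt(max(0.0, 1.0 - u * u)) * sin_t * np.cos(phi)
    hist, edges = np.histogram(exit_mu, bins=18, range=(0.0, 1.0))
    return hist / n_photons, edges               # fraction of photons per mu bin

refl_70, mu_edges = diffuse_reflection(theta_inc_deg=70, tau=10, omega0=0.95, g=0.8)
print(refl_70)
```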
Military display performance parameters
NASA Astrophysics Data System (ADS)
Desjardins, Daniel D.; Meyer, Frederick
2012-06-01
The military display market is analyzed in terms of four of its segments: avionics, vetronics, dismounted soldier, and command and control. Requirements are summarized for a number of technology-driving parameters, to include luminance, night vision imaging system compatibility, gray levels, resolution, dimming range, viewing angle, video capability, altitude, temperature, shock and vibration, etc., for direct-view and virtual-view displays in cockpits and crew stations. Technical specifications are discussed for selected programs.
Comparing artistic and geometrical perspective depictions of space in the visual field
Baldwin, Joseph; Burleigh, Alistair; Pepperell, Robert
2014-01-01
Which is the most accurate way to depict space in our visual field? Linear perspective, a form of geometrical perspective, has traditionally been regarded as the correct method of depicting visual space. But artists have often found it is limited in the angle of view it can depict; wide-angle scenes require uncomfortably close picture viewing distances or impractical degrees of enlargement to be seen properly. Other forms of geometrical perspective, such as fisheye projections, can represent wider views but typically produce pictures in which objects appear distorted. In this study we created an artistic rendering of a hemispherical visual space that encompassed the full visual field. We compared it to a number of geometrical perspective projections of the same space by asking participants to rate which best matched their visual experience. We found the artistic rendering performed significantly better than the geometrically generated projections. PMID:26034563
Multiple-Flat-Panel System Displays Multidimensional Data
NASA Technical Reports Server (NTRS)
Gundo, Daniel; Levit, Creon; Henze, Christopher; Sandstrom, Timothy; Ellsworth, David; Green, Bryan; Joly, Arthur
2006-01-01
The NASA Ames hyperwall is a display system designed to facilitate the visualization of sets of multivariate and multidimensional data like those generated in complex engineering and scientific computations. The hyperwall includes a 7×7 matrix of computer-driven flat-panel video display units, each presenting an image of 1,280 × 1,024 pixels. The term hyperwall reflects the fact that this system is a more capable successor to prior computer-driven multiple-flat-panel display systems known by names that include the generic term powerwall and the trade names PowerWall and Powerwall. Each of the 49 flat-panel displays is driven by a rack-mounted, dual-central-processing-unit, workstation-class personal computer equipped with a high-performance graphical-display circuit card and with a hard-disk drive having a storage capacity of 100 GB. Each such computer is a slave node in a master/slave computing/data-communication system (see Figure 1). The computer that acts as the master node is similar to the slave-node computers, except that it runs the master portion of the system software and is equipped with a keyboard and mouse for control by a human operator. The system utilizes commercially available master/slave software along with custom software that enables the human controller to interact simultaneously with any number of selected slave nodes. In a powerwall, a single rendering task is spread across multiple processors and then the multiple outputs are tiled into one seamless super-display. It must be noted that the hyperwall concept subsumes the powerwall concept in that a single scene could be rendered as a mosaic image on the hyperwall. However, the hyperwall offers a wider set of capabilities to serve a different purpose: the hyperwall concept is one of (1) simultaneously displaying multiple different but related images, and (2) providing means for composing and controlling such sets of images. In place of elaborate software or hardware crossbar switches, the hyperwall concept substitutes reliance on the human visual system for integration, synthesis, and discrimination of patterns in complex and high-dimensional data spaces represented by the multiple displayed images. The variety of multidimensional data sets that can be displayed on the hyperwall is practically unlimited. For example, Figure 2 shows a hyperwall display of surface pressures and streamlines from a computational simulation of airflow about an aerospacecraft at various Mach numbers and angles of attack. In this display, Mach numbers increase from left to right and angles of attack increase from bottom to top. That is, all images in the same column represent simulations at the same Mach number, while all images in the same row represent simulations at the same angle of attack. The same viewing transformations and the same mapping from surface pressure to colors were used in generating all the images.
Spinning angle optical calibration apparatus
Beer, Stephen K.; Pratt, II, Harold R.
1991-01-01
An optical calibration apparatus is provided for calibrating and reproducing spinning angles in cross-polarization nuclear magnetic resonance spectroscopy. An illuminated magnifying apparatus enables optical setting and accurate reproduction of spinning "magic angles" in cross-polarization nuclear magnetic resonance spectroscopy experiments. A reference mark scribed on an edge of a spinning-angle test sample holder is illuminated by a light source and viewed through a magnifying scope. When the "magic angle" of a sample material used as a standard is attained by varying the angular position of the sample holder, the coordinate position of the reference mark relative to a graduation or graduations on a reticle in the magnifying scope is noted. Thereafter, the spinning "magic angle" of a test material having nuclear properties similar to the standard is attained by returning the sample holder to the originally noted coordinate position.
Determination of Ice Cloud Models Using MODIS and MISR Data
NASA Technical Reports Server (NTRS)
Xie, Yu; Yang, Ping; Kattawar, George W.; Minnis, Patrick; Hu, Yongxiang; Wu, Dong L.
2012-01-01
Representation of ice clouds in radiative transfer simulations is subject to uncertainties associated with the shapes and sizes of ice crystals within cirrus clouds. In this study, we examined several ice cloud models consisting of smooth, roughened, homogeneous, and inhomogeneous hexagonal ice crystals with various aspect ratios. The sensitivity of the bulk scattering properties and solar reflectances of cirrus clouds to a specific ice cloud model is investigated using the improved geometric optics method (IGOM) and the discrete ordinates radiative transfer (DISORT) model. The ice crystal habit fractions in the ice cloud model may significantly affect the simulations of cloud reflectances. A new algorithm was developed to help determine an appropriate ice cloud model for application to the satellite-based retrieval of ice cloud properties. The ice cloud particle size retrieved from Moderate Resolution Imaging Spectroradiometer (MODIS) data, collocated with Multi-angle Imaging Spectroradiometer (MISR) observations, is used to infer the optical thicknesses of ice clouds for the nine MISR viewing angles. The relative differences between the view-dependent cloud optical thickness and the value averaged over the nine MISR viewing angles can vary from -0.5 to 0.5 and are used to evaluate the ice cloud models. For the case of 2 July 2009, the ice cloud model with mixed ice crystal habits best fits the observations (the root-mean-square (RMS) error of cloud optical thickness is 0.365). This ice cloud model also produces consistent cloud property retrievals for the nine MISR viewing configurations within the measurement uncertainties.
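The consistency measure sketched below mirrors the evaluation described above: for a candidate ice cloud model, the nine view-dependent optical thickness retrievals are compared with their nine-view mean. The abstract does not give the exact normalization of the RMS score, so the snippet reports both the relative differences and the RMS of the absolute differences, with made-up retrieval values standing in for real MODIS/MISR data.

```python
import numpy as np

def view_angle_consistency(tau_by_view):
    """Relative difference of each view-dependent cloud optical thickness
    from the nine-view mean, plus the RMS of the absolute differences,
    the kind of consistency score used to rank candidate ice cloud models."""
    tau = np.asarray(tau_by_view, dtype=float)
    mean_tau = tau.mean()
    rel_diff = (tau - mean_tau) / mean_tau
    rms = np.sqrt(np.mean((tau - mean_tau)**2))
    return rel_diff, rms

# Illustrative retrievals for the nine MISR cameras (Df ... Da), not real data.
tau_nine = [2.1, 2.3, 2.2, 2.4, 2.5, 2.4, 2.6, 2.3, 2.2]
rel, rms = view_angle_consistency(tau_nine)
print(rel, rms)
```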
Grooves and Kinks in the Rings
2017-06-19
Many of the features seen in Saturn's rings are shaped by the planet's moons. This view from NASA's Cassini spacecraft shows two different effects of moons that cause waves in the A ring and kinks in a faint ringlet. The view captures the outer edge of the 200-mile-wide (320-kilometer-wide) Encke Gap, in the outer portion of Saturn's A ring. This is the same region that features the large propeller called Earhart. Also visible here is one of several kinked and clumpy ringlets found within the gap. Kinks and clumps in the Encke ringlet move about, and even appear and disappear, in part due to the gravitational effects of Pan -- which orbits in the gap and whose gravitational influence holds it open. The A ring, which takes up most of the image on the left side, displays wave features caused by Pan, as well as the moons Pandora and Prometheus, which orbit a bit farther from Saturn on both sides of the planet's F ring. This view was taken in visible light with the Cassini spacecraft narrow-angle camera on March 22, 2017, and looks toward the sunlit side of the rings from about 22 degrees above the ring plane. The view was acquired at a distance of approximately 63,000 miles (101,000 kilometers) from Saturn and at a phase angle (the angle between the sun, the rings and the spacecraft) of 59 degrees. Image scale is 1,979 feet (603 meters) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21333
A contact angle hysteresis model based on the fractal structure of contact line.
Wu, Shuai; Ma, Ming
2017-11-01
Contact angle is one of the most popular concepts used in fields such as wetting, transport, and microfluidics. In practice, different contact angles, such as the equilibrium, receding, and advancing contact angles, are observed due to hysteresis. The connection among these contact angles is important in revealing the chemical and physical properties of surfaces related to wetting. Inspired by the fractal structure of the contact line, we propose a single-parameter model depicting the connection of the three angles. This parameter is determined by the fractal structure of the contact line. The results of this model agree with experimental observations, and in certain cases it can be reduced to other existing models. It also provides a new point of view in understanding the physical nature of contact angle hysteresis. Interestingly, some counter-intuitive phenomena, such as the binary receding angles, are indicated in this model and await experimental validation.
Upper wide-angle viewing system for ITER.
Lasnier, C J; McLean, A G; Gattuso, A; O'Neill, R; Smiley, M; Vasquez, J; Feder, R; Smith, M; Stratton, B; Johnson, D; Verlaan, A L; Heijmans, J A C
2016-11-01
The Upper Wide Angle Viewing System (UWAVS) will be installed on five upper ports of ITER. This paper shows major requirements, gives an overview of the preliminary design with reasons for some design choices, examines self-emitted IR light from UWAVS optics and its effect on accuracy, and shows calculations of signal-to-noise ratios for the two-color temperature output as a function of integration time and divertor temperature. Accurate temperature output requires correction for vacuum window absorption vs. wavelength and for self-emitted IR, which requires good measurement of the temperature of the optical components. The anticipated signal-to-noise ratio using presently available IR cameras is adequate for the required 500 Hz frame rate.
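The two-color temperature referred to above is obtained by ratioing in-band radiances at two infrared wavelengths. The sketch below inverts that ratio in the Wien limit for narrow bands; the band centres and the gray-body emissivity assumption are illustrative, and the real UWAVS processing would also fold in the window-absorption and self-emission corrections discussed in the abstract.

```python
import numpy as np

C2 = 1.4388e-2   # second radiation constant [m K]

def wien_radiance(wavelength_m, T):
    """Spectral radiance in the Wien approximation (arbitrary common scale)."""
    return wavelength_m**-5 * np.exp(-C2 / (wavelength_m * T))

def two_color_temperature(L1, L2, lam1, lam2, eps1=1.0, eps2=1.0):
    """Invert the ratio of two in-band radiances for temperature, assuming
    narrow bands centred at lam1 < lam2 and the Wien limit."""
    num = C2 * (1.0 / lam1 - 1.0 / lam2)
    den = np.log(eps1 / eps2) + 5.0 * np.log(lam2 / lam1) - np.log(L1 / L2)
    return num / den

# Round trip at 600 K with 3.9 and 4.7 micron bands (illustrative values).
T_true, lam1, lam2 = 600.0, 3.9e-6, 4.7e-6
L1, L2 = wien_radiance(lam1, T_true), wien_radiance(lam2, T_true)
print(two_color_temperature(L1, L2, lam1, lam2))   # recovers ~600 K
```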
Three-dimensional liquid flattened Luneburg lens with ultra-wide viewing angle and frequency band
NASA Astrophysics Data System (ADS)
Wu, Lingling; Tian, Xiaoyong; Yin, Ming; Li, Dichen; Tang, Yiping
2013-08-01
The traditional Luneburg lens is a spherical dielectric antenna. It focuses incoming collimated electromagnetic waves onto its spherical surface, which makes it incompatible with planar feeding and receiving devices. Furthermore, difficulties in the fabrication process have also limited its applications. In this paper, a three-dimensional flattened Luneburg lens with a field-of-view angle up to 180° has been realized based on a liquid-medium approach and a 3D-printing process. The fabricated three-dimensional lens showed a broadband transmission characteristic from 12.4 GHz to 18 GHz. The performance of the proposed lens was demonstrated by simulation and experimental results.
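For context, the classical spherical Luneburg design that the flattened lens starts from has the closed-form gradient index n(r) = sqrt(2 − (r/R)²). The sketch below discretizes that profile into concentric shells, the kind of layered approximation a 3D-printed or liquid-filled implementation might use; the shell count and radius are arbitrary, and the actual flattening in the paper is done via a transformation-optics step not reproduced here.

```python
import numpy as np

def luneburg_index(r, R=1.0):
    """Classical Luneburg profile n(r) = sqrt(2 - (r/R)^2), which focuses an
    incoming plane wave onto the opposite point of the sphere's surface."""
    r = np.asarray(r, dtype=float)
    return np.sqrt(2.0 - (r / R)**2)

# Discretize into N concentric shells, e.g. for a layered dielectric build.
N, R = 8, 0.05                        # 8 shells, 50 mm radius (illustrative)
r_mid = (np.arange(N) + 0.5) * R / N  # shell mid-radii
for r, n in zip(r_mid, luneburg_index(r_mid, R)):
    print(f"r = {1e3*r:5.1f} mm  n = {n:.3f}  eps_r = {n**2:.3f}")
```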
Siddique, Radwanul Hasan; Gomard, Guillaume; Hölscher, Hendrik
2015-04-22
The glasswing butterfly (Greta oto) has, as its name suggests, transparent wings with remarkably low haze and reflectance over the whole visible spectral range, even for view angles as large as 80°. This omnidirectional anti-reflection behaviour is caused by small nanopillars covering the transparent regions of its wings. In contrast to other anti-reflection coatings found in nature, these pillars are irregularly arranged and feature a random height and width distribution. Here we simulate the optical properties with the effective medium theory and the transfer matrix method and show that the random height distribution of the pillars significantly reduces the reflection not only at normal incidence but also at high view angles.
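The simulation approach named above (effective medium theory plus the transfer matrix method) can be sketched compactly: the nanopillar layer is sliced into sublayers, each assigned an effective index from its local fill fraction, and the stack's normal-incidence reflectance follows from the characteristic-matrix product. The taper profile, mixing rule, pillar height, and chitin index below are assumptions standing in for the paper's measured height and width distributions.

```python
import numpy as np

def tmm_reflectance(n_layers, d_layers, n_in, n_sub, wavelength):
    """Normal-incidence reflectance of a layer stack via the characteristic
    (transfer) matrix method."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2.0 * np.pi * n * d / wavelength
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r)**2

# Graded effective index from tapered nanopillars: fill fraction grows from
# ~0 at the tips to ~1 at the base; a simple volume-average mixing rule is
# used as the effective-medium step (an assumption, not the paper's rule).
n_chitin, n_air = 1.56, 1.0
depth = np.linspace(0, 1, 20)                      # 20 sublayers, tips first
fill = depth**2                                    # assumed taper profile
n_eff = np.sqrt(fill * n_chitin**2 + (1 - fill) * n_air**2)
d_sub = np.full(20, 400e-9 / 20)                   # ~400 nm tall pillars
for lam in (450e-9, 550e-9, 650e-9):
    bare = tmm_reflectance([], [], n_air, n_chitin, lam)
    coated = tmm_reflectance(n_eff, d_sub, n_air, n_chitin, lam)
    print(f"{1e9*lam:.0f} nm  bare {100*bare:.2f}%  nanostructured {100*coated:.2f}%")
```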
Accommodation measurements of horizontally scanning holographic display.
Takaki, Yasuhiro; Yokouchi, Masahito
2012-02-13
Eye accommodation is considered to function properly for three-dimensional (3D) images generated by holography. We developed a horizontally scanning holographic display technique that enlarges both the screen size and viewing zone angle. A 3D image generated by this technique can be easily seen by both eyes. In this study, we measured the accommodation responses to a 3D image generated by the horizontally scanning holographic display technique that has a horizontal viewing zone angle of 14.6° and screen size of 4.3 in. We found that the accommodation responses to a 3D image displayed within 400 mm from the display screen were similar to those of a real object.
NASA Astrophysics Data System (ADS)
Teng, Dongdong; Liu, Lilin; Zhang, Yueli; Pang, Zhiyong; Wang, Biao
2014-09-01
Through the creative use of a shiftable cylindrical lens, a wide-view-angle holographic display system is developed for displaying medical objects in real three-dimensional (3D) space based on a time-multiplexing method. The two-dimensional (2D) source images for all computer-generated holograms (CGHs) needed by the display system are simply one group of computerized tomography (CT) or magnetic resonance imaging (MRI) slices from the scanning device; complicated 3D reconstruction on the computer is not necessary. A pelvis is taken as the target medical object to demonstrate this method, and the obtained horizontal viewing angle reaches 28°.
Does hemipelvis structure and position influence acetabulum orientation?
Musielak, Bartosz; Jóźwiak, Marek; Rychlik, Michał; Chen, Brian Po-Jung; Idzior, Maciej; Grzegorzewski, Andrzej
2016-03-16
Although acetabulum orientation is well established anatomically and radiographically, its relation to the innominate bone has rarely been addressed. If explored, it could open the discussion on the pathomechanisms of such complex disorders as femoroacetabular impingement (FAI). We therefore evaluated the influence of pelvic bone position and structure on acetabular spatial orientation, and we describe this relation and its clinical implications. This retrospective study was based on computed tomography scanning and three-dimensional models of 31 consecutive male pelvises (62 acetabula). All measurements were based on CT spatial reconstruction with the use of highly specialized software (Rhinoceros). Relations between acetabular orientation (inclination, tilt, and anteversion angles) and pelvic structure were evaluated. The following parameters were used to assess the pelvic structure: iliac opening angle, iliac tilt angle, interspinous distance (ISD), intertuberous distance (ITD), height of the pelvis (HP), and the ISD/ITD/HP ratio. The linear and nonlinear dependence of the acetabular angles on the hemipelvic measurements was examined with Pearson's product-moment correlation and Spearman's rank correlation coefficient. Correlations different from 0 with p < 0.05 were considered statistically significant. Comparison of the acetabular axis position with pelvic structure and orientation in the horizontal plane revealed a significant positive correlation between the acetabular anteversion angle and the iliac opening angle (p = 0.041 and 0.008, respectively). In the frontal plane, there was a positive correlation between the acetabular inclination angle and the iliac tilt angle (p = 0.025 and 0.014, respectively) and between the acetabular inclination angle and the ISD/ITD/HP ratio (both p = 0.048). There is a significant correlation between hemipelvic structure and acetabular orientation under anatomic conditions, especially in the frontal and horizontal planes. In the anteroposterior view, a more tilted-down innominate bone gives a more caudally oriented acetabular axis, whereas in the horizontal view this relation is reversed. This study may serve as a basis for discussion of the role of the pelvis in common disorders of the hip.
MISR Scans the Texas-Oklahoma Border
NASA Technical Reports Server (NTRS)
2000-01-01
These MISR images of Oklahoma and north Texas were acquired on March 12, 2000 during Terra orbit 1243. The three images on the left, from top to bottom, are from the 70-degree forward viewing camera, the vertical-viewing (nadir) camera, and the 70-degree aftward viewing camera. The higher brightness, bluer tinge, and reduced contrast of the oblique views result primarily from scattering of sunlight in the Earth's atmosphere, though some color and brightness variations are also due to differences in surface reflection at the different angles. The longer slant path through the atmosphere at the oblique angles also accentuates the appearance of thin, high-altitude cirrus clouds. On the right, two areas from the nadir camera image are shown in more detail, along with notations highlighting major geographic features. The south bank of the Red River marks the boundary between Texas and Oklahoma. Traversing brush-covered and grassy plains, rolling hills, and prairies, the Red River and the Canadian River are important resources for farming, ranching, public drinking water, hydroelectric power, and recreation. Both originate in New Mexico and flow eastward, their waters eventually discharging into the Mississippi River. A smoke plume to the north of the Ouachita Mountains and east of Lake Eufaula is visible in the detailed nadir imagery. The plume is also very obvious at the 70-degree forward view angle, to the right of center and about one-fourth of the way down from the top of the image. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Response Versus Scan-Angle Corrections for MODIS Reflective Solar Bands Using Deep Convective Clouds
NASA Technical Reports Server (NTRS)
Bhatt, Rajendra; Angal, Amit; Doelling, David R.; Xiong, Xiaoxiong; Wu, Aisheng; Haney, Conor O.; Scarino, Benjamin R.; Gopalan, Arun
2016-01-01
The absolute radiometric calibration of the reflective solar bands (RSBs) of Aqua- and Terra-MODIS is performed using on-board calibrators. A solar diffuser (SD) panel along with a solar diffuser stability monitor (SDSM) system, which tracks the performance of the SD over time, provides the absolute reference for calibrating the MODIS sensors. MODIS also views the moon and deep space through its space view (SV) port for lunar-based calibration and computing the zero input radiance, respectively. The MODIS instrument views the Earth's surface through a two-sided scan mirror, whose reflectance is a function of angle of incidence (AOI) and is described by response versus scan-angle (RVS). The RVS for both MODIS instruments was characterized prior to launch. MODIS also views the SD and the moon at two different assigned RVS positions. There is sufficient evidence that the RVS is changing on orbit over time and as a function of wavelength. The SD and lunar observation scans can only track the RVS variation at two RVS positions. Consequently, the MODIS Characterization Support Team (MCST) developed enhanced approaches that supplement the onboard calibrator measurements with responses from pseudo-invariant desert sites. This approach has been implemented in Level 1B (L1B) Collection 6 (C6) for selected short-wavelength bands. This paper presents an alternative approach of characterizing the mirror RVS to derive the time-dependent RVS correction factors for MODIS RSBs using tropical deep convective cloud (DCC) targets. An initial assessment of the DCC response from Aqua-MODIS band 1 C6 data indicates evidence of RVS artifacts, which are not uniform across the scans and are more prevalent in the left-side Earth-view scans.
NASA Astrophysics Data System (ADS)
Trolinger, James D.; Dioumaev, Andrei K.; Ziaee, Ali; Minniti, Marco; Dunn-Rankin, Derek
2017-08-01
This paper describes research that demonstrated gated, femtosecond, digital holography, enabling 3D microscopic viewing inside dense, almost opaque sprays and providing a new and powerful diagnostic capability for viewing fuel atomization processes never seen before. The method works by exploiting the extremely short coherence and pulse length (approximately 30 micrometers in this implementation) provided by a femtosecond laser, combined with digital holography, to eliminate multiple and wide-angle scattered light from particles surrounding the injection region, which normally obscures the image of interest. Photons that follow a path that differs in length by more than 30 micrometers from a straight path through the field to the sensor do not contribute to the holographic recording of photons that travel a near-straight path (ballistic and "snake" photons). To further enhance the method, off-axis digital holography was incorporated to improve the signal-to-noise ratio and the image processing capability in reconstructed images by separating the conjugate images, which overlap and interfere in conventional in-line holography. This also enables digital holographic interferometry. Fundamental relationships and limitations were also examined. The project is a continuing collaboration between MetroLaser and the University of California, Irvine.
40 CFR 52.128 - Rule for unpaved parking lots, unpaved roads and vacant lots.
Code of Federal Regulations, 2011 CFR
2011-07-01
... where the exemption in paragraph (c)(2) of this section applies. (9) Motor vehicle—A self-propelled.... Research Triangle Park, N.C. May 1982. 3. “Method 9—Visible Determination of the Opacity of Emissions from... of the human eye—Reference 4.1 of section 4.) c. Angle of view 15 degrees maximum total angle d...
40 CFR 52.128 - Rule for unpaved parking lots, unpaved roads and vacant lots.
Code of Federal Regulations, 2010 CFR
2010-07-01
... where the exemption in paragraph (c)(2) of this section applies. (9) Motor vehicle—A self-propelled.... Research Triangle Park, N.C. May 1982. 3. “Method 9—Visible Determination of the Opacity of Emissions from... of the human eye—Reference 4.1 of section 4.) c. Angle of view 15 degrees maximum total angle d...
NASA Astrophysics Data System (ADS)
Fartookzadeh, M.; Mohseni Armaki, S. H.
2016-10-01
A new kind of dual-band reflection-mode circular polarizer (RMCP) is introduced with wide bandwidth and a wide view angle at the operating frequencies. The proposed RMCPs are based on dual-layer rectangular patches on both sides of a substrate, separated by a foam or air layer from the ground plane. The TE susceptance of the first-layer patches required to produce circular polarization is calculated using an equivalent transmission-line model. Dimensions of the RMCP are obtained from a parametric study for the two frequency bands, 1.9-2.3 GHz and 7.9-8.3 GHz. In addition, it is shown that the accepted view angle and bandwidth of the proposed dual-layer RMCP are significantly improved compared with the single-layer RMCP. Moreover, a tradeoff is observed for the dual-layer RMCP between the X-band and S-band bandwidths, which can be controlled by the propagation angle of the incident wave. The proposed RMCP has 30.5% and 33.7% bandwidths for less than 3 dB axial ratio with incident angles θmax = 50° and θmin = 35°. Finally, the simulation results are confirmed by measurements for three angles of the incident wave.
OPTIMUM PHYSICAL VIEWING CONDITIONS FOR A REAR PROJECTION DAYLIGHT SCREEN.
ERIC Educational Resources Information Center
ASH, PHILIP; JASPEN, NATHAN
AN EXPERIMENT DESIGNED TO DISCOVER WHETHER THERE WERE DIFFERENCES IN LEARNING WHICH COULD BE ATTRIBUTED TO DIFFERENCES IN ROOM ILLUMINATION, VIEWING ANGLE, AND DISTANCE FROM THE SCREEN AS THEY RELATED TO THE CABINET-TYPE PROJECTOR WAS PRESENTED. PARTICIPANTS WERE 721 TRAINEES AT THE GREAT LAKES NAVAL TRAINING STATION. THE TASK CHOSEN WAS THE…
2011-07-12
ISS028-E-016246 (12 July 2011) --- This is a high angle view showing the Cupola, backdropped against a solar array panel, on the International Space Station. In some of the images in this series, faces of several of the Atlantis STS-135 and Expedition 28 crew members can be seen in the Cupola's windows.
Digital dissection system for medical school anatomy training
NASA Astrophysics Data System (ADS)
Augustine, Kurt E.; Pawlina, Wojciech; Carmichael, Stephen W.; Korinek, Mark J.; Schroeder, Kathryn K.; Segovis, Colin M.; Robb, Richard A.
2003-05-01
As technology advances, new and innovative ways of viewing and visualizing the human body are developed. Medicine has benefited greatly from imaging modalities that provide ways for us to visualize anatomy that cannot be seen without invasive procedures. As long as medical procedures include invasive operations, students of anatomy will benefit from the cadaveric dissection experience. Teaching proper technique for dissection of human cadavers is a challenging task for anatomy educators. Traditional methods, which have not changed significantly for centuries, include the use of textbooks and pictures to show students what a particular dissection specimen should look like. The ability to properly carry out such highly visual and interactive procedures is significantly constrained by these methods. The student receives a single view and has no idea how the procedure was carried out. The Department of Anatomy at Mayo Medical School recently built a new, state-of-the-art teaching laboratory, including data ports and power sources above each dissection table. This feature allows students to access the Mayo intranet from a computer mounted on each table. The vision of the Department of Anatomy is to replace all paper-based resources in the laboratory (dissection manuals, anatomic atlases, etc.) with a more dynamic medium that will direct students in dissection and in learning human anatomy. Part of that vision includes the use of interactive 3-D visualization technology. The Biomedical Imaging Resource (BIR) at Mayo Clinic has developed, in collaboration with the Department of Anatomy, a system for the control and capture of high resolution digital photographic sequences which can be used to create 3-D interactive visualizations of specimen dissections. The primary components of the system include a Kodak DC290 digital camera, a motorized controller rig from Kaidan, a PC, and custom software to synchronize and control the components. For each dissection procedure, the images are captured automatically, and then processed to generate a Quicktime VR sequence, which permits users to view an object from multiple angles by rotating it on the screen. This provides 3-D visualizations of anatomy for students without the need for special '3-D glasses' that would be impractical to use in a laboratory setting. In addition, a digital video camera may be mounted on the rig for capturing video recordings of selected dissection procedures being carried out by expert anatomists for playback by the students. Anatomists from the Department of Anatomy at Mayo have captured several sets of dissection sequences and processed them into Quicktime VR sequences. The students are able to look at these specimens from multiple angles using this VR technology. In addition, the student may zoom in to obtain high-resolution close-up views of the specimen. They may interactively view the specimen at varying stages of dissection, providing a way to quickly and intuitively navigate through the layers of tissue. Electronic media has begun to impact all areas of education, but a 3-D interactive visualization of specimen dissections in the laboratory environment is a unique and powerful means of teaching anatomy. When fully implemented, anatomy education will be enhanced significantly by comparison to traditional methods.
2017-05-30
Before NASA's Cassini entered its Grand Finale orbits, it acquired unprecedented views of the outer edges of the main ring system. For example, this close-up view of the Keeler Gap, which is near the outer edge of Saturn's main rings, shows in great detail just how much the moon Daphnis affects the edges of the gap. Daphnis creates waves in the edges of the gap through its gravitational influence. Some clumping of ring particles can be seen in the perturbed edge, similar to what was seen on the edges of the Encke Gap back when Cassini arrived at Saturn in 2004. This view looks toward the sunlit side of the rings from about 3 degrees above the ring plane. The view was acquired at a distance of approximately 18,000 miles (30,000 kilometers) from Daphnis and at a Sun-Daphnis-spacecraft, or phase, angle of 69 degrees. Image scale is 581 feet (177 meters) per pixel. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on Jan. 16, 2017. https://photojournal.jpl.nasa.gov/catalog/PIA21329
The Impacts of Bowtie Effect and View Angle Discontinuity on MODIS Swath Data Gridding
NASA Technical Reports Server (NTRS)
Wang, Yujie; Lyapustin, Alexei
2007-01-01
We have analyzed two effects of the MODIS viewing geometry on the quality of gridded imagery. First, because MODIS scans the Earth in swaths that are 10 km wide at nadir, the view azimuth angle changes abruptly at the boundary of adjacent scans. This discontinuity appears as striping of the image, clearly visible in certain cases when the viewing geometry is close to the principal plane over snow or the glint area of water. The striping is a true surface Bi-directional Reflectance Factor (BRF) effect and should be preserved during gridding. Second, because of the bowtie effect, observations in adjacent scans overlap each other. The commonly used method of calculating the grid cell value by averaging all overlapping observations may smear the image. This paper describes a refined gridding algorithm that takes both effects into account. By computing each grid cell value as the average of the overlapping observations from a single scan, the new algorithm preserves the measured BRF signal and enhances the sharpness of the image.
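To illustrate the single-scan averaging idea, the following Python sketch groups the observations that overlap one grid cell by scan and averages only the samples from a single scan; picking the scan that contributes the most samples is an assumption made here for illustration, not necessarily the selection rule used in the paper.

    # Hedged sketch of per-scan averaging during gridding (all names illustrative).
    from collections import defaultdict
    import numpy as np

    def grid_cell_value(observations):
        """observations: list of (scan_id, radiance) tuples that overlap one grid cell.

        Instead of averaging across all scans (which mixes different view azimuths
        and smears the image), average only the observations belonging to a single
        scan -- here, the scan contributing the most samples to the cell.
        """
        by_scan = defaultdict(list)
        for scan_id, radiance in observations:
            by_scan[scan_id].append(radiance)
        best_scan = max(by_scan, key=lambda s: len(by_scan[s]))
        return float(np.mean(by_scan[best_scan]))

    # Example: two scans overlap the cell because of the bowtie effect.
    cell_obs = [(101, 0.231), (101, 0.236), (102, 0.198)]
    print(grid_cell_value(cell_obs))   # averages only scan 101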
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohmi, K.
In recent high-luminosity colliders, the finite crossing angle scheme has become popular as a way to multiply luminosity through multi-bunch or long-bunch operation. The success of KEKB showed that a finite crossing angle does not prevent reaching a beam-beam parameter of up to 0.05. The authors have studied beam-beam interactions with and without a crossing angle in pursuit of higher luminosity. Using computer simulations, they discuss how the crossing angle affects the beam-beam parameter and luminosity in the present KEK B factory (KEKB).
Walker; Westneat
1997-01-01
Labriform, or pectoral fin, propulsion is the primary swimming mode for many fishes, even at high relative speeds. Although kinematic data are critical for evaluating hydrodynamic models of propulsion, these data are largely lacking for labriform swimmers, especially for species that employ an exclusively labriform mode across a broad range of speeds. We present data on pectoral fin locomotion in Gomphosus varius (Labridae), a tropical coral reef fish that uses a lift-based mechanism to fly under water at sustained speeds of 1-6 total body lengths s-1 (TL s-1). Lateral- and dorsal-view video images of three fish swimming in a flow tank at 1-4 TL s-1 were recorded at 60 Hz. From the two views, we reconstructed the three-dimensional motion of the center of mass, the fin tip and two fin chords for multiple fin beats of each fish at each of four speeds. In G. varius, the fin oscillates largely up and down: the stroke plane is tilted by approximately 20° from the vertical. Both frequency and the area swept by the pectoral fins increase with swimming speed. Interestingly, there are individual differences in how this area increases. Relative to the fish, the fin tip in lateral view moves along the path of a thin, inclined figure-of-eight. Relative to a stationary observer, the fin tip traces a sawtooth pattern, but the teeth are recumbent (indicating net backwards movement) only at the slowest speeds. Distal fin chords pitch nose downward during the downstroke and nose upward during the upstroke. Hydrodynamic angles of attack are largely positive during the downstroke and negative during the upstroke. The geometry of the fin and incident flow suggests that the fin is generating lift with large upward and small forward components during the downstroke. The negative incident angles during the upstroke suggest that the fin is generating largely thrust during the upstroke. In general, the large thrust is combined with a downward force during the upstroke, but the net backwards motion of the fin at slow speeds generates a small upward component during slow swimming. Both the alternating sign of the hydrodynamic angle of attack and the observed reduced frequencies suggest that unsteady effects are important in G. varius aquatic flight, especially at low speeds. This study provides a framework for the comparison of aquatic flight by fishes with aerial flight by birds, bats and insects.
A technique for estimating 4D-CBCT using prior knowledge and limited-angle projections.
Zhang, You; Yin, Fang-Fang; Segars, W Paul; Ren, Lei
2013-12-01
To develop a technique to estimate onboard 4D-CBCT using prior information and limited-angle projections for potential 4D target verification of lung radiotherapy. Each phase of onboard 4D-CBCT is considered as a deformation from one selected phase (prior volume) of the planning 4D-CT. The deformation field maps (DFMs) are solved using a motion modeling and free-form deformation (MM-FD) technique. In the MM-FD technique, the DFMs are estimated using a motion model which is extracted from planning 4D-CT based on principal component analysis (PCA). The motion model parameters are optimized by matching the digitally reconstructed radiographs of the deformed volumes to the limited-angle onboard projections (data fidelity constraint). Afterward, the estimated DFMs are fine-tuned using a FD model based on data fidelity constraint and deformation energy minimization. The 4D digital extended-cardiac-torso phantom was used to evaluate the MM-FD technique. A lung patient with a 30 mm diameter lesion was simulated with various anatomical and respirational changes from planning 4D-CT to onboard volume, including changes of respiration amplitude, lesion size and lesion average-position, and phase shift between lesion and body respiratory cycle. The lesions were contoured in both the estimated and "ground-truth" onboard 4D-CBCT for comparison. 3D volume percentage-difference (VPD) and center-of-mass shift (COMS) were calculated to evaluate the estimation accuracy of three techniques: MM-FD, MM-only, and FD-only. Different onboard projection acquisition scenarios and projection noise levels were simulated to investigate their effects on the estimation accuracy. For all simulated patient and projection acquisition scenarios, the mean VPD (±S.D.)∕COMS (±S.D.) between lesions in prior images and "ground-truth" onboard images were 136.11% (±42.76%)∕15.5 mm (±3.9 mm). Using orthogonal-view 15°-each scan angle, the mean VPD∕COMS between the lesion in estimated and "ground-truth" onboard images for MM-only, FD-only, and MM-FD techniques were 60.10% (±27.17%)∕4.9 mm (±3.0 mm), 96.07% (±31.48%)∕12.1 mm (±3.9 mm) and 11.45% (±9.37%)∕1.3 mm (±1.3 mm), respectively. For orthogonal-view 30°-each scan angle, the corresponding results were 59.16% (±26.66%)∕4.9 mm (±3.0 mm), 75.98% (±27.21%)∕9.9 mm (±4.0 mm), and 5.22% (±2.12%)∕0.5 mm (±0.4 mm). For single-view scan angles of 3°, 30°, and 60°, the results for MM-FD technique were 32.77% (±17.87%)∕3.2 mm (±2.2 mm), 24.57% (±18.18%)∕2.9 mm (±2.0 mm), and 10.48% (±9.50%)∕1.1 mm (±1.3 mm), respectively. For projection angular-sampling-intervals of 0.6°, 1.2°, and 2.5° with the orthogonal-view 30°-each scan angle, the MM-FD technique generated similar VPD (maximum deviation 2.91%) and COMS (maximum deviation 0.6 mm), while sparser sampling yielded larger VPD∕COMS. With equal number of projections, the estimation results using scattered 360° scan angle were slightly better than those using orthogonal-view 30°-each scan angle. The estimation accuracy of MM-FD technique declined as noise level increased. The MM-FD technique substantially improves the estimation accuracy for onboard 4D-CBCT using prior planning 4D-CT and limited-angle projections, compared to the MM-only and FD-only techniques. It can potentially be used for the inter/intrafractional 4D-localization verification.
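As a rough illustration of the motion-modeling (MM) step, the Python sketch below optimizes the PCA coefficients of a deformation model against limited-angle projections through a data-fidelity cost. The forward projector and warping operator are placeholders, and the optimizer choice is an assumption; only the overall structure mirrors the description above, and the free-form fine-tuning stage is omitted.

    # Schematic sketch of the motion-modeling step (PCA coefficients optimized
    # against limited-angle projections).  The projector and warp are placeholders.
    import numpy as np
    from scipy.optimize import minimize

    def warp(volume, dfm):
        """Placeholder: deform `volume` according to deformation field map `dfm`."""
        return volume          # a real implementation interpolates voxels along dfm

    def forward_project(volume, angles_deg):
        """Placeholder: compute DRRs of `volume` at the given gantry angles."""
        return np.zeros((len(angles_deg), 64, 64))

    def estimate_dfm(prior_ct, onboard_proj, angles_deg, pc_mean, pc_modes):
        """pc_mean, pc_modes: mean DFM and principal components from planning 4D-CT."""
        def cost(weights):
            dfm = pc_mean + np.tensordot(weights, pc_modes, axes=1)
            drr = forward_project(warp(prior_ct, dfm), angles_deg)
            return np.sum((drr - onboard_proj) ** 2)   # data-fidelity term
        w0 = np.zeros(pc_modes.shape[0])
        best = minimize(cost, w0, method="Powell")
        return pc_mean + np.tensordot(best.x, pc_modes, axes=1)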
A novel screen design for anti-ambient light front projection display with angle-selective absorber
NASA Astrophysics Data System (ADS)
Liao, Tianju; Chen, Weigang; He, Kebo; Zhang, Zhaoyu
2016-03-01
Ambient light degrades the contrast ratio of reflective front-projection systems, which strongly affects image quality. Compared with conventional front projection, short-throw projection has an advantage in rejecting ambient light. A Fresnel lens-shaped reflection layer is adopted to redirect light arriving at large angles, a consequence of the low throw ratio, toward the viewing area. This structure separates the paths of the ambient light and the projection light, creating an opportunity to keep the ambient light from mixing with the projection light. However, the lens-shaped reflection layer alone is not enough to improve the contrast ratio, because the scattering layer, which is needed to provide a wide viewing angle, can interfere with both light paths before they reach the reflection layer. We therefore propose a new design that coats the draft-angle surfaces with an absorption layer and adds an angle-selective absorber to separate the two kinds of light. The absorber is aligned with the direction of the projection light, giving a small absorption cross section for the projection light and a correspondingly large absorption cross section for the ambient light. We simulated the design with TracePro, a ray-tracing program, and found a nearly eightfold contrast-ratio improvement over the current design in theory. This design can hopefully provide an effective display in brightly lit environments with better viewer satisfaction.
Multi-angle lensless digital holography for depth resolved imaging on a chip.
Su, Ting-Wei; Isikman, Serhan O; Bishara, Waheb; Tseng, Derek; Erlinger, Anthony; Ozcan, Aydogan
2010-04-26
A multi-angle lensfree holographic imaging platform that can accurately characterize both the axial and lateral positions of cells located within multi-layered micro-channels is introduced. In this platform, lensfree digital holograms of the micro-objects on the chip are recorded at different illumination angles using partially coherent illumination. These digital holograms start to shift laterally on the sensor plane as the illumination angle of the source is tilted. Since the exact amount of this lateral shift of each object hologram can be calculated with an accuracy that beats the diffraction limit of light, the height of each cell from the substrate can be determined over a large field of view without the use of any lenses. We demonstrate the proof of concept of this multi-angle lensless imaging platform by using light emitting diodes to characterize various sized microparticles located on a chip with sub-micron axial and lateral localization over approximately 60 mm(2) field of view. Furthermore, we successfully apply this lensless imaging approach to simultaneously characterize blood samples located at multi-layered micro-channels in terms of the counts, individual thicknesses and the volumes of the cells at each layer. Because this platform does not require any lenses, lasers or other bulky optical/mechanical components, it provides a compact and high-throughput alternative to conventional approaches for cytometry and diagnostics applications involving lab on a chip systems.
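The height-from-shift relation underlying this approach fits in a few lines. The Python sketch below assumes the simplest geometry, in which the hologram of an object at height z above the sensor shifts laterally by roughly z·tan(θ) as the illumination is tilted to angle θ; refraction inside the sample chamber and the sub-pixel shift estimation used in the actual platform are not modeled.

    # Simplified height estimate from the lateral hologram shift between two
    # illumination angles (ignores refraction inside the sample chamber).
    import math

    def object_height(shift_um, angle1_deg, angle2_deg):
        """shift_um: measured lateral displacement of the hologram between the
        two illumination angles; returns the object-to-sensor distance in microns."""
        d = math.tan(math.radians(angle2_deg)) - math.tan(math.radians(angle1_deg))
        return shift_um / d

    # Example: a 35 um shift between 0 deg and 20 deg illumination.
    print(f"{object_height(35.0, 0.0, 20.0):.1f} um")   # ~96 um above the sensor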
Hip morphologic measurements in an Egyptian population.
Aly, Tarek A
2011-04-11
The study of acetabular morphology has shown that there are geographic differences in the morphology and prevalence of acetabular dysplasia among different ethnic groups. However, few data exist on the shape of the acetabulum in various populations around the world. In this study, we examined samples of pelvic radiographs from Egyptian adults. Acetabular dysplasia in adults is characterized by a shallow and relatively vertical acetabulum. The aim of this study was to examine acetabular morphology to determine the prevalence of hip dysplasia in adult Egyptians. The sample included 244 adults, 134 men and 110 women between 18 and 60 years of age, whose radiographs were used to measure the center edge angle, acetabular Sharp angle, and acetabular head index on anteroposterior views of the hip joints, and the vertical center anterior margin angle on false profile views. The radiographs were taken of patients with no hip complaints at Tanta University Hospital. The results were analyzed statistically according to the age, height, and weight of the patients. The prevalence of acetabular dysplasia was 2.25% for Egyptian men and 3.6% for women with respect to the center edge angle, vertical center anterior margin angle, and acetabular head index. We concluded that there are gender variations in the morphology of the acetabulum and that sex influences geometrical measurements of the acetabulum. Egyptian women were more dysplastic than men on all 4 hip measurement parameters. There are also racial variations in hip morphology. Copyright 2011, SLACK Incorporated.
Comparative study of fat-suppression techniques for hip arthroplasty MR imaging.
Molière, Sébastien; Dillenseger, Jean-Philippe; Ehlinger, Matthieu; Kremer, Stéphane; Bierry, Guillaume
2017-09-01
The goal of this study was to evaluate different fat-suppressed fluid-sensitive sequences in association with different metal artifact reduction techniques (MARS) to determine which combination allows better fat suppression around metallic hip implants. An experimental study using an MRI fat-water phantom quantitatively evaluated the contrast shift induced by a metallic hip implant for different fat-suppression techniques and MARS. A clinical study of patients referred to the MRI unit for painful hip prostheses then compared these techniques in terms of fat-suppression quality and diagnostic confidence. Among sequences without MARS, both T2 Dixon and short tau inversion recovery (STIR) had significantly lower contrast shift (p < 0.05), with Dixon offering the best fat suppression. Adding MARS (view-angle tilting or slice-encoding for metal artifact correction (SEMAC)) to STIR gave better results than Dixon alone, and also better than SPAIR and fat saturation with MARS (p < 0.05). There were no statistically significant differences between STIR with view-angle tilting and STIR with SEMAC in terms of fat-suppression quality. STIR is the preferred fluid-sensitive MR sequence in patients with metal implants. In combination with MARS (view-angle tilting or SEMAC), STIR appears to be the best option for high-quality fat suppression.
A GRB and Broad-lined Type Ic Supernova from a Single Central Engine
NASA Astrophysics Data System (ADS)
Barnes, Jennifer; Duffell, Paul C.; Liu, Yuqian; Modjaz, Maryam; Bianco, Federica B.; Kasen, Daniel; MacFadyen, Andrew I.
2018-06-01
Unusually high velocities (≳0.1c) and correspondingly high kinetic energies have been observed in a subset of Type Ic supernovae (so-called “broad-lined Ic” supernovae; SNe Ic-BL), prompting a search for a central engine model capable of generating such energetic explosions. A clue to the explosion mechanism may lie in the fact that all supernovae that accompany long-duration gamma-ray bursts (GRBs) belong to the SN Ic-BL class. Using a combination of two-dimensional relativistic hydrodynamics and radiation transport calculations, we demonstrate that the central engine responsible for long GRBs can also trigger an SN Ic-BL. We find that a reasonable GRB engine injected into a stripped Wolf–Rayet progenitor produces a relativistic jet with energy ∼10^51 erg, as well as an SN whose synthetic light curves and spectra are fully consistent with observed SNe Ic-BL during the photospheric phase. As a result of the jet’s asymmetric energy injection, the SN spectra and light curves depend on viewing angle. The impact of viewing angle on the spectrum is particularly pronounced at early times, while the viewing-angle dependence for the light curves (∼10% variation in bolometric luminosity) persists throughout the photospheric phase.
Xie, Zuo-ping; Zhao, Bo-wen; Yuan, Hua; Hua, Qi-qi; Jin, She-hong; Shen, Xiao-yan; Han, Xin-hong; Zhou, Jia-mei; Fang, Min; Chen, Jin-hong
2013-01-01
Background: To establish the reference range of the angle between the ascending aorta and main pulmonary artery of the fetus in the second and third trimesters using spatiotemporal image correlation (STIC), and to investigate the value of this angle in prenatal screening for conotruncal defects (CTDs). Materials and Methods: Volume datasets of 311 normal fetuses and 20 fetuses with congenital heart disease were acquired in this cross-sectional study. An offline analysis of the acquired volume datasets was carried out in multiplanar mode. The angle between the aorta and pulmonary artery was measured by navigating the pivot point and rotating the axes, and the reference range was established. The images of the ascending aorta and main pulmonary artery in fetuses with congenital heart disease were observed by rotating the axes within the normal angle reference range. Results: The angle between the ascending aorta and main pulmonary artery of normal fetuses (range: 59.1°-97.0°, mean ± SD: 78.0° ± 9.7°) was negatively correlated with gestational age (r = -0.52; p<0.01). When the axes were rotated within the normal angle range corresponding to gestational age, fetuses with CTD could not correctly display the left ventricular long-axis and main pulmonary trunk views. Conclusion: The left ventricular long-axis and main pulmonary trunk views can be displayed using STIC, so the echocardiographic protocol for this cardiovascular junction could be standardized. The reference range of the angle between the ascending aorta and main pulmonary artery is clinically useful in prenatal screening for CTD and provides a reliable quantitative standard for estimating the spatial relationship of the great arteries of the fetus. PMID:24520485
NASA Technical Reports Server (NTRS)
Wu, Aisheng; Xiong, Xiaoxiong; Chiang, Kwofu
2017-01-01
The visible infrared imaging radiometer suite (VIIRS) is a key sensor carried on the Suomi national polar-orbiting partnership (S-NPP) satellite, which was launched in October 2011. It has several on-board calibration components, including a solar diffuser and a solar diffuser stability monitor for the reflective solar bands, a V-groove blackbody for the thermal emissive bands (TEB), and a space view port for background subtraction. These on-board calibrators are located at fixed scan angles. The VIIRS response versus scan angle (RVS) was characterized prelaunch in lab ambient conditions and is currently used to characterize the on-orbit response for all scan angles relative to the calibrator scan angle. Since the RVS is vitally important to the quality of calibrated radiance products, several independent studies were performed to analyze the prelaunch RVS measurement data. A spacecraft level pitch maneuver was scheduled during the first 3 months of intensive Cal/Val. The S-NPP pitch maneuver provided a rare opportunity for VIIRS to make observations of deep space over the entire range of Earth view scan angles, which can be used to characterize the TEB RVS. This study provides our analysis of the pitch maneuver data and assessment of the derived TEB RVS by comparison with prelaunch results. In addition, the stability of the RVS after the first 5 years of operation is examined using observed brightness temperatures (BT) over a clear ocean at various angles of incidence (AOI). To reduce the impact of variations in the BT measurements, the daily overpasses collected over the ocean are screened for cloud contamination, normalized to the results obtained at the blackbody AOI, and averaged each year.
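A minimal sketch of the trend check described in the last sentence might look like the following Python fragment, in which cloud-screened ocean brightness temperatures are normalized to the value observed near the blackbody angle of incidence and then averaged by year; the array layout, angular tolerance, and averaging choices are illustrative assumptions, not the study's actual processing.

    # Sketch of the stability check: normalize clear-sky ocean BTs by the value
    # near the blackbody angle of incidence and average per year (illustrative).
    import numpy as np

    def rvs_trend(bt, aoi, years, blackbody_aoi, aoi_tol=1.0):
        """bt, aoi, years: 1-D arrays of cloud-screened ocean brightness
        temperatures, their angles of incidence, and observation years."""
        trend = {}
        for yr in np.unique(years):
            sel = years == yr
            ref_sel = sel & (np.abs(aoi - blackbody_aoi) < aoi_tol)
            if not np.any(ref_sel):
                continue                      # no reference samples this year
            ref = np.mean(bt[ref_sel])        # BT near the blackbody AOI
            trend[int(yr)] = float(np.mean(bt[sel] / ref))
        return trend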
On techniques for angle compensation in nonideal iris recognition.
Schuckers, Stephanie A C; Schmid, Natalia A; Abhyankar, Aditya; Dorairaj, Vivekanand; Boyce, Christopher K; Hornak, Lawrence A
2007-10-01
The popularity of the iris biometric has grown considerably over the past two to three years. Most research has been focused on the development of new iris processing and recognition algorithms for frontal view iris images. However, a few challenging directions in iris research have been identified, including processing of a nonideal iris and iris at a distance. In this paper, we describe two nonideal iris recognition systems and analyze their performance. The word "nonideal" is used in the sense of compensating for off-angle occluded iris images. The system is designed to process nonideal iris images in two steps: 1) compensation for off-angle gaze direction and 2) processing and encoding of the rotated iris image. Two approaches are presented to account for angular variations in the iris images. In the first approach, we use Daugman's integrodifferential operator as an objective function to estimate the gaze direction. After the angle is estimated, the off-angle iris image undergoes geometric transformations involving the estimated angle and is further processed as if it were a frontal view image. The encoding technique developed for a frontal image is based on the application of the global independent component analysis. The second approach uses an angular deformation calibration model. The angular deformations are modeled, and calibration parameters are calculated. The proposed method consists of a closed-form solution, followed by an iterative optimization procedure. The images are projected on the plane closest to the base calibrated plane. Biorthogonal wavelets are used for encoding to perform iris recognition. We use a special dataset of the off-angle iris images to quantify the performance of the designed systems. A series of receiver operating characteristics demonstrate various effects on the performance of the nonideal-iris-based recognition system.
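As a simplified picture of the first approach's "rotate back to frontal" step, the Python sketch below undoes the horizontal foreshortening of an off-angle iris image with an affine rescale of 1/cos(gaze angle), assuming the gaze angle is already known; the paper's integrodifferential gaze estimation and full projective/calibration models are not reproduced here.

    # Minimal sketch: an off-angle gaze about the vertical axis foreshortens the
    # iris horizontally by cos(theta), so we undo it with an affine rescale.
    import numpy as np
    from scipy.ndimage import affine_transform

    def correct_off_angle(image, gaze_deg):
        c = np.cos(np.radians(gaze_deg))
        # Output (row, col) samples input (row, col * c), centered on the image,
        # which stretches the foreshortened axis back toward a frontal view.
        matrix = np.array([[1.0, 0.0],
                           [0.0, c]])
        center = (np.array(image.shape) - 1) / 2.0
        offset = center - matrix @ center
        return affine_transform(image, matrix, offset=offset, order=1)

    # Example on a synthetic frame: a 30-degree off-angle image stretched back.
    frame = np.random.rand(240, 320)
    frontal = correct_off_angle(frame, 30.0)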
2015-08-03
Thanks to the illumination angle, Mimas (right) and Dione (left) appear to be staring up at a giant Saturn looming in the background. Although certainly large enough to be noticeable, moons like Mimas (246 miles or 396 kilometers across) and Dione (698 miles or 1123 kilometers across) are tiny compared to Saturn (75,400 miles or 120,700 kilometers across). Even the enormous moon Titan (3,200 miles or 5,150 kilometers across) is dwarfed by the giant planet. This view looks toward the unilluminated side of the rings from about one degree of the ring plane. The image was taken with the Cassini spacecraft wide-angle camera on May 27, 2015 using a spectral filter which preferentially admits wavelengths of near-infrared light centered at 728 nanometers. The view was obtained at a distance of approximately 634,000 miles (one million kilometers) from Saturn and at a Sun-Saturn-spacecraft, or phase, angle of 85 degrees. Image scale is 38 miles (61 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA18331
C-arm technique using distance driven method for nephrolithiasis and kidney stones detection
NASA Astrophysics Data System (ADS)
Malalla, Nuhad; Sun, Pengfei; Chen, Ying; Lipkin, Michael E.; Preminger, Glenn M.; Qin, Jun
2016-04-01
Distance-driven projection is a state-of-the-art method used for reconstruction in x-ray techniques. C-arm tomography is an x-ray imaging technique that provides three-dimensional information about the object by moving the C-shaped gantry around the patient. With a limited view angle, the C-arm system was investigated to generate volumetric data of the object with low radiation dose and short examination time. This paper presents a new simulation study of two reconstruction methods based on the distance-driven approach: the simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM). The distance-driven method is efficient, with low computational cost and fewer artifacts compared with other methods such as the ray-driven and pixel-driven approaches. Projection images of spherical objects were simulated with a virtual C-arm system with a total view angle of 40 degrees. The results show the ability of the limited-angle C-arm technique to generate three-dimensional images with distance-driven reconstruction.
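For orientation, the Python sketch below shows a toy SART iteration with an explicit system matrix standing in for the distance-driven projector; the relaxation factor, iteration count, and non-negativity clipping are illustrative choices rather than the settings used in the paper.

    # Toy SART iteration with an explicit (dense) system matrix; the distance-
    # driven projector itself is beyond this sketch and is stood in for by A.
    import numpy as np

    def sart(A, p, n_iter=20, relax=0.5):
        """A: (n_rays, n_voxels) system matrix, p: measured projections."""
        x = np.zeros(A.shape[1])
        row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1.0
        col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1.0
        for _ in range(n_iter):
            residual = (p - A @ x) / row_sums
            x += relax * (A.T @ residual) / col_sums
            np.clip(x, 0.0, None, out=x)      # enforce non-negativity
        return x

    # Tiny example: 2 voxels, 3 rays.
    A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    x_true = np.array([2.0, 3.0])
    print(sart(A, A @ x_true))   # converges toward [2, 3]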
An experimental investigation of vortex breakdown on a delta wing
NASA Technical Reports Server (NTRS)
Payne, F. M.; Nelson, R. C.
1986-01-01
An experimental investigation of vortex breakdown on delta wings at high angles of attack is presented. Thin delta wings having sweep angles of 70, 75, 80 and 85 degrees are being studied. Smoke flow visualization and the laser light sheet technique are being used to obtain cross-sectional views of the leading edge vortices as they break down. At low tunnel speeds (as low as 3 m/s) details of the flow, which are usually imperceptible or blurred at higher speeds, can be clearly seen. A combination of lateral and longitudinal cross-sectional views provides information on the three dimensional nature of the vortex structure before, during and after breakdown. Whereas details of the flow are identified in still photographs, the dynamic characteristics of the breakdown process were recorded using high speed movies. Velocity measurements were obtained using a laser Doppler anemometer with the 70 degree delta wing at 30 degrees angle of attack. The measurements show that when breakdown occurs the core flow transforms from a jet-like flow to a wake-like flow.
Dependence of the Peak Fluxes of Solar Energetic Particles on CME 3D Parameters from STEREO and SOHO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Jinhye; Moon, Y.-J.; Lee, Harim, E-mail: jinhye@khu.ac.kr
We investigate the relationships between the peak fluxes of 18 solar energetic particle (SEP) events and associated coronal mass ejection (CME) 3D parameters (speed, angular width, and separation angle) obtained from SOHO and STEREO-A/B for the period from 2010 August to 2013 June. We apply the STEREO CME Analysis Tool (StereoCAT) to the SEP-associated CMEs to obtain 3D speeds and 3D angular widths. The separation angles are determined as the longitudinal angles between flaring regions and the magnetic footpoints of the spacecraft, which are calculated under the assumption of a Parker spiral field. The main results are as follows. (1) We find that the dependence of the SEP peak fluxes on CME 3D speed from multiple spacecraft is similar to that on CME 2D speed. (2) There is a positive correlation between SEP peak flux and 3D angular width from multiple spacecraft, which is much more evident than the relationship between SEP peak flux and 2D angular width. (3) There is a noticeable anti-correlation (r = -0.62) between SEP peak flux and separation angle. (4) The multiple-regression method between SEP peak fluxes and CME 3D parameters shows that the longitudinal separation angle is the most important parameter, and the CME 3D speed is secondary for SEP peak flux.
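The multiple-regression step can be illustrated with ordinary least squares on the three predictors; in the Python sketch below the data are synthetic placeholders and the functional form (log flux against log speed, width, and separation angle) is an assumption for demonstration, not the fit actually reported.

    # Illustrative multiple regression of (log) SEP peak flux on CME 3D speed,
    # 3D angular width, and separation angle; all numbers here are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 18
    speed = rng.uniform(500, 2500, n)       # km/s
    width = rng.uniform(60, 360, n)         # deg
    sep   = rng.uniform(0, 120, n)          # deg
    log_flux = 1.5*np.log10(speed) + 0.004*width - 0.01*sep + rng.normal(0, 0.2, n)

    X = np.column_stack([np.log10(speed), width, sep, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, log_flux, rcond=None)
    print("fitted coefficients:", np.round(coef, 3))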
Agile wide-angle beam steering with electrowetting microprisms
NASA Astrophysics Data System (ADS)
Smith, Neil R.; Abeysinghe, Don C.; Haus, Joseph W.; Heikenfeld, Jason
2006-07-01
A novel basis for beam steering with electrowetting microprisms (EMPs) is reported. EMPs utilize electrowetting modulation of liquid contact angle in order to mimic the refractive behavior for various classical prism geometries. Continuous beam steering through an angle of 14° (±7°) has been demonstrated with a liquid index of n=1.359. Experimental results are well-matched to theoretical behavior up to the point of electrowetting contact-angle saturation. Projections show that use of higher index liquids (n~1.6) will result in steering through ~30° (±15°). Fundamental factors defining achievable deflection range, and issues for Ladar use, are reviewed. This approach is capable of good switching speed (~ms), polarization independent operation, modulation of beam field-of-view (lensing), and high steering efficiency that is independent of deflection angle.
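The refractive behavior being mimicked is ordinary prism deflection. The Python sketch below gives the textbook Snell's-law estimate for light entering normal to the prism base and exiting the tilted liquid/air face; the 19-degree apex angle is chosen here only to reproduce a deflection near the reported ±7° for n = 1.359 and is not a device parameter from the paper.

    # Deflection of a liquid prism with apex angle alpha (light entering normal
    # to the base, exiting the tilted liquid/air face) -- a textbook Snell's-law
    # estimate, not the paper's full device model.
    import math

    def deflection_deg(apex_deg, n_liquid):
        a = math.radians(apex_deg)
        return math.degrees(math.asin(n_liquid * math.sin(a))) - apex_deg

    for n in (1.359, 1.6):
        print(n, round(deflection_deg(19.0, n), 2))
    # n = 1.359 gives roughly the +/-7 deg reported; a higher-index liquid
    # deflects more at the same apex angle.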
New three-dimensional visualization system based on angular image differentiation
NASA Astrophysics Data System (ADS)
Montes, Juan D.; Campoy, Pascual
1995-03-01
This paper presents a new auto-stereoscopic system capable of reproducing static or moving 3D images by projection with horizontal parallax or with horizontal and vertical parallaxes. The working principle is based on the angular differentiation of the images, which are projected onto the back side of the new patented screen. The most important features of this new system are: (1) Images can be seen with the naked eye, without glasses or any other aid. (2) The 3D view angle is not restricted by the angle of the optics making up the screen. (3) Fine tuning is not necessary, independently of the parallax and of the size of the 3D view angle. (4) Coherent light is not necessary either for capturing the image or for its reproduction; standard cameras and projectors suffice. (5) Since the images are projected, the size and depth of the reproduced scene are unrestricted. (6) Manufacturing cost is not excessive, owing to the use of optics of large focal length, the lack of fine tuning, and the use of the same screen for several reproduction systems. (7) This technology can be used with any projection system: slides, movies, video projectors, etc. A first prototype for static images has been developed and tested, with a 3D view angle of 90 degrees and photographic resolution over a planar screen 900 mm in diagonal length. Current development has achieved a dramatic reduction in the size and cost of the projection system. In parallel, work has been carried out on a prototype for 3D moving images.
Interactive stereo electron microscopy enhanced with virtual reality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, E.Wes; Bastacky, S.Jacob; Schwartz, Kenneth S.
2001-12-17
An analytical system is presented that is used to take measurements of objects perceived in stereo image pairs obtained from a scanning electron microscope (SEM). Our system operates by presenting a single stereo view that contains stereo image data obtained from the SEM, along with geometric representations of two types of virtual measurement instruments, a "protractor" and a "caliper". The measurements obtained from this system are an integral part of a medical study evaluating surfactant, a liquid coating the inner surface of the lung which makes possible the process of breathing. Measurements of the curvature and contact angle of submicron diameter droplets of a fluorocarbon deposited on the surface of airways are performed in order to determine surface tension of the air/liquid interface. This approach has been extended to a microscopic level from the techniques of traditional surface science by measuring submicrometer rather than millimeter diameter droplets, as well as the lengths and curvature of cilia responsible for movement of the surfactant, the airway's protective liquid blanket. An earlier implementation of this approach for taking angle measurements from objects perceived in stereo image pairs using a virtual protractor is extended in this paper to include distance measurements and to use a unified view model. The system is built around a unified view model that is derived from microscope-specific parameters, such as focal length, visible area and magnification. The unified view model ensures that the underlying view models and resultant binocular parallax cues are consistent between synthetic and acquired imagery. When the view models are consistent, it is possible to take measurements of features that are not constrained to lie within the projection plane. The system is first calibrated using non-clinical data of known size and resolution. Using the SEM, stereo image pairs of grids and spheres of known resolution are created to calibrate the measurement system. After calibration, the system is used to take distance and angle measurements of clinical specimens.
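A simplified version of the height measurement is the standard SEM stereo-photogrammetry relation between parallax and tilt separation, sketched below in Python; the paper's unified view model also folds in focal length, visible area, and magnification, which are reduced here to a single magnification factor, and the example numbers are invented.

    # Standard SEM stereo-photogrammetry height estimate from parallax between
    # two images taken at a known tilt separation (a simplified model).
    import math

    def height_from_parallax(parallax_um, tilt_deg, magnification=1.0):
        """parallax_um: lateral feature shift, already scaled to specimen units."""
        return parallax_um / (2.0 * magnification * math.sin(math.radians(tilt_deg) / 2.0))

    # Example: 0.8 um parallax at a 6-degree tilt separation.
    print(round(height_from_parallax(0.8, 6.0), 2), "um")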
Snowstorm Along the China-Mongolia-Russia Borders
NASA Technical Reports Server (NTRS)
2004-01-01
Heavy snowfall on March 12, 2004, across north China's Inner Mongolia Autonomous Region, Mongolia and Russia, caused train and highway traffic to stop for several days along the Russia-China border. This pair of images from the Multi-angle Imaging SpectroRadiometer (MISR) highlights the snow and surface properties across the region on March 13. The left-hand image is a multi-spectral false-color view made from the near-infrared, red, and green bands of MISR's vertical-viewing (nadir) camera. The right-hand image is a multi-angle false-color view made from the red band data of the 46-degree aftward camera, the nadir camera, and the 46-degree forward camera. About midway between the frozen expanse of China's Hulun Nur Lake (along the right-hand edge of the images) and Russia's Torey Lakes (above image center) is a dark linear feature that corresponds with the China-Mongolia border. In the upper portion of the images, many small plumes of black smoke rise from coal and wood fires and blow toward the southeast over the frozen lakes and snow-covered grasslands. Along the upper left-hand portion of the images, in Russia's Yablonovyy mountain range and the Onon River Valley, the terrain becomes more hilly and forested. In the nadir image, vegetation appears in shades of red, owing to its high near-infrared reflectivity. In the multi-angle composite, open-canopy forested areas are indicated by green hues. Since this is a multi-angle composite, the green color arises not from the color of the leaves but from the architecture of the surface cover. The green areas appear brighter at the nadir angle than at the oblique angles because more of the snow-covered surface in the gaps between the trees is visible. Color variations in the multi-angle composite also indicate angular reflectance properties for areas covered by snow and ice. The light blue color of the frozen lakes is due to the increased forward scattering of smooth ice, and light orange colors indicate rougher ice or snow, which scatters more light in the backward direction. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire Earth between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 22525. The panels cover an area of about 355 kilometers x 380 kilometers, and utilize data from blocks 50 to 52 within World Reference System-2 path 126. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Seismic behavior of outrigger truss-wall shear connections using multiple steel angles
NASA Astrophysics Data System (ADS)
Li, Xian; Wang, Wei; Lü, Henglin; Zhang, Guangchang
2016-06-01
An experimental investigation on the seismic behavior of a type of outrigger truss-reinforced concrete wall shear connection using multiple steel angles is presented. Six large-scale shear connection models, which involved a portion of reinforced concrete wall and a shear tab welded onto a steel endplate with three steel angles, were constructed and tested under combined actions of cyclic axial load and eccentric shear. The effects of embedment lengths of steel angles, wall boundary elements, types of anchor plates, and thicknesses of endplates were investigated. The test results indicate that properly detailed connections exhibit desirable seismic behavior and fail due to the ductile fracture of steel angles. Wall boundary elements provide beneficial confinement to the concrete surrounding steel angles and thus increase the strength and stiffness of connections. Connections using whole anchor plates are prone to suffer concrete pry-out failure while connections with thin endplates have a relatively low strength and fail due to large inelastic deformations of the endplates. The current design equations proposed by Chinese Standard 04G362 and Code GB50011 significantly underestimate the capacities of the connection models. A revised design method to account for the influence of previously mentioned test parameters was developed.
Fundamental limits on isoplanatic correction with multiconjugate adaptive optics
NASA Astrophysics Data System (ADS)
Lloyd-Hart, Michael; Milton, N. Mark
2003-10-01
We investigate the performance of a general multiconjugate adaptive optics (MCAO) system in which signals from multiple reference beacons are used to drive several deformable mirrors in the optical beam train. Taking an analytic approach that yields a detailed view of the effects of low-order aberration modes defined over the metapupil, we show that in the geometrical optics approximation, N deformable mirrors conjugated to different ranges can be driven to correct these modes through order N with unlimited isoplanatic angle, regardless of the distribution of turbulence along the line of sight. We find, however, that the optimal deformable mirror shapes are functions of target range, so the best compensation for starlight is in general not the correction that minimizes the wave-front aberration in a laser guide beacon. This introduces focal anisoplanatism in the wave-front measurements that can be overcome only through the use of beacons at several ranges. We derive expressions for the number of beacons required to sense the aberration to arbitrary order and establish necessary and sufficient conditions on their geometry for both natural and laser guide stars. Finally, we derive an expression for the residual uncompensated error by mode as a function of field angle, target range, and MCAO system geometry.
Normal correspondence of tectal maps for saccadic eye movements in strabismus
Economides, John R.; Adams, Daniel L.
2016-01-01
The superior colliculus is a major brain stem structure for the production of saccadic eye movements. Electrical stimulation at any given point in the motor map generates saccades of defined amplitude and direction. It is unknown how this saccade map is affected by strabismus. Three macaques were raised with exotropia, an outwards ocular deviation, by detaching the medial rectus tendon in each eye at age 1 mo. The animals were able to make saccades to targets with either eye and appeared to alternate fixation freely. To probe the organization of the superior colliculus, microstimulation was applied at multiple sites, with the animals either free-viewing or fixating a target. On average, microstimulation drove nearly conjugate saccades, similar in both amplitude and direction but separated by the ocular deviation. Two monkeys showed a pattern deviation, characterized by a systematic change in the relative position of the two eyes with certain changes in gaze angle. These animals' saccades were slightly different for the right eye and left eye in their amplitude or direction. The differences were consistent with the animals' underlying pattern deviation, measured during static fixation and smooth pursuit. The tectal map for saccade generation appears to be normal in strabismus, but saccades may be affected by changes in the strabismic deviation that occur with different gaze angles. PMID:27605534
MISR Images Forest Fires and Hurricane
NASA Technical Reports Server (NTRS)
2000-01-01
These images show forest fires raging in Montana and Hurricane Hector swirling in the Pacific. These two unrelated, large-scale examples of nature's fury were captured by the Multi-angle Imaging SpectroRadiometer (MISR) during a single orbit of NASA's Terra satellite on August 14, 2000.
In the left image, huge smoke plumes rise from devastating wildfires in the Bitterroot Mountain Range near the Montana-Idaho border. Flathead Lake is near the upper left, and the Great Salt Lake is at the bottom right. Smoke accumulating in the canyons and plains is also visible. This image was generated from the MISR camera that looks forward at a steep angle (60 degrees); the instrument has nine different cameras viewing Earth at different angles. The smoke is far more visible when seen at this highly oblique angle than it would be in a conventional, straight-downward (nadir) view. The wide extent of the smoke is evident from comparison with the image on the right, a view of Hurricane Hector acquired from MISR's nadir-viewing camera. Both images show an area of approximately 400 kilometers (250 miles) in width and about 850 kilometers (530 miles) in length. When this image of Hector was taken, the eastern Pacific tropical cyclone was located approximately 1,100 kilometers (680 miles) west of the southern tip of Baja California, Mexico. The eye is faintly visible and measures 25 kilometers (16 miles) in diameter. The storm was beginning to weaken, and 24 hours later the National Weather Service downgraded Hector from a hurricane to a tropical storm. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology. For more information: http://www-misr.jpl.nasa.gov
Caudal Septal Stabilization Suturing Technique to Treat Crooked Noses.
Baykal, Bahadir; Erdim, Ibrahim; Guvey, Ali; Oghan, Fatih; Kayhan, Fatma Tulin
2016-10-01
To rotate the nasal axis and septum to the midline using an L-strut graft and a novel caudal septal stabilization suturing technique to treat crooked noses. Thirty-six patients were included in the study. First, an L-strut graft was prepared by excising the deviated cartilage site in all patients. Second, multiple stabilization suturing, which we describe as a caudal septal stabilization suturing technique with a "fishing net"-like appearance, was applied between the anterior nasal spine and caudal septum in all patients. This new surgical technique, used to rotate the caudal septum, was applied to 22 I-type and 14 C-type crooked noses. Correction rates for the crooked noses were compared between the 2 inclination types with angular estimations. Deviation angles were measured using the AutoCAD 2012 software package and frontal (anterior) views, with the Frankfurt horizontal line parallel to the ground. Improvement in the nasal axis angle was graded into 4 categories (excellent, good, acceptable, and unsuccessful) at the 6-month postoperative evaluation. The success rate was 86.7% (±21.9) for the C-type nasal inclination and 88% (±16.7) for the I-type. The overall success rate of L-strut grafting and caudal septal stabilization suturing in crooked nose surgeries was 87.5% (±18.6). "Unsuccessful" results were not reported in any of the patients. L-strut grafting and caudal septal stabilization suturing techniques are efficacious for crooked noses according to objective measurement analysis results. However, a longer follow-up duration in a larger patient population is needed.
The Wavelength Dependence of the Lunar Phase Curve as Seen by the LRO LAMP
NASA Astrophysics Data System (ADS)
Liu, Y.; Retherford, K. D.; Greathouse, T. K.; Hendrix, A. R.; Mandt, K.; Gladstone, R.; Cahill, J. T.; Egan, A.; Kaufmann, D. E.; Grava, C.; Pryor, W. R.
2016-12-01
The Lunar Reconnaissance Orbiter (LRO) Lyman Alpha Mapping Project (LAMP) provides global coverage of both nightside and dayside of the Moon in the far ultraviolet (FUV) wavelengths. The nightside observations use roughly uniform diffuse illumination sources from interplanetary medium Lyman-α sky glow and UV-bright stars so that traditional photometric corrections do not apply. In contrast, the dayside observations use sunlight as its illumination source where bidirectional reflectance is measured. The bidirectional reflectance is dependent on the incident, emission, and phase angles as well as the soil properties. Thus the comparisons of dayside mapping and nightside mapping techniques offer a method for cross-comparing the photometric correction factors because the observations are made under different lighting and viewing conditions. Specifically, the nightside data well constrain the single-scattering coefficient. We'll discuss the wavelength dependence of the lunar phase curve as seen by the LAMP instrument in dayside data. Our preliminary results indicate that the reflectance in the FUV wavelengths decreases with the increasing phase angles from 0° to 90°, similar to the phase curve in the UV-visible wavelengths as studied by Hapke et al. (2012) using LRO wide angle camera (WAC) data, among other visible-wavelength lunar studies. Particularly, we'll report how coherent backscattering and shadow hiding contribute to the opposition surge, given the fact that the albedo at FUV wavelengths is extremely low and thus multiple scattering is significantly less important. Finally, we'll report the derived Hapke parameters at FUV wavelengths for our study areas.
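As a purely illustrative example of fitting a phase curve, the Python sketch below fits a simple exponential-plus-constant decline to synthetic reflectance values over 0°-90°; the study itself uses the Hapke photometric model, which is not reproduced here, and all numbers below are invented.

    # Illustrative empirical phase-curve fit (exponential decline with phase
    # angle); not the Hapke model used in the study.
    import numpy as np
    from scipy.optimize import curve_fit

    def phase_curve(g_deg, a, b, c):
        return a * np.exp(-b * g_deg) + c

    g = np.linspace(0, 90, 19)                       # phase angles, deg
    refl = 0.03 * np.exp(-0.02 * g) + 0.005          # synthetic FUV reflectance
    refl += np.random.default_rng(1).normal(0, 2e-4, g.size)

    popt, _ = curve_fit(phase_curve, g, refl, p0=(0.03, 0.02, 0.005))
    print("fitted (a, b, c):", np.round(popt, 4))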
Adjustable-Viewing-Angle Endoscopic Tool for Skull Base and Brain Surgery
NASA Technical Reports Server (NTRS)
Bae, Youngsam; Liao, Anna; Manohara, Harish; Shahinian, Hrayr
2008-01-01
The term Multi-Angle and Rear Viewing Endoscopic tooL (MARVEL) denotes an auxiliary endoscope, now undergoing development, that a surgeon would use in conjunction with a conventional endoscope to obtain additional perspective. The role of the MARVEL in endoscopic brain surgery would be similar to the role of a mouth mirror in dentistry. Such a tool is potentially useful for in-situ planetary geology applications for the close-up imaging of unexposed rock surfaces in cracks or those not in the direct line of sight. A conventional endoscope provides mostly a frontal view, that is, a view along its longitudinal axis and, hence, along a straight line extending from an opening through which it is inserted. The MARVEL could be inserted through the same opening as that of the conventional endoscope, but could be adjusted to provide a view from almost any desired angle. The MARVEL camera image would be displayed, on the same monitor as that of the conventional endoscopic image, as an inset within the conventional endoscopic image. For example, while viewing a tumor from the front in the conventional endoscopic image, the surgeon could simultaneously view the tumor from the side or the rear in the MARVEL image, and could thereby gain additional visual cues that would aid in precise three-dimensional positioning of surgical tools to excise the tumor. Indeed, a side or rear view through the MARVEL could be essential in a case in which the object of surgical interest was not visible from the front. The conceptual design of the MARVEL exploits the surgeon's familiarity with endoscopic surgical tools. The MARVEL would include a miniature electronic camera and miniature radio transmitter mounted on the tip of a surgical tool derived from an endo-scissor (see figure). The inclusion of the radio transmitter would eliminate the need for wires, which could interfere with manipulation of this and other surgical tools. The handgrip of the tool would be connected to a linkage similar to that of an endo-scissor, but the linkage would be configured to enable adjustment of the camera angle instead of actuation of a scissor blade. It is envisioned that thicknesses of the tool shaft and the camera would be less than 4 mm, so that the camera-tipped tool could be swiftly inserted and withdrawn through a dime-size opening. Electronic cameras having dimensions of the order of millimeters are already commercially available, but their designs are not optimized for use in endoscopic brain surgery. The variety of potential endoscopic, thoracoscopic, and laparoscopic applications can be expected to increase as further development of electronic cameras yields further miniaturization and improvements in imaging performance.