NASA Astrophysics Data System (ADS)
Basilevsky, A. T.; Shalygina, O. S.; Bondarenko, N. V.; Shalygin, E. V.; Markiewicz, W. J.
2017-09-01
The aim of this work is a comparative study of several typical radar-dark parabolas, the neighboring plains and some other geologic units seen in the study areas, which include the craters Adivar, Bassi, Bathsheba, du Chatelet and Sitwell, at two depth scales: the upper several meters of the studied object, accessible through the Magellan-based microwave (12.6 cm wavelength) properties (microwave emissivity, Fresnel reflectivity, large-scale surface roughness, and radar cross-section), and the upper hundreds of microns, characterized by the 1 micron emissivity resulting from analysis of the near-infrared (NIR) radiation from the night side of the Venusian surface measured by the Venus Monitoring Camera (VMC) onboard Venus Express (VEx).
NASA Astrophysics Data System (ADS)
Sánchez-Lavega, A.; Chen-Chen, H.; Ordoñez-Etxeberria, I.; Hueso, R.; del Río-Gaztelurrutia, T.; Garro, A.; Cardesín-Moinelo, A.; Titov, D.; Wood, S.
2018-01-01
The Visual Monitoring Camera (VMC) onboard the Mars Express (MEx) spacecraft is a simple camera originally intended to monitor the release of the Beagle 2 lander and later used for public outreach. Here, we employ VMC as a scientific instrument to study and characterize high-altitude aerosol events (dust and condensates) observed at the Martian limb. More than 21,000 images taken between 2007 and 2016 have been examined to detect and characterize elevated dust layers, dust storms and clouds at the limb. We report a total of 18 events for which we give their main properties (areographic location, maximum altitude, projected size at the limb, Martian solar longitude and local time of occurrence). The top altitudes of these phenomena ranged from 40 to 85 km and their horizontal extent at the limb ranged from 120 to 2000 km. They mostly occurred at equatorial and tropical latitudes (between ∼30°N and 30°S), at morning and afternoon local times, in the southern fall and northern winter seasons. None of them are related to the orographic clouds that typically form around volcanoes. Three of these events have been studied in detail using simultaneous images taken by the MARCI instrument onboard Mars Reconnaissance Orbiter (MRO) and atmospheric properties predicted by the Mars Climate Database (MCD) General Circulation Model. This has allowed us to determine the three-dimensional structure and nature of these events: one of them was a regional dust storm and the other two were water ice clouds. Analyses based on MCD predictions and/or MARCI images for the other cases studied indicate that the rest of the events most probably correspond to water ice clouds.
State of the Venus Atmosphere from Venus Express at the time of MESSENGER FLy- By
NASA Astrophysics Data System (ADS)
Limaye, S. S.; Markiewicz, W. J.; Titov, D.; Piccione, G.; Baines, K. H.; Robinson, M.
2007-12-01
The Venus Monitoring Camera (VMC) and the Visible and Infrared Thermal Imaging Spectrometer (VIRTIS) instruments on the Venus Express spacecraft have been observing Venus since orbit insertion in April 2006. The state of the atmosphere in 2006 was in the form of a hemispheric vortex centered over the south pole, with presumably another one in the northern hemisphere. The VMC and VIRTIS data have been used to determine cloud motions as well as the structure and organization of the atmospheric circulation from the data collected since June 2006. In June 2007, the MESSENGER spacecraft flew past Venus, observing the planet on approach and departure. We report on the atmosphere of Venus as it appeared during this period.
Limb clouds and dust on Mars from VMC-Mars Express images
NASA Astrophysics Data System (ADS)
Sanchez-Lavega, Agustin; Chen, Hao Chen; Ordoñez-Etxeberria, Iñaki; Hueso, Ricardo; Cardesin, Alejandro; Titov, Dima; Wood, Simon
2016-10-01
We have used the large image database generated by the Visual Monitoring Camera (VMC) onboard Mars Express to first search for, and then study, the properties of projected features (dust and water clouds) on the planet's limb. VMC is a small camera that has served since 2007 for public education and outreach (Ormston et al., 2011). The camera consists of a CMOS sensor with a Bayer filter mosaic providing color images in the wavelength range 400-900 nm. Since the observations were performed in an opportunistic mode (not planned on a scientific basis), the events were captured at random. In total, 17 limb features were observed in the period from April 2007 to August 2015. Their extent at the limb varies from about 100 km for the smaller ones to 2,000 km for the largest. They show a rich morphology consisting of a series of patchy elements with a uniform top layer located at altitudes ranging from 30 to 85 km. The features are mostly concentrated between latitudes 45°N and 45°S, covering most longitudes, although a greater concentration occurs between -90° and +90° from the reference meridian (i.e., longitude 0°, East or West). Most events in the southern hemisphere occurred at orbital longitudes 0-90° (autumn season) and in the north at orbital longitudes 330-360° (winter season). We present a detailed study of two of these events, one corresponding to a dust storm also observed with the MARCI instrument onboard Mars Reconnaissance Orbiter, and a second corresponding to a water cloud.
NASA Astrophysics Data System (ADS)
Khatuntsev, I. V.; Patsaeva, M. V.; Titov, D. V.; Ignatiev, N. I.; Turin, A. V.; Fedorova, A. A.; Markiewicz, W. J.
2017-11-01
For more than 8 years the Venus Monitoring Camera (VMC) onboard the Venus Express orbiter performed continuous imaging of the Venus cloud layer in UV, visible and near-IR filters. We applied the correlation approach to sequences of the near-IR images at 965 nm to track cloud features and determine the wind field in the middle and lower cloud (49-57 km). From the VMC images spanning December 2006 through August 2013 we derived zonal and meridional components of the wind field. In low-to-middle latitudes (5-65°S) the velocity of the retrograde zonal wind was found to be 68-70 m/s. The meridional wind velocity slowly decreases from a peak value of +5.8 ± 1.2 m/s at 15°S to 0 at 65-70°S. The mean meridional speed has a positive sign at 5-65°S, suggesting equatorward flow. This result, together with earlier measurements of the poleward flow at the cloud tops, indicates the presence of a closed Hadley cell in the altitude range 55-65 km. Long-term variations of the zonal and meridional velocity components were found during 1,200 Earth days of observation. At 20° ± 5°S the zonal wind speed increases from -67.18 ± 1.81 m/s to -77.30 ± 2.49 m/s. The meridional wind gradually increases from +1.30 ± 1.82 m/s to +8.53 ± 2.14 m/s. Following Bertaux et al. (2016), we attribute this long-term trend to the influence of surface topography on dynamical processes in the atmosphere via the upward propagation of gravity waves, which became apparent in the VMC observations due to the slow drift of the Venus Express orbit over Aphrodite Terra.
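The correlation approach to cloud tracking mentioned above can be illustrated with a minimal sketch: a template from one image is cross-correlated against shifted windows of a later image, and the best-matching shift gives the cloud displacement (and hence a wind vector, once scaled by pixel size and time separation). The synthetic images, window size and search range below are illustrative assumptions, not the actual VMC pipeline.

```python
import numpy as np

def track_displacement(img1, img2, max_shift=5):
    """Find the (dy, dx) shift that maximizes the normalized
    correlation between a central template of img1 and img2
    (brute-force search over integer shifts)."""
    best, best_shift = -np.inf, (0, 0)
    h, w = img1.shape
    # central region of img1 serves as the template
    t = img1[max_shift:h - max_shift, max_shift:w - max_shift]
    t = (t - t.mean()) / (t.std() + 1e-12)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            c = img2[max_shift + dy:h - max_shift + dy,
                     max_shift + dx:w - max_shift + dx]
            c = (c - c.mean()) / (c.std() + 1e-12)
            score = (t * c).mean()
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

# synthetic example: a cloud "blob" displaced by (2, 3) pixels
y, x = np.mgrid[0:40, 0:40]
blob = np.exp(-((y - 20)**2 + (x - 20)**2) / 20.0)
blob2 = np.exp(-((y - 22)**2 + (x - 23)**2) / 20.0)
dy, dx = track_displacement(blob, blob2)
```

A real retrieval would convert the recovered pixel shift into m/s using the image scale and the time between frames, and would reject low-correlation matches.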
Cloud level winds from UV and IR images obtained by VMC onboard Venus Express
NASA Astrophysics Data System (ADS)
Khatuntsev, Igor; Patsaeva, Marina; Titov, Dmitri; Ignatiev, Nikolay; Turin, Alexander; Bertaux, Jean-Loup
2017-04-01
For eight years, the Venus Monitoring Camera (VMC) [1] onboard the Venus Express orbiter observed the upper cloud layer of Venus. The largest set of images was obtained in the UV (365 nm), visible (513 nm) and two infrared channels, 965 nm and 1010 nm. The UV dayside images were used to study the atmospheric circulation at the Venus cloud tops [2], [3]. Mean zonal and meridional wind profiles and their variability were derived from cloud tracking of UV images. In low latitudes the mean retrograde zonal wind at the cloud top (67±2 km) is about 95 m/s, with a maximum of about 102 m/s at 40-50°S. Poleward of 50°S the zonal wind quickly fades with latitude. The mean poleward meridional wind slowly increases from zero at the equator to about 10 m/s at 50°S. Poleward of this latitude, the absolute value of the meridional component monotonically decreases to zero at the pole. The VMC observations suggest a clear diurnal signature in the wind field. They also indicate a long-term trend for the zonal wind speed at low latitudes to increase from 85 m/s at the beginning of the mission to 110 m/s by the middle of 2012. The trend was explained by the influence of surface topography on the zonal flow [4]. Cloud feature tracking in the IR images provided information about winds in the middle cloud deck (55±4 km). In low and middle latitudes (5-65°S) the IR mean retrograde zonal velocity is about 68-70 m/s. In contrast to the poleward flow at the cloud tops, equatorward motions dominate in the middle cloud, with a maximum speed of 5.8±1.2 m/s at latitude 15°S. The meridional speed slowly decreases to 0 at 65-70°S. At low latitudes the zonal and meridional speeds show long-term variations. Following [4], we explain the observed long-term trend of the zonal and meridional components by the influence of the surface topography of the highland region Aphrodite Terra on dynamic processes in the middle cloud deck through gravity waves. Acknowledgements: I.V. Khatuntsev, M.V. 
Patsaeva, N.I. Ignatiev, J.-L. Bertaux were supported by the Ministry of Education and Science of Russian Federation grant 14.W03.31.0017. References: [1] Markiewicz W. J. et al.: Venus Monitoring Camera for Venus Express // Planet. Space Sci., 55(12), 1701-1711. doi:10.1016/j.pss.2007.01.004, 2007. [2] Khatuntsev I.V. et al.: Cloud level winds from the Venus Express Monitoring Camera imaging // Icarus, 226, 140-158. 2013. [3] Patsaeva M.V. et al.: The relationship between mesoscale circulation and cloud morphology at the upper cloud level of Venus from VMC/Venus Express // Planet. Space Sci., 113(08), 100-108, doi:10.1016/j.pss.2015.01.013, 2015. [4] Bertaux J.-L. et al.: Influence of Venus topography on the zonal wind and UV albedo at cloud top level: The role of stationary gravity waves // J. Geophys. Res. Planets, 121, 1087-1101, doi:10.1002/2015JE004958, 2016.
Search for ongoing volcanic activity on Venus: Case study of Maat Mons, Sapas Mons and Ozza Mons
NASA Astrophysics Data System (ADS)
Basilevsky, A. T.; Shalygin, E. V.; Markiewicz, W. J.; Titov, D. V.; Roatsch, Th.; Kreslavsky, M. A.
2012-04-01
Maat Mons volcano and its vicinity show evidence of geologically very recent volcanism. We consider Venus Monitoring Camera (VMC) night-side images of this area. Analysis of VMC images taken in 12 observation sessions during the period from 31 Oct 2007 to 15 Jun 2009 did not reveal any suspicious high-emission spots that could be signatures of presently ongoing volcanic eruptions. If Maat Mons had an eruption history similar to that of Mauna Loa, Hawaii, in the 20th century, the probability of observing an eruption in this VMC observation sequence would be about 8%, meaning that the absence of a detection does not imply that Maat is inactive in the present epoch. Blurring of the thermal radiation coming from the Venus surface by the planet's atmosphere decreases the detectability of the thermal signature of fresh lavas. We simulated near-infrared images of the study area with artificially added lava flows having a surface temperature of 1000 K and various areas. These simulations showed that 1 km2 lava flows should be marginally visible to VMC. Increasing the lava surface area to 2-3 km2 makes the flows visible on the plains, and increasing it to 4-5 km2 makes them visible even in deep rift zones. Typical individual lava flows on Mauna Loa are a few km2; however, they often formed over weeks to months, and the instantaneous size of the hot flow surface was usually much smaller. Thus the detection probability is significantly lower than 8%, but it is far from negligible. Our analysis suggests that further searches of the Maat Mons area and other areas including young rift zones make sense and should be continued. A more effective search could be done if observations simultaneously covered most of the night side of Venus for relatively long (years) periods of continuous observation.
NASA Astrophysics Data System (ADS)
Shalygin, E. V.; Basilevsky, A. T.; Markiewicz, W. J.; Titov, D. V.; Kreslavsky, M. A.; Roatsch, Th.
2012-12-01
We report on attempts to find ongoing volcanic activity in near-infrared night-time observations with the Venus Monitoring Camera (VMC) onboard Venus Express. Here we consider VMC images of the area of Maat Mons volcano and its vicinity, which, as follows from analysis of the Magellan data, shows evidence of geologically very recent volcanism. Analysis of VMC images taken in 12 observation sessions during the period from 31 October 2007 to 15 June 2009 did not reveal any suspicious high-emission spots that could be signatures of presently ongoing volcanic eruptions. We compare this time sequence of observations with the history of eruptions of the Mauna Loa volcano, Hawaii, in the 20th century. This comparison shows that if Maat Mons had an eruption history similar to that of Mauna Loa, the probability of observing an eruption in this VMC observation sequence would be about 8%, meaning that the absence of a detection does not imply that Maat is inactive in the present epoch. These estimates do not consider the absorption and blurring of the thermal radiation coming from the Venus surface by the planet's atmosphere and clouds, which decrease the detectability of the thermal signature of fresh lavas. To assess this effect we simulated near-infrared images of the study area with artificially added circular and rectangular (with different aspect ratios) lava flows having a surface temperature of 1000 K and various areas. These simulations showed that 1 km2 lava flows should be marginally visible to VMC. Increasing the lava surface area to 2-3 km2 makes the flows visible on the plains, and increasing it to 4-5 km2 makes them visible even in deep rift zones. Typical individual lava flows on Mauna Loa are a few km2; however, they often formed over weeks to months, and the instantaneous size of the hot flow surface was usually much smaller. 
Thus the detection probability is significantly lower than 8%, but it is far from negligible. Our analysis suggests that further searches of the Maat Mons area and other areas including young rift zones make sense and should be continued. A more effective search could be done if observations simultaneously covered most of the night side of Venus for relatively long (years) periods of continuous observation.
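The quoted ~8% detection probability depends on how the eruption history overlaps the discrete observation sessions. That kind of estimate can be sketched with a small Monte Carlo simulation; the onset rate, eruption duration and session spacing below are illustrative assumptions, not the Mauna Loa statistics used in the paper.

```python
import random

def detection_probability(n_trials=20000, span_days=593,
                          n_sessions=12, rate_per_day=1.5e-4,
                          duration_days=20.0, seed=42):
    """Monte Carlo estimate of the chance that at least one of
    n_sessions snapshot observations, spread evenly over span_days,
    coincides with an ongoing eruption.  Eruption onsets follow a
    Poisson process; each eruption lasts duration_days.
    All parameter values are illustrative."""
    rng = random.Random(seed)
    sessions = [span_days * (i + 0.5) / n_sessions for i in range(n_sessions)]
    hits = 0
    for _ in range(n_trials):
        # draw onsets over the span, starting early enough that an
        # eruption already in progress at t=0 can be counted
        t = -duration_days
        onsets = []
        while t < span_days:
            t += rng.expovariate(rate_per_day)
            if t < span_days:
                onsets.append(t)
        if any(on <= s <= on + duration_days
               for on in onsets for s in sessions):
            hits += 1
    return hits / n_trials

p = detection_probability()
```

Raising the assumed onset rate or eruption duration raises the detection probability roughly in proportion to the fraction of time the volcano spends erupting.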
Periodical oscillation of zonal wind velocities at the cloud top of Venus
NASA Astrophysics Data System (ADS)
Kouyama, T.; Imamura, T.; Nakamura, M.; Satoh, T.; Futaana, Y.
2010-12-01
The zonal wind velocity on Venus increases with height and reaches about 100 m s-1 at the cloud top level (~70 km). This speed is approximately 60 times faster than the rotation speed of the solid body of Venus (~1.6 m s-1 at the equator), a phenomenon called "super-rotation". From previous observations it is known that the super-rotation changes on long timescales. At the cloud top level, it was suggested that the super-rotation exhibits an oscillation with a period of a few years, based on observations made by NASA's Pioneer Venus Orbiter from 1979 to 1985 (Del Genio et al., 1990). However, the period, amplitude, spatial structure and mechanism of this long-period oscillation are not well understood. Venus Express (VEX) of the European Space Agency has been observing Venus since its orbit insertion in April 2006. The Venus Monitoring Camera (VMC) onboard VEX has an ultraviolet (UV) filter (365 nm), with which it has taken day-side cloud images at the cloud top level. Such images exhibit various cloud features produced by an unknown UV absorber in the atmosphere. To investigate the characteristics of long-timescale variations of the super-rotation, we analyzed zonal velocity fields derived from UV cloud images from May 2006 to January 2010 using a cloud tracking method. UV imaging with VMC is done when the spacecraft is in the ascending portion of its elongated polar orbit. Since the orbital plane is nearly fixed in inertial space, the local time of VMC/UV observations changes with a periodicity of one Venus year. As a result, the periods when VMC observations covered day-side areas of Venus large enough for cloud tracking are not continuous. For deriving wind velocities we were able to use cloud images taken in 280 orbits during this period. 
The derived zonal wind velocity from 10°S to 40°S latitude shows a prominent year-to-year variation, which is well fitted by a periodic oscillation with a period of about 260 Earth days, although not all phases of the variation were observed. The 260-day period is longer than the length of one Venus day (~117 days) and somewhat longer than the orbital revolution period of Venus (~225 days). In the equatorial region, the amplitude of this oscillation is about 12 m s-1 against a background zonal wind speed of about 95 m s-1. The oscillation period is shorter than that of the long-term oscillation reported by PVO. Such an oscillation has most probably not been reported before because previous Venus observations lacked the temporal coverage needed to identify a period of this length.
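A fit of the kind described above, a periodic oscillation on top of a mean wind, can be sketched as a least-squares sinusoid fit with a scan over trial periods. The synthetic record below uses the abstract's approximate numbers (95 m/s mean, 12 m/s amplitude, 260-day period); the actual analysis of the 280-orbit record is of course more involved.

```python
import numpy as np

def fit_oscillation(t, v, periods):
    """For each trial period P, least-squares fit
    v(t) = v0 + A*sin(2*pi*t/P + phi), which is linear in
    (v0, A*cos(phi), A*sin(phi)); return the period, amplitude
    and mean giving the smallest residual."""
    best = None
    for P in periods:
        M = np.column_stack([np.ones_like(t),
                             np.sin(2 * np.pi * t / P),
                             np.cos(2 * np.pi * t / P)])
        coef, *_ = np.linalg.lstsq(M, v, rcond=None)
        resid = np.sum((v - M @ coef) ** 2)
        amp = np.hypot(coef[1], coef[2])
        if best is None or resid < best[0]:
            best = (resid, P, amp, coef[0])
    _, P, amp, mean = best
    return P, amp, mean

# synthetic wind record: 95 m/s background, 12 m/s oscillation, 260-day period
t = np.linspace(0, 1300, 120)
v = 95.0 + 12.0 * np.sin(2 * np.pi * t / 260.0)
P, amp, mean = fit_oscillation(t, v, periods=np.arange(200, 321, 5))
```

With real, irregularly sampled cloud-tracking data the same linear fit applies unchanged, since nothing in it assumes uniform sampling.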
Video-mediated communication to support distant family connectedness.
Furukawa, Ryoko; Driessnack, Martha
2013-02-01
It can be difficult to maintain family connections with geographically distant members. However, advances in computer-human interaction (CHI) systems, including video-mediated communication (VMC), are emerging. While VMC does not completely substitute for physical face-to-face communication, it appears to provide a sense of virtual copresence through the addition of visual and contextual cues to verbal communication between family members. The purpose of this study was to explore current patterns of VMC use, experiences, and family functioning among self-identified VMC users separated geographically from their families. A total of 341 participants (ages 18 to above 70) completed an online survey and Family APGAR. Ninety-six percent of the participants reported that VMC was the most common communication method used, and 60% used VMC at least once per week. The most common reason cited for using VMC over other methods of communication was the addition of visual cues. A significant difference between the Family APGAR scores and the number of positive comments about the VMC experience was also found. This exploratory study provides insight into the acceptance of VMC and its usefulness in maintaining connections with distant family members.
An Analysis of Helicopter Pilot Scan Techniques While Flying at Low Altitudes and High Speed
2012-09-01
Manager SV Synthetic Vision TFH Total Flight Hours TOFT Tactical Operational Flight Trainer VFR Visual Flight Rules VMC Visual Meteorological...Crognale, 2008). Recently, the use of synthetic vision (SV) and a heads-up- display (HUD) have been a topic of discussion in the aviation community... Synthetic vision uses external cameras to provide the pilot with an enhanced view of the outside world, usually with the assistance of night vision
NASA Astrophysics Data System (ADS)
Patsaeva, Marina; Khatuntsev, Igor; Turin, Alexander; Zasova, Ludmila; Bertaux, Jean-loup
2017-04-01
A set of UV (365 nm) and IR (965 nm) images obtained by the Venus Monitoring Camera (VMC) was used to study the circulation of the mesosphere at two altitude levels. Displacement vectors were obtained by wind tracking in automated mode for the observation period 2006-2014 for UV images [1,2] and 2006-2012 for IR images. The long observation period and good longitude-latitude coverage by individual measurements allowed us to focus on the study of the slowly varying periodic component. The influence of the underlying surface topography on the mean zonal wind speed at the UV level at low latitudes, discovered by visual methods, has been described in [3]. Analysis of the longitude-latitude distribution of the zonal and meridional components for 172,000 individual digital wind measurements (257 orbits) at the UV level and 32,000 (150 orbits) at the IR level allows us to compare the influence of Venus topography on the zonal and meridional components at both cloud levels. At the UV level (67±2 km), longitudinal profiles of the zonal speed for different latitude bins at low latitudes correlate with surface profiles. These correlations are most noticeable in the region of Aphrodite Terra. The correlation shift depends on the surface height. Albedo profiles also correlate with surface profiles at high latitudes. Zonal speed profiles at low latitudes (5-15°S) depend not only on altitude but also on local time. The minimum of the zonal speed is observed over Aphrodite Terra (90-100°E) at about 12 LT. A diurnal harmonic with an extremum over Aphrodite Terra was found. It can be considered a superposition of a solar-synchronous tide and a stationary wave caused by interaction of the wind stream with the surface. At the IR level (55±4 km) a correlation between surface topography and meridional speed was found in the region 10-30°S. 
The average meridional flow is equatorward at the IR level, but in the region Aphrodite Terra it is poleward. Acknowledgements: M.V. Patsaeva, I.V. Khatuntsev and J.-L. Bertaux were supported by the Ministry of Education and Science of Russian Federation grant 14.W03.31.0017. References: [1] Khatuntsev, I.V., M.V. Patsaeva, D.V. Titov, N.I. Ignatiev, A.V. Turin, S.S. Limaye, W.J. Markiewicz, M. Almeida, T. Roatsch and R. Moissl (2013), Cloud level winds from the Venus Express Monitoring Camera imaging., Icarus, 226, 140-158. [2] Patsaeva, M.V., I.V. Khatuntsev, D.V. Patsaev, D.V. Titov, N.I. Ignatiev, W.J. Markiewicz, A.V. Rodin (2015), The relationship between mesoscale circulation and cloud morphology at the upper cloud level of Venus from VMC/Venus Express, Planet. Space Sci. 113(08), 100-108, doi:10.1016/j.pss.2015.01.013. [3] Bertaux, J.-L., I. V. Khatuntsev, A. Hauchecorne, W. J. Markiewicz, E. Marcq, S. Lebonnois, M. Patsaeva, A. Turin, and A. Fedorova (2016), Influence of Venus topography on the zonal wind and UV albedo at cloud top level: The role of stationary gravity waves, J. Geophys. Res. Planets, 121, 1087-1101, doi:10.1002/2015JE004958.
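The shifted correlation between longitudinal wind profiles and surface topography described above can be sketched as a lagged cross-correlation over a cyclic longitude grid. The highland location, profile widths and 30° shift below are illustrative assumptions, not the measured Aphrodite Terra values.

```python
import numpy as np

def best_lag(wind, topo, max_lag_bins):
    """Lag (in longitude bins) maximizing the Pearson correlation
    between a zonal-wind profile and a cyclically shifted topography
    profile; positive lag means the wind pattern sits downstream."""
    def corr(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {lag: corr(wind, np.roll(topo, lag))
              for lag in range(-max_lag_bins, max_lag_bins + 1)}
    return max(scores, key=scores.get)

# synthetic profiles on a 5-degree longitude grid (72 bins)
lon = np.arange(0.0, 360.0, 5.0)
d = (lon - 100.0 + 180.0) % 360.0 - 180.0   # angular distance from 100E
topo = np.exp(-d**2 / 800.0)                # a "highland" centred at 100E
wind = np.roll(topo, 6)                     # wind anomaly 30 deg downstream
lag_bins = best_lag(wind, topo, max_lag_bins=12)
shift_deg = lag_bins * 5                    # bins converted back to degrees
```

The cyclic `np.roll` is the right shift operator here because longitude wraps at 360°; a non-periodic profile would need a windowed correlation instead.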
VizieR Online Data Catalog: VISTA Magellanic Survey (VMC) catalog (Cioni+, 2011)
NASA Astrophysics Data System (ADS)
Cioni, M.-R. L.; Clementini, G.; Girardi, L.; Guandalini, R.; Gullieuszik, M.; Miszalski, B.; Moretti, M.-I.; Ripepi, V.; Rubele, S.; Bagheri, G.; Bekki, K.; Cross, N.; de Blok, W. J. G.; de Grijs, R.; Emerson, J. P.; Evans, C. J.; Gibson, B.; Gonzales-Solares, E.; Groenewegen, M. A. T.; Irwin, M.; Ivanov, V. D.; Lewis, J.; Marconi, M.; Marquette, J.-B.; Mastropietro, C.; Moore, B.; Napiwotzki, R.; Naylor, T.; Oliveira, J. M.; Read, M.; Sutorius, E.; van Loon, J. Th.; Wilkinson, M. I.; Wood, P. R.
2017-11-01
The VISTA survey of the Magellanic Clouds system (VMC) is a homogeneous and uniform YJKs survey of ~184 deg2 across the Magellanic system. Observations were obtained with the VISTA telescope as part of the VMC survey (ESO program 179.B-2003). This data release is based on the observations of twelve new VMC survey tiles: LMC 35, 42, 43, 73, 93, SMC 43, 52, 54, BRI 28, 35, and STR 11, 21. Observations were acquired between November 2009 and August 2013. (1 data file).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Lin; Zhang, Ming; Yan, Rui
Viral myocarditis (VMC) is closely related to apoptosis, oxidative stress, innate immunity, and energy metabolism, all of which are linked to mitochondrial dysfunction. The close nexus between mitochondrial dynamics and cardiovascular disease with mitochondrial dysfunction has been researched in depth, but there is still no relevant report in viral myocarditis. In this study, we aimed to explore the role of Dynamin-related protein 1 (Drp1)-linked mitochondrial fission in VMC. Mice were inoculated with Coxsackievirus B3 (CVB3) and treated with mdivi1 (a Drp1 inhibitor). Protein expression of Drp1 was increased in mitochondria and decreased in the cytoplasm, accompanied by excessive mitochondrial fission, in VMC mice. In addition, mdivi1 treatment attenuated inflammatory cell infiltration in the myocardium of the mice and reduced serum cardiac troponin I (cTnI) and creatine kinase-MB (CK-MB) levels. Mdivi1 also improved the survival rate of the mice and mitigated mitochondrial dysfunction, reflected in the up-regulated mitochondrial marker enzymatic activities of succinate dehydrogenase (SDH) and cytochrome c oxidase (COX) and in the mitochondrial membrane potential (MMP). At the same time, mdivi1 rescued the body weight loss, myocardial injury and cardiomyocyte apoptosis. Furthermore, a decrease in LVEDs and increases in EF and FS were detected by echocardiography, indicating improved myocardial function. Thus, Drp1-linked excessive mitochondrial fission contributed to VMC, and mdivi1 may be a potential therapeutic approach. - Highlights: • The expression of Drp1 is significantly increased in mitochondria and decreased in the cytoplasm in VMC mice. • Drp1-linked excessive mitochondrial fission is involved in VMC. • Mdivi1 treatment mitigates mitochondrial damage, inflammation and apoptosis in VMC mice. • The disturbance of mitochondrial dynamics may be a new therapeutic target for VMC.
2011-12-01
Suitcase Portable Charger (SPC), Vehicle - Mounted Charger (VMC), Solar Portable Power System (SPACES) 15. NUMBER OF PAGES 77 16. PRICE CODE 17...battery (MCCOC, 2010): the Soldier Portable Charger (SPC), the Vehicle Mounted Charger (VMC), and the Solar Portable Alternative Communication Energy...Suitcase Portable Charger TO&E Table of Organization and Equipment UHF Ultra High Frequency VHF Very High Frequency VMC Vehicle Mounted
NASA Astrophysics Data System (ADS)
Bertaux, Jean-Loup; Khatunstsev, Igor; Hauchecorne, Alain; Markiewicz, Wojciech; Marcq, Emmanuel; Lebonnois, Sébastien; Patsaeva, Marina; Turin, Alexander
2015-04-01
UV images (at 365 nm) of the Venus cloud top collected with the VMC camera on board Venus Express allowed us to derive a large number of wind measurements at altitude 67±2 km from tracking of cloud features in the period 2006-2012. Both manual (45,600) and digital (391,600) individual wind measurements over 127 orbits were analyzed, showing various patterns with latitude and local time. A new longitude-latitude geographic map of the zonal wind shows a conspicuous region of strongly decreased zonal wind, a remarkable feature that was unknown until now. While the average zonal wind near the equator (from 5°S to 15°S) is -100.9 m/s in the longitude range 200-330°, it reaches -83.4 m/s in the range 60-100°, a difference of 17.5 m/s. When compared to the altimetry map of Venus, the zonal wind pattern is found to be well correlated with the underlying relief in the region of Aphrodite Terra, with a downstream shift of about 30° (˜3,200 km). We interpret this pattern as the result of stationary gravity waves produced at ground level by the uplift of air when the horizontal wind encounters a mountain slope. These waves can propagate up to the cloud top level, break there, and transfer their momentum to the zonal flow. A similar phenomenon is known to operate on Earth, with an influence on mesospheric winds. The LMD GCM for Venus was run with and without topography and with and without a parameterization of gravity waves, and does not reproduce the observed change of velocity near the equator. The cloud albedo map at 365 nm also varies in longitude and latitude. We speculate that this might be the result of increased vertical mixing associated with wave breaking and a decreased abundance of the UV absorber, which produces the contrast in the images. The impact of these new findings on current super-rotation theories remains to be assessed. This work was triggered by the presence of a conspicuous peak at 117 days in a time series of wind measurements. 
This is the length of the solar day as seen from the ground on Venus. Since VMC measurements are preferentially made in a local time window centred on the sub-solar point, any parameter with a geographic longitude dependence will show a peak at 117 days.
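The 117-day signature noted above can be demonstrated in a few lines: a quantity fixed in geographic longitude, sampled at a fixed local time, repeats with the solar-day period, because the longitude beneath the observation window drifts through 360° once per solar day. The amplitude, phase and cadence below are illustrative assumptions; 117 days approximates the Venus solar day.

```python
import numpy as np

solar_day = 117.0                 # Venus solar day, in Earth days
t = np.arange(0.0, 1170.0, 1.0)   # ten solar days of daily sampling
# geographic longitude beneath a fixed-local-time observation window
# drifts through 360 degrees once per solar day
lon = (360.0 * t / solar_day) % 360.0
# any purely longitude-dependent quantity (here an illustrative
# 15 m/s zonal-wind anomaly on a -100 m/s background)
v = -100.0 + 15.0 * np.cos(np.radians(lon - 90.0))
# the sampled series repeats with the solar-day period
shift = int(solar_day)            # 117 samples at 1-day cadence
periodic_error = np.max(np.abs(v[shift:] - v[:-shift]))
```

Any periodogram of such a series would therefore show a peak at 117 days even though nothing in the underlying wind field varies in time.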
Human Factors Engineering #3 Crewstation Assessment for the OH-58F Helicopter
2014-03-01
Additionally, workload was assessed for level of interoperability 2 (LOI 2) tasks that the aircrew performed with an unmanned aircraft system (UAS...TTP tactics, techniques, and procedures UAS unmanned aircraft system 47 VFR visual flight rules VMC visual meteorological conditions VTR...For example, pilots often perform navigation tasks, communicate via multiple radios, monitor aircraft systems , and assist the pilot on the controls
Inhibition of 12/15-LO ameliorates CVB3-induced myocarditis by activating Nrf2.
Ai, Feng; Zheng, Jiayong; Zhang, Yanwei; Fan, Taibing
2017-06-25
Cardiac 12/15-lipoxygenase (12/15-LO) has been reported to be markedly up-regulated and involved in the development of heart failure. Nuclear factor E2-related factor 2 (Nrf2) plays anti-inflammatory and anti-oxidation roles in response to oxidative stress. However, the role of 12/15-LO in viral myocarditis (VMC) and its underlying molecular mechanism have not yet been elucidated. Here, we demonstrated that 12/15-LO was up-regulated and Nrf2 was down-regulated in coxsackievirus B3 (CVB3)-infected mice and cardiac myocytes. Baicalein, a specific inhibitor of 12/15-LO, was employed to investigate the role of 12/15-LO and its underlying mechanism in VMC. We found that baicalein treatment alleviated CVB3-induced VMC in mouse models, as demonstrated by fewer inflammatory lesions in the heart tissue and lower CK-MB levels. Moreover, baicalein treatment attenuated CVB3-induced inflammatory cytokine production and oxidative stress. Mechanistic analysis suggested that baicalein treatment relieved the CVB3-induced reduction of Nrf2 and heme oxygenase-1 (HO-1) expression. Taken together, our study indicates that inhibition of 12/15-LO ameliorates VMC by activating Nrf2, providing a new strategy for the treatment of VMC. Copyright © 2017 Elsevier B.V. All rights reserved.
Safety recommendation : visual meteorological conditions (VMC)
DOT National Transportation Integrated Search
1999-06-01
On April 4, 1998, at 1034 eastern standard time, N111LR, a Cessna 525 CitationJet, and N737WD, a Cessna 172 Skyhawk, collided in flight over Marietta, Georgia. Visual meteorological conditions (VMC) prevailed at the time of the accident. The Citation...
Akkurt, M; Çakır, A; Shidfar, M; Çelikkol, B P; Söylemezoğlu, G
2012-08-13
We used molecular markers associated with seedlessness in grapes, namely SCC8, SCF27 and VMC7f2, to improve the efficiency of seedless grapevine breeding via marker-assisted selection (MAS). DNA from 372 F₁ hybrid progeny from the cross between the seeded "Alphonse Lavallée" and the seedless "Sultani" was amplified by PCR using the three markers. After digestion of the SCC8 marker amplification products with the restriction enzyme BglII, 40 individuals showed homozygous SCC8+/SCC8+ alleles at the seed development inhibitor (SdI) locus. DNA from 80 of the progeny amplified with the SCF27 marker produced bands; 174 individuals had the 198-bp allele of the VMC7f2 marker associated with seedlessness. In the second year, based on MAS, 183 F₁ hybrids were designated as seedless grapevine candidates because they were positive for at least one marker. Twenty individuals were selected as genetic resources for future studies on seedless grapevine breeding because they carried alleles for all three markers associated with seedlessness. The VMC7f2 SSR marker was identified as the marker most strongly associated with seedlessness.
Decision-Making in Flight with Different Convective Weather Information Sources: Preliminary Results
NASA Technical Reports Server (NTRS)
Latorella, Kara A.; Chamberlain, James P.
2004-01-01
This paper reports preliminary and partial results of a flight experiment to address how General Aviation (GA) pilots use weather cues to make flight decisions. This research presents pilots with weather cue conditions typically available to GA pilots in visual meteorological conditions (VMC) and instrument meteorological conditions (IMC) today, as well as in IMC with a Graphical Weather Information System (GWIS). These preliminary data indicate that both VMC and GWIS-augmented IMC conditions result in better confidence, information sufficiency and perceived performance than the current IMC condition. For all these measures, the VMC and GWIS-augmented conditions seemed to provide similar pilot support. These preliminary results are interpreted for their implications on GWIS display design, training, and operational use guidelines. Final experimental results will compare these subjective data with objective data of situation awareness and decision quality.
NASA Astrophysics Data System (ADS)
Oschlisniok, J.; Pätzold, M.; Häusler, B.; Tellmann, S.; Bird, M.; Andert, T.; Remus, S.; Krüger, C.; Mattei, R.
2011-10-01
Earth's nearest planetary neighbour, Venus, is shrouded by a roughly 22 km thick, three-layered cloud deck, which is located approximately 48 km above the surface and extends to an altitude of about 70 km. The clouds are mostly composed of sulfuric acid, which is responsible for a strong absorption of radio signals at microwave frequencies, as observed in radio occultation experiments. The absorption of the radio signal is used to determine the abundance of H2SO4, enabling a detailed study of the H2SO4 height distribution within the cloud deck. The Venus Express spacecraft has been orbiting Venus since 2006. The Radio Science Experiment VeRa onboard probes the atmosphere with radio signals at 3.4 cm (X-band) and 13 cm (S-band). Absorptivity profiles of the 3.4 cm radio wave and the resulting vertical sulfuric acid profiles in the cloud region of Venus' atmosphere are presented. The three-layered structure and a distinct latitudinal variation of H2SO4 are observed. Convective atmospheric motions within the equatorial latitudes, which transport absorbing material from lower to higher altitudes, are clearly visible. Results of the Venus Monitoring Camera (VMC) and the Visible and Infrared Thermal Imaging Spectrometer (VIRTIS) are compared with the VeRa results.
Mesospheric circulation at the cloud top level of Venus according to Venus Monitoring Camera images
NASA Astrophysics Data System (ADS)
Khatuntsev, Igor; Patsaeva, Marina; Ignatiev, Nikolay; Titov, Dmitri; Markiewicz, Wojciech; Turin, Alexander
We present results of wind speed measurements at the cloud top level of Venus derived from manual cloud tracking in the UV (365 nm) and IR (965 nm) channels of the Venus Monitoring Camera (VMC) [1] on board the Venus Express mission. Cloud details have maximal contrast in the UV range. More than 90 orbits have been processed and about 30,000 manual vectors obtained. The observations cover more than four Venusian years. The zonal wind speed shows a dependence on local solar time; possible diurnal and semidiurnal components are observed [2]. According to the averaged latitudinal profile of winds at the level of the upper clouds: -The zonal speed increases slightly in absolute value from 90 m/s at the equator to 105 m/s at latitudes around 47 degrees; -The period of zonal rotation has its maximum at the equator (about 5 Earth days) and its minimum (about 3 days) at latitudes around 50 degrees, poleward of which the period increases slightly toward the south pole; -The meridional speed is zero at the equator and increases linearly in absolute value to about 10 m/s at 50 degrees latitude, where the negative sign denotes motion from the equator toward the pole; -From 50 to 80 degrees the meridional speed decreases again in absolute value toward zero. IR (965±10 nm) day-side images can also be used for wind tracking. The zonal wind speeds obtained from them at low and middle latitudes are systematically lower than those derived from the UV images; the average zonal speed from the IR day-side images at low and middle latitudes is about 65-70 m/s. This can be interpreted as the IR range sensing deeper layers of the mesosphere than the UV. References [1] Markiewicz W. J. et al. (2007) Planet. Space Sci. 55(12), 1701-1711. [2] Moissl R. et al. (2008) J. Geophys. Res. 113, doi:10.1029/2008JE003117.
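The quoted rotation periods follow directly from the zonal speeds and the length of the latitude circle at cloud-top altitude. A small sketch of that conversion (assuming a Venus radius of 6051.8 km and a cloud-top altitude of ~67 km, neither stated in the abstract):

```python
import math

R_VENUS_KM = 6051.8     # mean planetary radius (assumed value)
CLOUD_TOP_KM = 67.0     # assumed cloud-top altitude

def rotation_period_days(zonal_speed_ms: float, lat_deg: float) -> float:
    """Time (Earth days) to traverse the latitude circle at cloud-top level."""
    circumference_m = (2 * math.pi * (R_VENUS_KM + CLOUD_TOP_KM) * 1e3
                       * math.cos(math.radians(lat_deg)))
    return circumference_m / zonal_speed_ms / 86400.0

print(round(rotation_period_days(90.0, 0.0), 1))    # equator at 90 m/s → 4.9
print(round(rotation_period_days(105.0, 50.0), 1))  # ~50 deg at 105 m/s → 2.7
```

Both values reproduce the ~5-day equatorial and ~3-day mid-latitude periods quoted above.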
Vascular Mural Cells Promote Noradrenergic Differentiation of Embryonic Sympathetic Neurons.
Fortuna, Vitor; Pardanaud, Luc; Brunet, Isabelle; Ola, Roxana; Ristori, Emma; Santoro, Massimo M; Nicoli, Stefania; Eichmann, Anne
2015-06-23
The sympathetic nervous system controls smooth muscle tone and heart rate in the cardiovascular system. Postganglionic sympathetic neurons (SNs) develop in close proximity to the dorsal aorta (DA) and innervate visceral smooth muscle targets. Here, we use the zebrafish embryo to ask whether the DA is required for SN development. We show that noradrenergic (NA) differentiation of SN precursors temporally coincides with vascular mural cell (VMC) recruitment to the DA and vascular maturation. Blocking vascular maturation inhibits VMC recruitment and blocks NA differentiation of SN precursors. Inhibition of platelet-derived growth factor receptor (PDGFR) signaling prevents VMC differentiation and also blocks NA differentiation of SN precursors. NA differentiation is normal in cloche mutants that are devoid of endothelial cells but have VMCs. Thus, PDGFR-mediated mural cell recruitment mediates neurovascular interactions between the aorta and sympathetic precursors and promotes their noradrenergic differentiation. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Ellis, Kyle E.; Arthur, Jarvis J.; Nicholas, Stephanie N.; Kiggins, Daniel
2017-01-01
A Commercial Aviation Safety Team (CAST) study of 18 worldwide loss-of-control accidents and incidents determined that the lack of external visual references was associated with a flight crew's loss of attitude awareness or energy state awareness in 17 of these events. Therefore, CAST recommended development and implementation of virtual day-Visual Meteorological Condition (VMC) display systems, such as synthetic vision systems, which can promote flight crew attitude awareness similar to a day-VMC environment. This paper describes the results of a high-fidelity, large transport aircraft simulation experiment that evaluated virtual day-VMC displays and a "background attitude indicator" concept as an aid to pilots in recovery from unusual attitudes. Twelve commercial airline pilots performed multiple unusual attitude recoveries and both quantitative and qualitative dependent measures were collected. Experimental results and future research directions under this CAST initiative and the NASA "Technologies for Airplane State Awareness" research project are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NSTec Environmental Restoration
2008-08-01
This Post-Closure Inspection and Monitoring Report (PCIMR) provides the results of inspections and monitoring for Corrective Action Unit (CAU) 110, Area 3 WMD [Waste Management Division] U-3ax/bl Crater. This PCIMR includes an analysis and summary of the site inspections, repairs and maintenance, meteorological information, and soil moisture monitoring data obtained at CAU 110 for the period July 2007 through June 2008. Site inspections of the cover were performed quarterly to identify any significant changes to the site requiring action. The overall condition of the cover, perimeter fence, and use restriction (UR) warning signs was good. However, settling was observed that exceeded the action level as specified in Section VII.B.7 of the Hazardous Waste Permit Number NEV HW021 (Nevada Division of Environmental Protection, 2005). This permit states that cracks or settling greater than 15 centimeters (6 inches) deep that extend 1.0 meter (m) (3 feet [ft]) or more on the cover will be evaluated and repaired within 60 days of detection. Two areas of settling and cracks were observed on the south and east edges of the cover during the September 2007 inspection that exceeded the action level and required repair. The areas were repaired in October 2007. Additional settling and cracks were observed along the east side of the cover during the December 2007 inspection that exceeded the action level, and the area was repaired in January 2008. Significant animal burrows were also observed during the March 2008 inspection, and small mammal trapping and relocation was performed in April 2008. The semiannual subsidence surveys were performed in September 2007 and March 2008. No significant subsidence was observed in the survey data. Monument 5 shows the greatest amount of subsidence (-0.02 m [-0.08 ft] compared to the baseline survey of 2000).
This amount is negligible and near the resolution of the survey instruments; it does not indicate that subsidence is occurring overall on the cover. Soil moisture results obtained to date indicate that the CAU 110 cover is performing well. Time Domain Reflectometry (TDR) data show regular changes in the shallow subsurface with significant rain events; however, major changes in volumetric moisture content (VMC) appear to be limited to 1.8 m (6 ft) below ground surface or shallower, depending on the location on the cover. At 2.4 m (8 ft) below the cover surface, TDR data show soil moisture content remained between 9 and 15 percent VMC, depending on the TDR location. The west portion of the cover tends to reflect a lower moisture content and less variability in annual fluctuations in moisture content at this depth. Results of soil moisture monitoring of the cover indicate that VMC at the compliance level (at 2.4 m [8 ft] below the cover surface) is approaching a steady state. If the moisture content at this level remains consistent with recent years, then a recommendation may be made for establishing compliance levels for future monitoring.
NASA Astrophysics Data System (ADS)
Ripepi, V.; Moretti, M. I.; Clementini, G.; Marconi, M.; Cioni, M. R.; Marquette, J. B.; Tisserand, P.
2012-09-01
The VISTA Magellanic Cloud (VMC, PI M. R. Cioni) survey is collecting Ks-band time-series photometry of the system formed by the two Magellanic Clouds (MCs) and the "bridge" that connects them. These data are used to build Ks-band light curves of the MC RR Lyrae stars and classical Cepheids and to determine absolute distances and the 3D geometry of the whole system using the Ks-band period-luminosity (PLKs), period-luminosity-colour (PLC) and Wesenheit relations applicable to these types of variables. As an example of the survey's potential we present results from the VMC observations of two fields centred, respectively, on the South Ecliptic Pole and the 30 Doradus star-forming region of the Large Magellanic Cloud. The VMC Ks-band light curves of the RR Lyrae stars in these two regions have very good photometric quality, with typical errors for the individual data points in the range ~0.02 to 0.05 mag. The Cepheids have excellent light curves (typical errors of ~0.01 mag). The average Ks magnitudes derived for both types of variables were used to derive PLKs relations that are in general good agreement, within the errors, with the literature data, and show a smaller scatter than previous studies.
Pulsating stars in the VMC survey
NASA Astrophysics Data System (ADS)
Cioni, Maria-Rosa L.; Ripepi, Vincenzo; Clementini, Gisella; Groenewegen, Martin A. T.; Moretti, Maria I.; Muraveva, Tatiana; Subramanian, Smitha
2017-09-01
The VISTA survey of the Magellanic Clouds system (VMC) began observations in 2009 and has since collected multi-epoch data at Ks, as well as multi-band data in Y and J, for a wide range of stellar populations across the Magellanic system. Among them are pulsating variable stars: Cepheids, RR Lyrae stars, and asymptotic giant branch stars, which represent useful tracers of the host system's geometry. Based on observations made with VISTA at ESO under programme ID 179.B-2003.
Cheng, Henry; Reddy, Aneela; Sage, Andrew; Lu, Jinxiu; Garfinkel, Alan; Tintut, Yin; Demer, Linda L
2012-01-01
In embryogenesis, structural patterns, such as vascular branching, may form via a reaction-diffusion mechanism in which activator and inhibitor morphogens guide cells into periodic aggregates. We previously found that vascular mesenchymal cells (VMCs) spontaneously aggregate into nodular structures and that morphogen pairs regulate the aggregation into patterns of spots and stripes. To test the effect of a focal change in activator morphogen on VMC pattern formation, we created a focal zone of high cell density by plating a second VMC layer within a cloning ring over a confluent monolayer. After 24 h, the ring was removed and pattern formation monitored by phase-contrast microscopy. At days 2-8, the patterns progressed from uniform distributions to swirl, labyrinthine and spot patterns. Within the focal high-density zone (HDZ) and a narrow halo zone, cells aggregated into spot patterns, whilst in the outermost zone of the plate, cells formed a labyrinthine pattern. The area occupied by aggregates was significantly greater in the outermost zone than in the HDZ or halo. The rate of pattern progression within the HDZ increased as a function of its plating density. Thus, focal differences in cell density may drive pattern formation gradients in tissue architecture, such as vascular branching. Copyright © 2012 S. Karger AG, Basel.
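The activator-inhibitor mechanism invoked above can be illustrated with a generic one-dimensional reaction-diffusion system. The sketch below uses Gray-Scott kinetics with arbitrary parameters as a stand-in; it is not the authors' model of VMC aggregation, and the focal high-density seed only loosely mimics the HDZ plating:

```python
import numpy as np

# Generic 1-D activator-inhibitor (Gray-Scott) sketch; all parameters are
# illustrative, not fitted to the vascular mesenchymal cell experiments.
n, steps, dt = 200, 2000, 1.0
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065
rng = np.random.default_rng(0)

u = np.ones(n)                     # "substrate" field
v = np.zeros(n)                    # "activator" field
v[n // 2 - 5:n // 2 + 5] = 0.25 + 0.05 * rng.random(10)   # focal seed (HDZ-like)

def lap(a):
    """Periodic 1-D Laplacian via circular shifts."""
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(steps):
    uvv = u * v * v
    u += dt * (Du * lap(u) - uvv + F * (1 - u))
    v += dt * (Dv * lap(v) + uvv - (F + k) * v)

print("max activator:", float(v.max()))
```

Depending on the chosen feed and kill rates, such systems settle into spots, stripes or labyrinths, which is the qualitative behaviour the paper manipulates via plating density.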
Necroptosis may be a novel mechanism for cardiomyocyte death in acute myocarditis.
Zhou, Fei; Jiang, Xuejun; Teng, Lin; Yang, Jun; Ding, Jiawang; He, Chao
2018-05-01
In this study, we investigated the roles of RIP1/RIP3 mediated cardiomyocyte necroptosis in CVB3-induced acute myocarditis. Serum concentrations of creatinine kinase (CK), CK-MB, and cardiac troponin I were detected using a Hitachi Automatic Biochemical Analyzer in a mouse model of acute VMC. Histological changes in cardiac tissue were observed by light microscope and expression levels of RIP1/RIP3 in the cardiac tissue were detected via Western blot and immunohistochemistry. The data showed that RIP1/RIP3 was highly expressed in cardiomyocytes in the acute VMC mouse model and that the necroptosis pathway specific blocker, Nec-1, dramatically reduced the myocardial damage by downregulating the expression of RIP1/RIP3. These findings provide evidence that necroptosis plays a significant role in cardiomyocyte death and it is a major pathway for cell death in acute VMC. Blocking the necroptosis pathway may serve as a new therapeutic option for the treatment of acute viral myocarditis.
Botrel, L; Acqualagna, L; Blankertz, B; Kübler, A
2017-11-01
Brain-computer interfaces (BCIs) allow for controlling devices through modulation of sensorimotor rhythms (SMR), yet a substantial proportion of users are unable to achieve sufficient accuracy. Here, we investigated whether visuo-motor coordination (VMC) training or Jacobson's progressive muscle relaxation (PMR) prior to BCI use would increase later performance compared to a control group that performed a reading task (CG). Running the study in two different BCI labs, we achieved a joint sample size of N=154 naïve participants. No significant effect of either intervention (VMC, PMR, control) was found on resulting BCI performance. Relaxation level and visuo-motor performance were associated with later BCI performance in one BCI lab but not in the other. These mixed results do not indicate a strong potential of VMC or PMR for boosting performance, yet further research with different training parameters or experimental designs is needed to complete the picture. Copyright © 2017 Elsevier B.V. All rights reserved.
Zhu, Ming-jun; Wang, Guo-juan; Wang, Yong-xia; Pu, Jie-lin; Liu, Hong-jun; Yu, Hai-bin
2010-02-01
To study the effect of Xinjining extract (XJN) on the inward rectifier potassium current (I(K1)) in ventricular myocytes (VMC) of guinea pigs and its anti-arrhythmic mechanism at the ion-channel level. Single VMCs were isolated enzymatically, and the whole-cell patch clamp recording technique was used to record I(K1) in VMCs perfused with XJN at different concentrations (1.25, 2.50, 5.00 g/L; six samples for each). The steady current and conductance of the inward component of I(K1), as well as the outward component of peak I(K1) and the corresponding conductance, were recorded at a test voltage of -110 mV. The suppression rate of XJN on the inward component of I(K1) was 9.54% ± 5.81%, 34.82% ± 15.03%, and 59.52% ± 25.58% at concentrations of 1.25, 2.50, and 5.00 g/L, respectively, and that for the outward component of peak I(K1) was 23.94% ± 7.45%, 52.98% ± 19.62%, and 71.42% ± 23.01%, respectively (all P<0.05). Moreover, different concentrations of XJN also reduced I(K1) conductance. XJN has an inhibitory effect on I(K1) in guinea pig VMCs, with stronger inhibition of the outward component than of the inward component at the same concentration, which may be one of the mechanisms of its anti-arrhythmic effect.
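The reported dose-response means allow a rough half-maximal-suppression estimate by interpolating between adjacent concentrations on a log scale. This is an illustration only: it uses the mean suppression rates above, ignores their ± spreads, and is not an analysis the paper performs:

```python
# Log-linear interpolation of the concentration giving 50% suppression,
# using the mean suppression rates reported in the abstract.
concs = [1.25, 2.50, 5.00]          # g/L
inward = [9.54, 34.82, 59.52]       # % suppression, inward component of I(K1)
outward = [23.94, 52.98, 71.42]     # % suppression, outward component of peak I(K1)

def half_max_conc(concs, effects, target=50.0):
    """Interpolate the target effect level on a log-concentration axis."""
    pairs = zip(zip(concs, effects), zip(concs[1:], effects[1:]))
    for (c0, e0), (c1, e1) in pairs:
        if e0 <= target <= e1:
            f = (target - e0) / (e1 - e0)
            return c0 * (c1 / c0) ** f
    return None                      # target outside the measured range

print(round(half_max_conc(concs, inward), 2))    # → 3.83 (g/L)
print(round(half_max_conc(concs, outward), 2))   # → 2.33 (g/L)
```

The lower value for the outward component is consistent with the abstract's conclusion that inhibition of the outward component is stronger at a given concentration.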
Aishima, Shinichi; Tanaka, Yuki; Kubo, Yuichiro; Shirabe, Ken; Maehara, Yoshihiko; Oda, Yoshinao
2014-11-01
Morphologic features and the neoplastic potential of bile duct adenoma (BDA) and von Meyenburg complex (VMC)-like ducts arising in chronic liver disease were unknown. Thirty-five BDAs and 12 VMC-like duct lesions were observed in 39 cases with chronic liver disease. BDAs were divided into the EMA-cytoplasmic type (n = 14) and EMA-luminal type (n = 21). EMA-cytoplasmic BDA was composed of a proliferation of cuboidal to low-columnar cells forming an open lumen with NCAM(+)/MUC6(-), resembling an interlobular bile duct. EMA-luminal BDA showed uniform cuboidal cells with a narrow lumen and NCAM(++)/MUC6(++), resembling a ductular reaction. VMC-like ducts showed positive MUC1 expression and negative MUC6. Expression of S100P, glucose transporter-1 (GLUT-1) and insulin-like growth factor II mRNA-binding protein 3 (IMP-3) was not detected in the three lesion types. p16 expression was higher than that of the ductular reaction, and the Ki67 and p53 indexes were very low (<1.0%). Large-sized EMA-luminal BDAs show sclerotic stroma. We classified small nodular lesions of ductal or ductular cells in chronic hepatitis and cirrhosis into the following groups: BDA, interlobular bile duct type; BDA, ductular/peribiliary gland type; and VMC-like duct. They may represent reactive proliferation rather than neoplastic lesions. © 2014 Japanese Society of Pathology and Wiley Publishing Asia Pty Ltd.
Venus Express - the First European Mission to Venus
NASA Astrophysics Data System (ADS)
Titov, D. V.; Svedhem, H.; Venus Express Team
2005-08-01
The ESA Venus Express mission is based on reuse of the Mars Express spacecraft and of payload available from the Mars Express and Rosetta missions. In less than three years the spacecraft was rebuilt, with modifications to cope with the harsh environment at Venus, and fully tested. Venus Express will be launched at the end of October 2005 from Baykonur (Kazakhstan) by a Russian Soyuz-Fregat rocket. At the beginning of April 2006 the spacecraft will be inserted into a polar orbit around Venus with a pericentre of 250 km, an apocentre of 66,000 km and a period of 24 hours. The planned mission duration is two Venus sidereal days (~500 Earth days), with the possibility to extend the mission for two more Venus days. Venus Express aims at a global investigation of the Venus atmosphere and plasma environment, and addresses some important aspects of surface physics. The science goals comprise investigation of the atmospheric structure and composition, cloud layers and hazes, global circulation and radiative balance, plasma and escape processes, and surface properties. These topics will be addressed by seven instruments onboard the satellite: the Analyzer of Space Plasmas (ASPERA), Magnetometer (MAG), IR Fourier spectrometer (PFS), spectrometer for solar and stellar occultation (SPICAV), radio science experiment (VeRa), visible and IR imaging spectrometer (VIRTIS), and Venus Monitoring Camera (VMC). Scientific operations will include observations in pericentre, off-pericentre and apocentre sessions, limb scans, solar and stellar occultations, radio occultations, bi-static radar, and solar corona sounding.
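The quoted orbit is internally consistent: Kepler's third law applied to a 250 km × 66,000 km orbit gives a period close to the stated 24 hours. A quick check, assuming standard values for Venus' GM and radius (not given in the abstract):

```python
import math

# Consistency check of the quoted Venus Express orbit, not mission data.
MU_VENUS = 3.24859e5   # GM of Venus, km^3/s^2 (assumed standard value)
R_VENUS = 6051.8       # mean radius of Venus, km (assumed standard value)

peri_alt, apo_alt = 250.0, 66_000.0                   # km, from the abstract
a = (2 * R_VENUS + peri_alt + apo_alt) / 2.0          # semi-major axis, km
period_h = 2 * math.pi * math.sqrt(a**3 / MU_VENUS) / 3600.0

print(round(period_h, 1))   # → 23.7, i.e. roughly the quoted 24 h
```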
Aviation spatial orientation in relationship to head position and attitude interpretation.
Patterson, F R; Cacioppo, A J; Gallimore, J J; Hinman, G E; Nalepka, J P
1997-06-01
Conventional wisdom describing aviation spatial awareness assumes that pilots view a moving horizon through the windscreen. This assumption presupposes head alignment with the cockpit "Z" axis during both visual (VMC) and instrument (IMC) maneuvers. Even though this visual paradigm is widely accepted, its accuracy has not been verified. The purpose of this research was to determine if a visually induced neck reflex causes pilots to align their heads toward the horizon, rather than the cockpit vertical axis. Based on literature describing reflexive head orientation in terrestrial environments, it was hypothesized that during simulated VMC aircraft maneuvers, pilots would align their heads toward the horizon. Fourteen military pilots completed two simulated flights in a stationary dome simulator. The flight profile consisted of five separate tasks, four of which evaluated head tilt during exposure to unique visual conditions and one of which examined occurrences of disorientation during unusual attitude recovery. During simulated visual flight maneuvers, pilots tilted their heads toward the horizon (p < 0.0001). Under IMC, pilots maintained head alignment with the vertical axis of the aircraft. During VMC maneuvers pilots reflexively tilt their heads toward the horizon, away from the Gz axis of the cockpit. Presumably, this behavior stabilizes the retinal image of the horizon (the primary visual-spatial cue), against which peripheral images of the cockpit (the secondary visual-spatial cue) appear to move. Spatial disorientation, airsickness, and control reversal errors may be related to shifts in visual-vestibular sensory alignment during visual transitions between VMC (head tilted) and IMC (Gz head-stabilized) conditions.
Performance of quantum Monte Carlo for calculating molecular bond lengths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleland, Deidre M., E-mail: deidre.cleland@csiro.au; Per, Manolo C., E-mail: manolo.per@csiro.au
2016-03-28
This work investigates the accuracy of real-space quantum Monte Carlo (QMC) methods for calculating molecular geometries. We present the equilibrium bond lengths of a test set of 30 diatomic molecules calculated using variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC) methods. The effect of different trial wavefunctions is investigated using single determinants constructed from Hartree-Fock (HF) and Density Functional Theory (DFT) orbitals with LDA, PBE, and B3LYP functionals, as well as small multi-configurational self-consistent field (MCSCF) multi-determinant expansions. When compared to experimental geometries, all DMC methods exhibit smaller mean absolute deviations (MADs) than those given by HF, DFT, and MCSCF. The most accurate MAD of 3 ± 2 × 10⁻³ Å is achieved using DMC with a small multi-determinant expansion. However, the more computationally efficient multi-determinant VMC method has a similar MAD of only 4.0 ± 0.9 × 10⁻³ Å, suggesting that QMC forces calculated from the relatively simple VMC algorithm may often be sufficient for accurate molecular geometries.
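The MAD statistic used above is simply the mean of the absolute differences between computed and experimental bond lengths. A minimal sketch with made-up values, not the paper's 30-molecule test set:

```python
# Mean absolute deviation (MAD) of computed vs. experimental bond lengths.
# All values below are hypothetical, for illustration only.
calc = {"H2": 0.742, "N2": 1.096, "CO": 1.131}   # computed lengths, Angstrom
expt = {"H2": 0.741, "N2": 1.098, "CO": 1.128}   # experimental lengths, Angstrom

mad = sum(abs(calc[m] - expt[m]) for m in calc) / len(calc)

print(round(mad * 1e3, 1))   # MAD in units of 10^-3 Angstrom → 2.0
```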
Sharafetdinov, Kh Kh; Plotnikova, O A; Zykina, V V; Mal'tsev, G Iu; Sokol'nikov, A A; Kaganov, B S
2011-01-01
Addition of a vitamin-mineral complex (VMC) to a standard hypocaloric diet leads to positive dynamics of anthropometric characteristics in patients with first- and second-degree obesity, comparable to the effectiveness of the standard dietotherapy (dietary treatment) traditionally used in the complex treatment of obesity. Addition of 1.8 mg of vitamin B2 as part of the VMC to a hypocaloric diet is shown to be insufficient to correct marginal riboflavin status when calorie-reduced diets are used.
NASA Astrophysics Data System (ADS)
Subramanian, Smitha; Rubele, Stefano; Sun, Ning-Chen; Girardi, Léo; de Grijs, Richard; van Loon, Jacco Th.; Cioni, Maria-Rosa L.; Piatti, Andrés E.; Bekki, Kenji; Emerson, Jim; Ivanov, Valentin D.; Kerber, Leandro; Marconi, Marcella; Ripepi, Vincenzo; Tatton, Benjamin L.
2017-05-01
We study the luminosity function of intermediate-age red clump stars using deep, near-infrared photometric data covering ~20 deg² located throughout the central part of the Small Magellanic Cloud (SMC), comprising the main body and the galaxy's eastern wing, based on observations obtained with the VISTA Survey of the Magellanic Clouds (VMC). We identified regions that show a foreground population (~11.8 ± 2.0 kpc in front of the main body) in the form of a distance bimodality in the red clump distribution. The most likely explanation for the origin of this feature is tidal stripping from the SMC rather than the extended stellar haloes of the Magellanic Clouds and/or tidally stripped stars from the Large Magellanic Cloud. The homogeneous and continuous VMC data trace this feature in the direction of the Magellanic Bridge and, particularly, identify (for the first time) the inner region (~2-2.5 kpc from the centre) from where the signatures of interactions start becoming evident. This result provides observational evidence of the formation of the Magellanic Bridge from tidally stripped material from the SMC.
Effectively Transforming IMC Flight into VMC Flight: An SVS Case Study
NASA Technical Reports Server (NTRS)
Glaab, Louis J.; Hughes, Monic F.; Parrish, Russell V.; Takallu, Mohammad A.
2006-01-01
A flight-test experiment was conducted using the NASA LaRC Cessna 206 aircraft. Four primary flight and navigation display concepts, including baseline and Synthetic Vision System (SVS) concepts, were evaluated in the local area of Roanoke Virginia Airport, flying visual and instrument approach procedures. A total of 19 pilots, from 3 pilot groups reflecting the diverse piloting skills of the GA population, served as evaluation pilots. Multi-variable Discriminant Analysis was applied to three carefully selected and markedly different operating conditions with conventional instrumentation to provide an extension of traditional analysis methods as well as provide an assessment of the effectiveness of SVS displays to effectively transform IMC flight into VMC flight.
Wang, Dan; Chen, Yiming; Jiang, Jianbin; Zhou, Aihua; Pan, Lulu; Chen, Qi; Qian, Yan; Chu, Maoping; Chen, Chao
2014-09-01
This study aims to compare the effects of carvedilol and metoprolol in alleviating viral myocarditis (VMC) induced by coxsackievirus B3 (CVB3) in mice. A total of 116 Balb/c mice were included in this study. Ninety-six mice were inoculated intraperitoneally with CVB3 to induce VMC. The CVB3 inoculated mice were evenly divided into myocarditis group (n=32), carvedilol group (n=32) and metoprolol group (n=32). Twenty mice (control group) were inoculated intraperitoneally with normal saline. Hematoxylin and eosin staining and histopathologic scoring were used to investigate the effects of carvedilol and metoprolol on myocardial histopathologic changes on days 3 and 5. In addition, serum cTn-I levels, cytokine levels and virus titers were determined using chemiluminescence immunoassay, enzyme-linked immunosorbent assay and plaque assay, respectively, on days 3 and 5. Finally, the levels of phosphorylated p38MAPK were studied using immunohistochemical staining and Western blotting on day 5. Carvedilol had a stronger effect than metoprolol in reducing the pathological scores of VMC induced by CVB3. Both carvedilol and metoprolol reduced the levels of cTn-I, but the effect of carvedilol was stronger. Carvedilol and metoprolol decreased the levels of myocardial pro-inflammatory cytokines and increased the expression of anti-inflammatory cytokine, with the effects of carvedilol being stronger than those of metoprolol. Carvedilol had a stronger effect in reducing myocardial virus concentration compared with metoprolol. Carvedilol was stronger than metoprolol in decreasing the levels of myocardial phosphorylated p38MAPK. In conclusion, carvedilol was more potent than metoprolol in ameliorating myocardial lesions in VMC, probably due to its stronger modulation of the balance between pro- and anti-inflammatory cytokines by inhibiting the activation of p38MAPK pathway through β1- and β2-adrenoreceptors. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Niederhofer, Florian; Cioni, Maria-Rosa L.; Rubele, Stefano; Schmidt, Thomas; Bekki, Kenji; de Grijs, Richard; Emerson, Jim; Ivanov, Valentin D.; Oliveira, Joana M.; Petr-Gotzens, Monika G.; Ripepi, Vincenzo; Sun, Ning-Chen; van Loon, Jacco Th.
2018-05-01
We use deep multi-epoch point-spread function (PSF) photometry taken with the Visible and Infrared Survey Telescope for Astronomy (VISTA) to measure and analyze the proper motions of stars within the Galactic globular cluster 47 Tucanae (47 Tuc, NGC 104). The observations are part of the ongoing near-infrared VISTA survey of the Magellanic Cloud system (VMC). The data analyzed in this study correspond to one VMC tile, which covers a total sky area of 1.77 deg². Absolute proper motions with respect to 9070 background galaxies are calculated from a linear regression model applied to the positions of stars in 11 epochs in the Ks filter. The data extend over a total time baseline of about 17 months. We found an overall median proper motion of the stars within 47 Tuc of (μαcos(δ), μδ) = (+5.89 ± 0.02 (statistical) ± 0.13 (systematic), -2.14 ± 0.02 (statistical) ± 0.08 (systematic)) mas yr⁻¹, based on the measurements of ~35,000 individual sources between 5' and 42' from the cluster center. We compared our result to the proper motions from the newest US Naval Observatory CCD Astrograph Catalog (UCAC5), which includes data from the Gaia data release 1. Selecting cluster members (~2700 stars), we found a median proper motion of (μαcos(δ), μδ) = (+5.30 ± 0.03 (statistical) ± 0.70 (systematic), -2.70 ± 0.03 (statistical) ± 0.70 (systematic)) mas yr⁻¹. Comparing the results with measurements in the literature, we found that the values derived from the VMC data are consistent with the UCAC5 result, and are close to measurements obtained using the Hubble Space Telescope. We combined our proper motion results with radial velocity measurements from the literature and reconstructed the orbit of 47 Tuc, finding that the cluster is on an orbit with a low ellipticity and is confined within the inner 7.5 kpc of the Galaxy.
We show that the use of an increased time baseline in combination with PSF-determined stellar centroids in crowded regions significantly improves the accuracy of the method. In future works, we will apply the methods described here to more VMC tiles to study in detail the kinematics of the Magellanic Clouds. Based on observations made with VISTA at the Paranal Observatory under program ID 179.B-2003.
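The core operation behind the measurement described above is an ordinary least-squares slope fit of stellar position against epoch. A minimal sketch with a synthetic, noise-free track, not the survey's PSF photometry or its background-galaxy reference frame:

```python
# Least-squares slope of position vs. time: the basic proper-motion estimator.
def fit_slope(t, x):
    """Ordinary least-squares slope of x against t."""
    n = len(t)
    tm = sum(t) / n
    xm = sum(x) / n
    return (sum((ti - tm) * (xi - xm) for ti, xi in zip(t, x))
            / sum((ti - tm) ** 2 for ti in t))

# 11 epochs spread over ~17 months (in years), as in the VMC tile;
# a toy star drifting at +5.89 mas/yr in RA*cos(dec), with no noise.
epochs = [i * (17.0 / 12.0) / 10.0 for i in range(11)]
ra_offsets_mas = [5.89 * t for t in epochs]

print(round(fit_slope(epochs, ra_offsets_mas), 2))   # → 5.89 (mas/yr)
```

With real data, each star's offsets are measured relative to the background galaxies, and the per-star slopes are combined into the median proper motion quoted above.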
NASA Astrophysics Data System (ADS)
Bertaux, Jean-Loup; Khatunstsev, Igor; Hauchecorne, Alain; Markiewicz, Wojtek; Emmanuel, Marcq; Sébastien, Lebonnois; Marina, Patsaeva; Alex, Turin; Anna, Fedorova
2016-04-01
Based on the analysis of UV images (at 365 nm) of the Venus cloud top (altitude 67±2 km) collected with VMC (Venus Monitoring Camera) on board Venus Express (VEX), it is found that the zonal wind speed south of the equator (from 5°S to 15°S) shows a conspicuous variation (from -101 to -83 m/s) with geographic longitude on Venus, correlated with the underlying relief of Aphrodite Terra. We interpret this pattern as the result of stationary gravity waves produced at ground level by the uplift of air when the horizontal wind encounters a mountain slope. These waves can propagate up to the cloud top level, break there, and transfer their momentum to the zonal flow. Such upward propagation of gravity waves and its influence on the vertical profile of wind speed was shown to play an important role in the middle atmosphere of the Earth by Lindzen [1981], but is not reproduced in a current GCM of the Venus atmosphere. Consistent with the present findings, the two VEGA mission balloons experienced a small but significant difference in westward velocity at their 53 km floating altitude. The albedo at 365 nm also varies with longitude and latitude, in a pattern strikingly similar, in the low-latitude regions, to a recent map of cloud-top H2O [Fedorova et al., 2015], in which lower UV albedo is correlated with increased H2O. We argue that the H2O enhancement is a sign of upwelling, suggesting that the UV absorber is also brought to the cloud top by upwelling.
The VMC survey - XXVI. Structure of the Small Magellanic Cloud from RR Lyrae stars
NASA Astrophysics Data System (ADS)
Muraveva, T.; Subramanian, S.; Clementini, G.; Cioni, M.-R. L.; Palmer, M.; van Loon, J. Th.; Moretti, M. I.; de Grijs, R.; Molinaro, R.; Ripepi, V.; Marconi, M.; Emerson, J.; Ivanov, V. D.
2018-01-01
We present results from the analysis of 2997 fundamental-mode RR Lyrae variables located in the Small Magellanic Cloud (SMC). For these objects, near-infrared time-series photometry from the VISTA survey of the Magellanic Clouds system (VMC) and visual light curves from the OGLE IV (Optical Gravitational Lensing Experiment IV) survey are available. In this study, the multi-epoch Ks-band VMC photometry was used for the first time to derive intensity-averaged magnitudes of the SMC RR Lyrae stars. We determined individual distances to the RR Lyrae stars from the near-infrared period-absolute magnitude-metallicity (PM_{K_s}Z) relation, which has some advantages over the visual absolute magnitude-metallicity (MV-[Fe/H]) relation, such as a weaker dependence of the luminosity on interstellar extinction, evolutionary effects and metallicity. The distances we obtained were used to study the three-dimensional structure of the SMC. The distribution of the SMC RR Lyrae stars is found to be ellipsoidal. The actual line-of-sight depth of the SMC is in the range 1-10 kpc, with an average depth of 4.3 ± 1.0 kpc. We found that RR Lyrae stars in the eastern part of the SMC are affected by interactions of the Magellanic Clouds. However, in the distribution of RR Lyrae stars we do not see the clear bimodality observed for red clump stars.
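The distance determination described above can be illustrated with a minimal Python sketch. The PM_KsZ coefficients and the input values below are invented placeholders for illustration, not the survey's actual calibration or data.

```python
def pmz_absolute_mag(log_period, feh, a=-2.53, b=0.17, c=-0.95):
    """Illustrative PM_KsZ relation: M_Ks = a*log10(P) + b*[Fe/H] + c.
    The coefficients are placeholders, not the survey's calibration."""
    return a * log_period + b * feh + c

def distance_modulus(m_ks, a_ks, log_period, feh):
    """Extinction-corrected distance modulus: mu = m_Ks - A_Ks - M_Ks."""
    return m_ks - a_ks - pmz_absolute_mag(log_period, feh)

def distance_kpc(mu):
    """Convert a distance modulus to distance: d = 10^(mu/5 + 1) pc, in kpc."""
    return 10 ** (mu / 5.0 + 1.0) / 1000.0

# One hypothetical RR Lyrae star (all inputs invented for illustration)
mu = distance_modulus(m_ks=18.4, a_ks=0.04, log_period=-0.25, feh=-1.7)
print(mu, distance_kpc(mu))
```

With these placeholder numbers the star lands near the canonical SMC distance modulus of about 19 mag; in the paper, the individual moduli of all 2997 stars are combined to map the line-of-sight depth.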
2013-01-01
Background Robot-aided gait training is an emerging clinical tool for gait rehabilitation of neurological patients. This paper deals with a novel method of offering gait assistance using an impedance-controlled exoskeleton (LOPES). The provided assistance is based on a recent finding that, in the control of walking, different modules can be discerned that are associated with different subtasks. In this study, a Virtual Model Controller (VMC) for supporting one of these subtasks, namely foot clearance, is presented and evaluated. Methods The developed VMC provides virtual support at the ankle to increase foot clearance. To this end, we first developed a new method to derive reference trajectories of the ankle position. These trajectories consist of splines between key events, which depend on walking speed and body height. Subsequently, the VMC was evaluated in twelve healthy subjects and six chronic stroke survivors. The impedance levels of the support were altered between trials to investigate whether the controller allowed gradual and selective support. Additionally, an adaptive algorithm was tested that automatically shaped the amount of support to the subjects' needs. Catch trials were introduced to determine whether the subjects tended to rely on the support. We also assessed the additional value of providing visual feedback. Results With the VMC, the step height could be selectively and gradually influenced. The adaptive algorithm clearly shaped the support level to the specific needs of every stroke survivor. The provided support did not result in reliance on the support in either group. All healthy subjects and most patients were able to utilize the visual feedback to increase their active participation. Conclusion The presented approach provides selective control of one of the essential subtasks of walking. This module is the first in a set of modules to control all subtasks.
This enables the therapist to focus the support on the subtasks that are impaired and leave the other subtasks up to the patient, encouraging them to participate more actively in the training. Additionally, the speed-dependent reference patterns provide the therapist with the tools to easily adapt the treadmill speed to the capabilities and progress of the patient. PMID:23336754
DOT National Transportation Integrated Search
2015-08-01
Cameras are used prolifically to monitor transportation incidents, infrastructure, and congestion. Traditional camera systems often require human monitoring and only offer low-resolution video. Researchers for the Exploratory Advanced Research (EAR) ...
14 CFR 23.1563 - Airspeed placards.
Code of Federal Regulations, 2011 CFR
2011-01-01
... multiengine-powered airplanes of more than 6,000 pounds maximum weight, and turbine engine-powered airplanes, the maximum value of the minimum control speed, VMC (one-engine-inoperative) determined under § 23.149...
14 CFR 23.1563 - Airspeed placards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... multiengine-powered airplanes of more than 6,000 pounds maximum weight, and turbine engine-powered airplanes, the maximum value of the minimum control speed, VMC (one-engine-inoperative) determined under § 23.149...
Phenology cameras observing boreal ecosystems of Finland
NASA Astrophysics Data System (ADS)
Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali
2016-04-01
Cameras have become useful tools for monitoring the seasonality of ecosystems. Low-cost cameras facilitate validation of other measurements and allow key ecological features and moments to be extracted from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, at the level of, and/or below the canopies. The current network hosts cameras taking time-lapse images in coniferous and deciduous forests as well as at open wetlands, thus offering possibilities to monitor various phenological and time-associated events and elements. In this poster, we present our camera network and give examples of the use of image series in research. We will show results on the stability of camera-derived color signals and, based on that, discuss the applicability of cameras for monitoring time-dependent phenomena. We will also present results from comparisons between camera-derived color signal time series and daily satellite-derived time series (NDVI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectroradiometer (MODIS) at selected spruce and pine forests and in a wetland. We will discuss the applicability of cameras in supporting phenological observations derived from satellites, considering the possibility for cameras to monitor both above- and below-canopy phenology and snow.
Differences Between a Single- and a Double-Folding Nucleus-^{9}Be Optical Potential
NASA Astrophysics Data System (ADS)
Bonaccorso, A.; Carstoiu, F.; Charity, R. J.; Kumar, R.; Salvioni, G.
2016-05-01
We have recently constructed two very successful n-^9Be optical potentials (Bonaccorso and Charity in Phys Rev C 89:024619, 2014): one obtained with the Dispersive Optical Model (DOM) method and the other (AB) fully phenomenological. The two potentials have strong surface terms in common for both the real and the imaginary parts. This feature makes them particularly suitable for building a single-folded (light-) nucleus-^9Be optical potential by using ab-initio projectile densities such as those obtained with the VMC method (Wiringa http://www.phy.anl.gov/theory/research/density/). On the other hand, a VMC density together with experimental nucleon-nucleon cross-sections can also be used to obtain a neutron- and/or proton-^9Be imaginary folding potential. Here we use an ab-initio VMC density (Wiringa http://www.phy.anl.gov/theory/research/density/) to obtain both a n-^9Be single-folded potential and a nucleus-nucleus double-folded potential. In this work we report on the cases of ^8B, ^8Li and ^8C projectiles. Our approach could be the basis for a systematic study of optical potentials for light exotic nuclei scattering on such light targets. Some of the projectiles studied are cores of other exotic nuclei for which neutron knockout has been used to extract spectroscopic information. For those cases, our study will serve to make a quantitative assessment of the core-target part of the reaction description, in particular its localization.
New generation of meteorology cameras
NASA Astrophysics Data System (ADS)
Janout, Petr; Blažek, Martin; Páta, Petr
2017-12-01
A new generation of the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) camera includes new features such as monitoring of rain and storm clouds during daytime observation. The development of this new generation of weather-monitoring cameras responds to the demand for monitoring of sudden weather changes. The new WILLIAM cameras are able to process acquired image data immediately, issue warnings of sudden torrential rain, and send them to a user's cell phone and email. Actual weather conditions are determined from the image data, and the results of image processing are complemented by data from sensors of temperature, humidity, and atmospheric pressure. In this paper, we present the architecture and image-processing algorithms of this monitoring camera, together with a spatially variant model of the imaging system's aberrations based on Zernike polynomials.
NASA Astrophysics Data System (ADS)
Budi Harja, Herman; Prakosa, Tri; Raharno, Sri; Yuwana Martawirya, Yatna; Nurhadi, Indra; Setyo Nogroho, Alamsyah
2018-03-01
The production characteristics of the job-shop industry, in which products have wide variety but small volumes, mean that every machine tool is shared among production processes with a dynamic load. This dynamic operating condition directly affects the reliability of machine tool components. Hence, the maintenance schedule for every component should be calculated from the actual usage of that component. This paper describes a study on the development of a monitoring system for obtaining information about the usage of each CNC machine tool component in real time, approached by grouping components according to their operation phase. A special device has been developed for monitoring machine tool component usage by utilizing usage-phase activity data taken from certain electronic components within the CNC machine. These components are the adaptor, servo driver and spindle driver, as well as some additional components such as a microcontroller and relays. The obtained data are utilized for detecting machine utilization phases such as the power-on state, machine-ready state or spindle-running state. Experimental results have shown that the developed CNC machine tool monitoring system is capable of obtaining phase information about machine tool usage, as well as its duration, and displays the information in the user interface application.
Phosphorus Segregation in Meta-Rapidly Solidified Carbon Steels
NASA Astrophysics Data System (ADS)
Li, Na; Qiao, Jun; Zhang, Junwei; Sha, Minghong; Li, Shengli
2017-09-01
Twin-roll strip casters for near-net-shape manufacture of steels have received increased attention in the steel industry. Although negative segregation of phosphorus occurred in twin-roll strip casting (TRSC) steels in our prior work, its mechanism is still unclear. In this work, V-shaped molds were designed and used to simulate a meta-rapid solidification process without roll separating force during twin-roll casting of carbon steels. Experimental results show that no obvious phosphorus segregation exists in the V-shaped mold casting (VMC) steels. By comparing TRSC and VMC, it is proposed that the negative phosphorus segregation during TRSC results from phosphorus redistribution driven by recirculating and vortex flow in the molten pool. Meanwhile, solute atoms near the advancing interface are overtaken and incorporated into the solid because of the high solidification speed. The high rolling force could promote the negative segregation of alloying elements in TRSC.
Color reproduction software for a digital still camera
NASA Astrophysics Data System (ADS)
Lee, Bong S.; Park, Du-Sik; Nam, Byung D.
1998-04-01
We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and color-matching the two devices. The reproduction was performed in three stages: level processing, gamma correction, and color transformation. The image contrast was increased by the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for obtaining the unknown monitor gamma was also proposed. The level-processed values were then adjusted by a look-up table created from the camera and monitor gamma corrections. For the camera's color transformation, a 3 by 3 or 3 by 4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of each test color sample. The various reproduced images, generated according to four illuminants for the camera and three color temperatures for the monitor, were displayed in a dialogue box implemented in our software. A user can easily choose the best reproduced image by comparing them.
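The pipeline described here, a gamma-correction look-up table followed by a 3 by 3 color transformation, can be sketched in a few lines of Python. The gamma value and the matrix entries below are illustrative placeholders, not the calibrated values from the software; a real matrix would come from the regression against measured tristimulus values.

```python
def gamma_correct(value, gamma, max_level=255):
    """Map a device level to a gamma-linearized level."""
    return max_level * (value / max_level) ** gamma

def build_lut(gamma, max_level=255):
    """Look-up table over all input levels, as in the described pipeline."""
    return [gamma_correct(v, gamma, max_level) for v in range(max_level + 1)]

def apply_color_matrix(rgb, m):
    """3x3 color transformation: camera RGB -> monitor RGB (or tristimulus)."""
    return [sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3)]

# Illustrative numbers only: an assumed camera gamma of 2.2 and a
# placeholder matrix (not values from the paper's regression).
lut = build_lut(2.2)
linear_rgb = [lut[200], lut[128], lut[64]]
matrix = [[1.05, -0.03, -0.02],
          [-0.04, 1.10, -0.06],
          [0.00, -0.05, 1.05]]
out = apply_color_matrix(linear_rgb, matrix)
```

The look-up table plays the role of the combined camera/monitor gamma correction, and the matrix multiply is the per-pixel color transformation applied to the gamma-corrected values.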
Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M
2006-02-01
Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. 
Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.
Low, slow, small target recognition based on spatial vision network
NASA Astrophysics Data System (ADS)
Cheng, Zhao; Guo, Pei; Qi, Xin
2018-03-01
Traditional photoelectric monitoring uses a large number of identical cameras. To ensure full coverage of the monitoring area, this approach deploys many cameras, which leads to overlapping coverage and higher costs, and thus to waste. To reduce the monitoring cost and address the difficulty of finding, identifying and tracking low-altitude, slow-speed, small targets, this paper presents a spatial vision network for low-slow-small target recognition. Based on the camera imaging principle and a monitoring model, the spatial vision network is modeled and optimized. Simulation results demonstrate that the proposed method performs well.
Dynamic calibration of pan-tilt-zoom cameras for traffic monitoring.
Song, Kai-Tai; Tai, Jen-Chao
2006-10-01
Pan-tilt-zoom (PTZ) cameras have been widely used in recent years for monitoring and surveillance applications. These cameras provide flexible view selection as well as a wider observation range. This makes them suitable for vision-based traffic monitoring and enforcement systems. To employ PTZ cameras for image measurement applications, one first needs to calibrate the camera to obtain meaningful results. For instance, the accuracy of estimating vehicle speed depends on the accuracy of camera calibration and that of vehicle tracking results. This paper presents a novel calibration method for a PTZ camera overlooking a traffic scene. The proposed approach requires no manual operation to select the positions of special features. It automatically uses a set of parallel lane markings and the lane width to compute the camera parameters, namely, focal length, tilt angle, and pan angle. Image processing procedures have been developed for automatically finding parallel lane markings. Interesting experimental results are presented to validate the robustness and accuracy of the proposed method.
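A simplified sketch of the vanishing-point geometry underlying this kind of calibration is shown below. It assumes a pinhole camera with a known focal length and the principal point at the image center, with image y pointing down; this is a generic illustration of how pan and tilt follow from the vanishing point of the lane markings, not the paper's full method (which also recovers the focal length from the lane width), and the sign conventions may differ from the paper's.

```python
import math

def pan_tilt_from_vanishing_point(u_vp, v_vp, f):
    """Recover (pan, tilt) in radians from the vanishing point (u_vp, v_vp)
    of road-parallel lane markings, in pixels from the principal point,
    given the focal length f in pixels.  For a camera panned by theta and
    tilted by phi, the vanishing point is at u = -f*tan(theta)/cos(phi),
    v = -f*tan(phi) under these conventions."""
    tilt = math.atan2(-v_vp, f)
    pan = math.atan2(-u_vp * math.cos(tilt), f)
    return pan, tilt

# Round-trip check with synthetic values (f, pan, tilt invented here):
# project the road direction to a vanishing point, then recover the angles.
f = 1000.0
pan_true, tilt_true = math.radians(5.0), math.radians(-12.0)
u_vp = -f * math.tan(pan_true) / math.cos(tilt_true)
v_vp = -f * math.tan(tilt_true)
pan, tilt = pan_tilt_from_vanishing_point(u_vp, v_vp, f)
```

In the paper's setting, the vanishing point comes from the automatically detected parallel lane markings, and the known lane width provides the extra constraint needed for the focal length.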
DOE Office of Scientific and Technical Information (OSTI.GOV)
NSTec Environmental Restoration
2006-08-01
This Post-Closure Inspection and Monitoring Report (PCIMR) provides the results of inspections and monitoring for Corrective Action Unit (CAU) 110, Area 3 WMD [Waste Management Division] U-3ax/bl Crater. This PCIMR includes an analysis and summary of the site inspections, repairs and maintenance, meteorological information, and soil moisture monitoring data obtained at CAU 110 for the annual period July 2005 through June 2006. Site inspections of the cover were performed quarterly to identify any significant changes to the site requiring action. The overall condition of the cover, cover vegetation, perimeter fence, and UR warning signs was good. Settling was observed that exceeded the action level as specified in Section VILB.7 of Hazardous Waste Permit Number NEV HW009 (Nevada Division of Environmental Protection, 2000). This permit states that cracks or settling greater than 15 centimeters (6 inches) deep that extend 1.0 meter (m) (3 feet [ft]) or more on the cover will be evaluated and repaired within 60 days of detection. Along the east edge of the cover (repaired previously in August 2003, December 2003, May 2004, and October 2004), an area of settling was observed during the December 2005 inspection to again be above the action level, and required repair. This area and two other areas of settling on the cover that were first observed during the December 2005 inspection were repaired in February 2006. The semiannual subsidence surveys were performed in September 2005 and March 2006. No significant subsidence was observed in the survey data. Monument 5 shows the greatest amount of subsidence (-0.015 m [-0.05 ft] compared to the baseline survey of 2000). This amount is negligible and near the resolution of the survey instruments; it does not indicate that subsidence is occurring on the cover. Soil moisture results obtained to date indicate that the CAU 110 cover is performing as expected.
Time Domain Reflectometry (TDR) data indicated an increase in soil moisture (a 1 to 3% VMC change) at a depth of 1.8 m (6 ft) due to the exceptionally heavy January and February 2005 precipitation events. The moisture profile returned to baseline conditions by October 2005. At 2.4 m (8 ft) below the cover surface, TDR data show that the soil moisture content remained between 10 and 13 percent VMC. Considering the heavy precipitation experienced in this and the previous reporting period, a compliance level will be established when the system reaches a steady state and equilibrium has been established.
Halofuginone alleviates acute viral myocarditis in suckling BALB/c mice by inhibiting TGF-β1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Xiao-Hua; Fu, Jia; Sun, Da-Qing, E-mail: daqingsuncd@163.com
2016-04-29
Viral myocarditis (VMC) is an inflammation of the heart muscle in infants and young adolescents. This study explored the function of halofuginone (HF) in Coxsackievirus B3 (CVB3)-treated suckling mice. HF-treated animals exhibited a higher survival rate, lower heart/body weight, and lower blood sugar concentration than the CVB3 group. HF also reduced the expression of interleukin (IL)-17 and IL-23 and the numbers of Th17 cells. Moreover, HF downregulated pro-inflammatory cytokine levels and increased anti-inflammatory cytokine levels. The expression of transforming growth factor β1 (TGF-β1) and nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) p65/tumor necrosis factor-α (TNF-α) proteins was decreased by HF as well. Finally, the overexpression of TGF-β1 counteracted the protective effect of HF in CVB3-treated suckling mice. In summary, our study suggests that HF increases the survival of CVB3-infected suckling mice and reduces Th17 cells and pro-inflammatory cytokine levels, possibly through downregulation of the TGF-β1-mediated expression of NF-κB p65/TNF-α pathway proteins. These results offer a potential therapeutic strategy for the treatment of VMC. - Highlights: • Halofuginone (HF) increases the survival of suckling BALB/c mice infected with acute CVB3. • HF reduces the expression of Th17 cell markers (IL-17 and IL-23) and the number of CD4+ IL-17+ cells. • Pro-inflammatory cytokine levels associated with myocarditis were reduced by HF in CVB3-treated suckling mice. • HF alleviates VMC via inhibition of the TGF-β1-mediated NF-κB p65/TNF-α pathway.
Miller, Justin Robert; Neumueller, Suzanne; Muere, Clarissa; Olesiak, Samantha; Pan, Lawrence; Hodges, Matthew R.
2013-01-01
A current and major unanswered question is why the highly sensitive central CO2/H+ chemoreceptors do not prevent hypoventilation-induced hypercapnia following carotid body denervation (CBD). Because perturbations involving the carotid bodies affect central neuromodulator and/or neurotransmitter levels within the respiratory network, we tested the hypothesis that after CBD there is an increase in inhibitory and/or a decrease in excitatory neurochemicals within the ventrolateral medullary column (VMC) in awake goats. Microtubules for chronic use were implanted bilaterally in the VMC within or near the pre-Bötzinger Complex (preBötC) through which mock cerebrospinal fluid (mCSF) was dialyzed. Effluent mCSF was collected and analyzed for neurochemical content. The goats hypoventilated (peak +22.3 ± 3.4 mmHg PaCO2) and exhibited a reduced CO2 chemoreflex (nadir, 34.8 ± 7.4% of control ΔV̇E/ΔPaCO2) after CBD with significant but limited recovery over 30 days post-CBD. After CBD, GABA and glycine were above pre-CBD levels (266 ± 29% and 189 ± 25% of pre-CBD; P < 0.05), and glutamine and dopamine were significantly below pre-CBD levels (P < 0.05). Serotonin, substance P, and epinephrine were variable but not significantly (P > 0.05) different from control after CBD. Analyses of brainstem tissues collected 30 days after CBD exhibited 1) a midline raphe-specific reduction (P < 0.05) in the percentage of tryptophan hydroxylase–expressing neurons, and 2) a reduction (P < 0.05) in serotonin transporter density in five medullary respiratory nuclei. We conclude that after CBD, an increase in inhibitory neurotransmitters and a decrease in excitatory neuromodulation within the VMC/preBötC likely contribute to the hypoventilation and attenuated ventilatory CO2 chemoreflex. PMID:23869058
The VMC Survey - XIII. Type II Cepheids in the Large Magellanic Cloud
NASA Astrophysics Data System (ADS)
Ripepi, V.; Moretti, M. I.; Marconi, M.; Clementini, G.; Cioni, M.-R. L.; de Grijs, R.; Emerson, J. P.; Groenewegen, M. A. T.; Ivanov, V. D.; Muraveva, T.; Piatti, A. E.; Subramanian, S.
2015-01-01
The VISTA (Visible and Infrared Survey Telescope for Astronomy) survey of the Magellanic Clouds System (VMC) is collecting deep Ks-band time-series photometry of the pulsating variable stars hosted in the system formed by the two Magellanic Clouds and the Bridge connecting them. In this paper, we have analysed a sample of 130 Large Magellanic Cloud (LMC) Type II Cepheids (T2CEPs) found in tiles with complete or near-complete VMC observations for which identification and optical magnitudes were obtained from the OGLE III (Optical Gravitational Lensing Experiment) survey. We present J and Ks light curves for all 130 pulsators, including 41 BL Her, 62 W Vir (12 pW Vir) and 27 RV Tau variables. We complement our near-infrared photometry with the V magnitudes from the OGLE III survey, allowing us to build a variety of period-luminosity (PL), period-luminosity-colour (PLC) and period-Wesenheit (PW) relationships, including any combination of the V, J, Ks filters and valid for BL Her and W Vir classes. These relationships were calibrated in terms of the LMC distance modulus, while an independent absolute calibration of the PL(Ks) and the PW(Ks, V) was derived on the basis of distances obtained from Hubble Space Telescope parallaxes and Baade-Wesselink technique. When applied to the LMC and to the Galactic globular clusters hosting T2CEPs, these relations seem to show that (1) the two Population II standard candles RR Lyrae and T2CEPs give results in excellent agreement with each other; (2) there is a discrepancy of ˜0.1 mag between Population II standard candles and classical Cepheids when the distances are gauged in a similar way for all the quoted pulsators. However, given the uncertainties, this discrepancy is within the formal 1σ uncertainties.
Ferretti, A; Martignano, A; Simonato, F; Paiusco, M
2014-02-01
The aim of the present work was the validation of the VMC(++) Monte Carlo (MC) engine implemented in Oncentra Masterplan (OMTPS) and used to calculate the dose distribution produced by the electron beams (energy 5-12 MeV) generated by the linear accelerator (linac) Primus (Siemens), shaped by a digital variable applicator (DEVA). The BEAMnrc/DOSXYZnrc (EGSnrc package) MC model of the linac head was used as a benchmark. Commissioning results for both MC codes were evaluated by means of a 1D gamma analysis (2%, 2 mm), calculated with an in-house Matlab (The MathWorks) program, comparing the calculations with the measured profiles. The results of the OMTPS commissioning were good [average gamma index (γ) > 97%]; some mismatches were found with large beams (size ≥ 15 cm). The optimization of the BEAMnrc model required increasing the beam exit window to match the calculated and measured profiles (final average γ > 98%). Then OMTPS dose distribution maps were compared with DOSXYZnrc using a 2D gamma analysis (3%, 3 mm) in three virtual water phantoms: (a) with an air step, (b) with an air insert, and (c) with a bone insert. The OMTPS and EGSnrc dose distributions for the air-water step phantom were in very high agreement (γ ∼ 99%), while for the heterogeneous phantoms there were differences of about 9% in the air insert and about 10-15% in the bone region. This is due to the Masterplan implementation of VMC(++), which reports the dose as "dose to water" instead of "dose to medium". Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
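A generic 1D gamma-index computation of the kind used for the commissioning comparison can be sketched as follows. This is a global-gamma illustration with the dose tolerance taken as a fraction of the maximum reference dose, not the authors' Matlab implementation.

```python
import math

def gamma_index_1d(ref_pos, ref_dose, eval_pos, eval_dose,
                   dose_tol=0.02, dist_tol=2.0):
    """Global 1D gamma index: for each reference point, the minimum over
    evaluated points of sqrt((ddose/dose_tol)^2 + (dx/dist_tol)^2).
    dose_tol is a fraction of the maximum reference dose (2% here);
    dist_tol is the distance-to-agreement in mm (2 mm here)."""
    d_max = max(ref_dose)
    gammas = []
    for xr, dr in zip(ref_pos, ref_dose):
        g = min(math.sqrt(((de - dr) / (dose_tol * d_max)) ** 2
                          + ((xe - xr) / dist_tol) ** 2)
                for xe, de in zip(eval_pos, eval_dose))
        gammas.append(g)
    return gammas

def pass_rate(gammas):
    """Fraction of points with gamma <= 1 (the '2%, 2 mm' criterion)."""
    return sum(g <= 1.0 for g in gammas) / len(gammas)

# Identical measured and calculated profiles pass everywhere.
pos = [0.0, 1.0, 2.0, 3.0]   # mm
dose = [0.2, 0.8, 1.0, 0.5]  # relative dose
print(pass_rate(gamma_index_1d(pos, dose, pos, dose)))
```

An "average γ > 97%" figure in the abstract's sense corresponds to the pass rate returned here, evaluated over the measured profile points.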
NASA Astrophysics Data System (ADS)
Marconi, M.; Molinaro, R.; Ripepi, V.; Cioni, M.-R. L.; Clementini, G.; Moretti, M. I.; Ragosta, F.; de Grijs, R.; Groenewegen, M. A. T.; Ivanov, V. D.
2017-04-01
We present the results of the χ2 minimization model fitting technique applied to optical and near-infrared photometric and radial velocity data for a sample of nine fundamental and three first overtone classical Cepheids in the Small Magellanic Cloud (SMC). The near-infrared photometry (JK filters) was obtained by the European Southern Observatory (ESO) public survey 'VISTA near-infrared Y, J, Ks survey of the Magellanic Clouds system' (VMC). For each pulsator, isoperiodic model sequences have been computed by adopting a non-linear convective hydrodynamical code in order to reproduce the multifilter light and (when available) radial velocity curve amplitudes and morphological details. The inferred individual distances provide an intrinsic mean value for the SMC distance modulus of 19.01 mag and a standard deviation of 0.08 mag, in agreement with the literature. Moreover, the intrinsic masses and luminosities of the best-fitting model show that all these pulsators are brighter than the canonical evolutionary mass-luminosity relation (MLR), suggesting a significant efficiency of core overshooting and/or mass-loss. Assuming that the inferred deviation from the canonical MLR is only due to mass-loss, we derive the expected distribution of percentage mass-loss as a function of both the pulsation period and the canonical stellar mass. Finally, a good agreement is found between the predicted mean radii and current period-radius (PR) relations in the SMC available in the literature. The results of this investigation support the predictive capabilities of the adopted theoretical scenario and pave the way for the application to other extensive data bases at various chemical compositions, including the VMC Large Magellanic Cloud pulsators and Galactic Cepheids with Gaia parallaxes.
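The core of the χ2 minimization fitting technique can be sketched as a grid search over model light curves. The code below is a minimal generic illustration of selecting the best-fitting model from an isoperiodic sequence; it is not the hydrodynamical modelling pipeline itself, and the data are invented.

```python
def chi2(obs, sigma, model):
    """Chi-squared between observed and model light-curve points."""
    return sum(((o - m) / s) ** 2 for o, m, s in zip(obs, model, sigma))

def best_fit(obs, sigma, model_grid):
    """Return (index, chi2) of the minimum-chi2 model in the grid.
    In the paper's application each grid entry would be a model light
    curve computed by the non-linear convective hydrodynamical code."""
    scores = [chi2(obs, sigma, m) for m in model_grid]
    i = min(range(len(scores)), key=scores.__getitem__)
    return i, scores[i]

# Toy example: two observed magnitudes, two candidate model curves.
obs, sigma = [1.0, 2.0], [0.1, 0.1]
grid = [[1.0, 2.0], [1.1, 2.1]]
idx, score = best_fit(obs, sigma, grid)
```

In practice the same χ2 is accumulated over all available filters (and the radial velocity curve, when present), and the parameters of the winning model yield the distance, mass and luminosity quoted in the abstract.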
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chaoli; Li, Chengyuan; De Grijs, Richard
2015-12-20
We use near-infrared observations obtained as part of the Visible and Infrared Survey Telescope for Astronomy (VISTA) Survey of the Magellanic Clouds (VMC), as well as two complementary Hubble Space Telescope (HST) data sets, to study the luminosity and mass functions (MFs) as a function of clustercentric radius of the main-sequence stars in the Galactic globular cluster 47 Tucanae. The HST observations indicate a relative deficit in the numbers of faint stars in the central region of the cluster compared with its periphery, for 18.75 ≤ m_F606W ≤ 20.9 mag (corresponding to a stellar mass range of 0.55 < m_*/M_⊙ < 0.73). The stellar number counts at 6.7 arcmin from the cluster core show a deficit for 17.62 ≤ m_F606W ≤ 19.7 mag (i.e., 0.65 < m_*/M_⊙ < 0.82), which is consistent with expectations from mass segregation. The VMC-based stellar MFs exhibit power-law shapes for masses in the range 0.55 < m_*/M_⊙ < 0.82. These power laws are characterized by an almost constant slope, α. The radial distribution of the power-law slopes α thus shows evidence of the importance of both mass segregation and tidal stripping, for both the first- and second-generation stars in 47 Tuc.
NASA Astrophysics Data System (ADS)
Rubele, Stefano; Pastorelli, Giada; Girardi, Léo; Cioni, Maria-Rosa L.; Zaggia, Simone; Marigo, Paola; Bekki, Kenji; Bressan, Alessandro; Clementini, Gisella; de Grijs, Richard; Emerson, Jim; Groenewegen, Martin A. T.; Ivanov, Valentin D.; Muraveva, Tatiana; Nanni, Ambra; Oliveira, Joana M.; Ripepi, Vincenzo; Sun, Ning-Chen; van Loon, Jacco Th
2018-05-01
We recover the spatially resolved star formation history across the entire main body and Wing of the Small Magellanic Cloud (SMC), using fourteen deep tile images from the VISTA survey of the Magellanic Clouds (VMC), in the YJK_s filters. The analysis is performed on 168 subregions of size 0.143 deg2, covering a total contiguous area of 23.57 deg2. We apply a colour-magnitude diagram (CMD) reconstruction method that returns the best-fitting star formation rate SFR(t), age-metallicity relation, distance and mean reddening, together with their confidence intervals, for each subregion. With respect to previous analyses, we use a far larger set of VMC data, updated stellar models, and fit the two available CMDs (Y-K_s versus K_s and J-K_s versus K_s) independently. The results allow us to derive a more complete and more reliable picture of how the mean distances, extinction values, star formation rate, and metallicities vary across the SMC, and provide a better description of the populations that form its Bar and Wing. We conclude that the SMC has formed a total mass of (5.31 ± 0.05) × 108 M⊙ in stars over its lifetime. About two thirds of this mass is expected to be still locked in stars and stellar remnants. 50 per cent of the mass was formed prior to an age of 6.3 Gyr, and 80 per cent was formed between 8 and 3.5 Gyr ago. We also illustrate the likely distribution of stellar ages and metallicities in different parts of the CMD, to aid the interpretation of data from future astrometric and spectroscopic surveys of the SMC.
Efficient color correction method for smartphone camera-based health monitoring application.
Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong
2017-07-01
Smartphone health monitoring applications have recently been highlighted due to the rapid development of smartphone hardware and software performance. However, the color characteristics of images captured by different smartphone models differ from one another, and this difference may yield inconsistent health monitoring results when such applications extract physiological information using the embedded smartphone cameras. In this paper, we investigate the differences in the color properties of images captured by different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that images corrected with this method exhibit much smaller color intensity errors than uncorrected images. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among images obtained from different smartphones.
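The abstract does not specify the correction algorithm, but a common baseline for this kind of cross-device correction is a 3×3 matrix fitted by least squares between RGB patch values measured by a camera and their reference values. The sketch below (NumPy, with simulated patch data and a made-up channel mix) illustrates the idea, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference RGB values of 24 calibration patches (a color-chart-like target)
reference = rng.uniform(0, 255, size=(24, 3))

# Simulate a phone camera whose response is an unknown channel mix plus noise
true_mix = np.array([[0.90, 0.10, 0.00],
                     [0.05, 0.85, 0.10],
                     [0.00, 0.15, 0.95]])
measured = reference @ true_mix.T + rng.normal(0.0, 1.0, size=reference.shape)

# Fit the 3x3 correction matrix M minimizing ||measured @ M - reference||^2
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
corrected = measured @ M

err_before = np.abs(measured - reference).mean()
err_after = np.abs(corrected - reference).mean()
print(f"mean abs error: {err_before:.1f} -> {err_after:.1f}")
```

The fitted matrix can then be applied to every pixel of any image from that camera to bring it onto the reference color space.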
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azadi, Sam, E-mail: s.azadi@ucl.ac.uk; Cohen, R. E.
We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of −2.3(4) and −2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory through perturbative triplets/complete basis set limit is −2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.
Optical Transient Monitor (OTM) for BOOTES Project
NASA Astrophysics Data System (ADS)
Páta, P.; Bernas, M.; Castro-Tirado, A. J.; Hudec, R.
2003-04-01
The Optical Transient Monitor (OTM) is software for controlling the three wide-field and ultra-wide-field cameras of the BOOTES (Burst Observer and Optical Transient Exploring System) station. The OTM is PC-based and is a powerful tool for taking images from two SBIG CCD cameras simultaneously or from one camera only. The control program for the BOOTES cameras runs under Windows 98 or MS-DOS; a version for Windows 2000 is now in preparation. Five main modes of operation are supported. The OTM program can control the cameras and evaluate image data without human interaction.
NASA Technical Reports Server (NTRS)
Monford, Leo G. (Inventor)
1990-01-01
Improved techniques are provided for alignment of two objects. The present invention is particularly suited for three-dimensional translation and three-dimensional rotational alignment of objects in outer space. A camera 18 is fixedly mounted to one object, such as a remote manipulator arm 10 of the spacecraft, while the planar reflective surface 30 is fixed to the other object, such as a grapple fixture 20. A monitor 50 displays in real-time images from the camera, such that the monitor displays both the reflected image of the camera and visible markings on the planar reflective surface when the objects are in proper alignment. The monitor may thus be viewed by the operator and the arm 10 manipulated so that the reflective surface is perpendicular to the optical axis of the camera, the roll of the reflective surface is at a selected angle with respect to the camera, and the camera is spaced a pre-selected distance from the reflective surface.
Improved docking alignment system
NASA Technical Reports Server (NTRS)
Monford, Leo G. (Inventor)
1988-01-01
Improved techniques are provided for the alignment of two objects. The present invention is particularly suited for 3-D translation and 3-D rotational alignment of objects in outer space. A camera is affixed to one object, such as a remote manipulator arm of the spacecraft, while the planar reflective surface is affixed to the other object, such as a grapple fixture. A monitor displays in real-time images from the camera such that the monitor displays both the reflected image of the camera and visible marking on the planar reflective surface when the objects are in proper alignment. The monitor may thus be viewed by the operator and the arm manipulated so that the reflective surface is perpendicular to the optical axis of the camera, the roll of the reflective surface is at a selected angle with respect to the camera, and the camera is spaced a pre-selected distance from the reflective surface.
Scaling-up camera traps: monitoring the planet's biodiversity with networks of remote sensors
Steenweg, Robin; Hebblewhite, Mark; Kays, Roland; Ahumada, Jorge A.; Fisher, Jason T.; Burton, Cole; Townsend, Susan E.; Carbone, Chris; Rowcliffe, J. Marcus; Whittington, Jesse; Brodie, Jedediah; Royle, Andy; Switalski, Adam; Clevenger, Anthony P.; Heim, Nicole; Rich, Lindsey N.
2017-01-01
Countries committed to implementing the Convention on Biological Diversity's 2011–2020 strategic plan need effective tools to monitor global trends in biodiversity. Remote cameras are a rapidly growing technology that has great potential to transform global monitoring for terrestrial biodiversity and can be an important contributor to the call for measuring Essential Biodiversity Variables. Recent advances in camera technology and methods enable researchers to estimate changes in abundance and distribution for entire communities of animals and to identify global drivers of biodiversity trends. We suggest that interconnected networks of remote cameras will soon monitor biodiversity at a global scale, help answer pressing ecological questions, and guide conservation policy. This global network will require greater collaboration among remote-camera studies and citizen scientists, including standardized metadata, shared protocols, and security measures to protect records about sensitive species. With modest investment in infrastructure, and continued innovation, synthesis, and collaboration, we envision a global network of remote cameras that not only provides real-time biodiversity data but also serves to connect people with nature.
14 CFR 23.1563 - Airspeed placards.
Code of Federal Regulations, 2014 CFR
2014-01-01
... multiengine-powered airplanes of more than 6,000 pounds maximum weight, and turbine engine-powered airplanes, the maximum value of the minimum control speed, VMC (one-engine-inoperative) determined under § 23.149... control and the airspeed indicator has features such as low speed awareness that provide ample warning...
14 CFR 23.1563 - Airspeed placards.
Code of Federal Regulations, 2013 CFR
2013-01-01
... multiengine-powered airplanes of more than 6,000 pounds maximum weight, and turbine engine-powered airplanes, the maximum value of the minimum control speed, VMC (one-engine-inoperative) determined under § 23.149... control and the airspeed indicator has features such as low speed awareness that provide ample warning...
Tsunoda, Koichi; Tsunoda, Atsunobu; Ishimoto, ShinnIchi; Kimura, Satoko
2006-01-01
Dedicated charge-coupled device (CCD) camera systems for endoscopes and electronic fiberscopes are in widespread use. However, both are usually stationary in an office or examination room, and a wheeled cart is needed for mobility. The total costs of a CCD camera system and an electronic fiberscopy system are at least US $10,000 and US $30,000, respectively. Recently, the performance of audio and visual instruments has improved dramatically, with a concomitant reduction in their cost. Commercially available CCD video cameras with small monitors have become common. They provide excellent image quality and are much smaller and less expensive than previous models. The authors have developed adaptors for the popular mini-digital video (mini-DV) camera. The camera also provides video and acoustic output signals; therefore, the endoscopic images can be viewed on a large monitor simultaneously. The new system (a mini-DV video camera and an adaptor) costs only US $1,000. Therefore, the system is both cost-effective and useful in the outpatient clinic or casualty setting, or on house calls for the purpose of patient education. In the future, the authors plan to introduce the clinical application of a high-definition camera and an infrared camera as medical instruments for clinical and research situations.
Movable Cameras And Monitors For Viewing Telemanipulator
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Venema, Steven C.
1993-01-01
Three methods proposed to assist operator viewing telemanipulator on video monitor in control station when video image generated by movable video camera in remote workspace of telemanipulator. Monitors rotated or shifted and/or images in them transformed to adjust coordinate systems of scenes visible to operator according to motions of cameras and/or operator's preferences. Reduces operator's workload and probability of error by obviating need for mental transformations of coordinates during operation. Methods applied in outer space, undersea, in nuclear industry, in surgery, in entertainment, and in manufacturing.
14 CFR 23.1563 - Airspeed placards.
Code of Federal Regulations, 2012 CFR
2012-01-01
... than 6,000 pounds maximum weight, and turbine engine-powered airplanes, the maximum value of the minimum control speed, VMC (one-engine-inoperative) determined under § 23.149(b). [Amdt. 23-7, 34 FR 13097... lighted area such as the landing gear control and the airspeed indicator has features such as low speed...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fragoso, Margarida; Kawrakow, Iwan; Faddegon, Bruce A.
In this work, an investigation of efficiency enhancing methods and cross-section data in the BEAMnrc Monte Carlo (MC) code system is presented. Additionally, BEAMnrc was compared with VMC++, another special-purpose MC code system that has recently been enhanced for the simulation of the entire treatment head. BEAMnrc and VMC++ were used to simulate a 6 MV photon beam from a Siemens Primus linear accelerator (linac), and phase space (PHSP) files were generated at 100 cm source-to-surface distance for the 10×10 and 40×40 cm² field sizes. The BEAMnrc parameters/techniques under investigation were grouped by (i) photon and bremsstrahlung cross sections, (ii) approximate efficiency improving techniques (AEITs), (iii) variance reduction techniques (VRTs), and (iv) a VRT (bremsstrahlung photon splitting) in combination with an AEIT (charged particle range rejection). The BEAMnrc PHSP file obtained without the efficiency enhancing techniques under study or, when not possible, with their default values (e.g., the EXACT algorithm for the boundary crossing algorithm) and with the default cross-section data (PEGS4 and Bethe-Heitler) was used as the "base line" for accuracy verification of the PHSP files generated from the different groups described previously. Subsequently, a selection of the PHSP files was used as input for DOSXYZnrc-based water phantom dose calculations, which were verified against measurements. The performance of the different VRTs and AEITs available in BEAMnrc and of VMC++ was specified by the relative efficiency, i.e., by the efficiency of the MC simulation relative to that of the BEAMnrc base-line calculation.
The highest relative efficiencies were ~935 (~111 min on a single 2.6 GHz processor) and ~200 (~45 min on a single processor) for the 10×10 cm² field size with 50 million histories and the 40×40 cm² field size with 100 million histories, respectively, using the VRT directional bremsstrahlung splitting (DBS) with no electron splitting. When DBS was used with electron splitting and combined with augmented charged particle range rejection, a technique recently introduced in BEAMnrc, relative efficiencies were ~420 (~253 min on a single processor) and ~175 (~58 min on a single processor) for the 10×10 and 40×40 cm² field sizes, respectively. Calculations of the Siemens Primus treatment head with VMC++ produced relative efficiencies of ~1400 (~6 min on a single processor) and ~60 (~4 min on a single processor) for the 10×10 and 40×40 cm² field sizes, respectively. BEAMnrc PHSP calculations with DBS alone, or DBS in combination with charged particle range rejection, were more efficient than the other efficiency enhancing techniques used. Using VMC++, accurate simulations of the entire linac treatment head were performed within minutes on a single processor. Noteworthy differences (±1%-3%) in the mean energy, planar fluence, and angular and spectral distributions were observed with the NIST bremsstrahlung cross sections compared with those of Bethe-Heitler (the BEAMnrc default bremsstrahlung cross section). However, MC calculated dose distributions in water phantoms (using combinations of VRTs/AEITs and cross-section data) agreed within 2% of measurements. Furthermore, MC calculated dose distributions in a simulated water/air/water phantom, using NIST cross sections, were within 2% agreement with the BEAMnrc Bethe-Heitler default case.
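The relative efficiency figures quoted above follow the standard Monte Carlo definition ε = 1/(s²T), where s² is the variance of the quantity of interest and T the CPU time. A toy sketch with made-up numbers (not values from the study):

```python
def mc_efficiency(variance, cpu_time):
    # Monte Carlo efficiency: eps = 1 / (s^2 * T)
    return 1.0 / (variance * cpu_time)

def relative_efficiency(var_new, t_new, var_base, t_base):
    # Efficiency of a technique relative to the baseline simulation
    return mc_efficiency(var_new, t_new) / mc_efficiency(var_base, t_base)

# Made-up numbers: same variance, 10x less CPU time -> relative efficiency 10
print(relative_efficiency(1.0, 10.0, 1.0, 100.0))  # 10.0
```

Because ε folds run time and statistical precision into one number, a variance reduction technique only "wins" if any extra per-history cost is outweighed by the variance it removes.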
Comparative analysis of three different methods for monitoring the use of green bridges by wildlife.
Gužvica, Goran; Bošnjak, Ivana; Bielen, Ana; Babić, Danijel; Radanović-Gužvica, Biserka; Šver, Lidija
2014-01-01
Green bridges are used to mitigate the highly negative impact of roads and highways on wildlife populations, and their effectiveness is evaluated by various monitoring methods. Based on 3-year monitoring of four Croatian green bridges, we compared the effectiveness of three indirect monitoring methods: track-pads, camera traps and an active infrared (IR) trail monitoring system. The ability of the methods to detect different species and to give a good estimate of the number of animal crossings was analyzed. The accuracy of species detection by the track-pad method was influenced by the granulometric composition of the track-pad material, with the best results obtained with a higher percentage of silt and clay. We compared the species composition determined by the track-pad and camera trap methods and found that monitoring by tracks underestimated the ratio of small canids, while camera traps underestimated the ratio of roe deer. Regarding the total number of recorded events, active IR detectors recorded 11 to 19 times more events than camera traps, and approximately 80% of these were not caused by animal crossings. The camera trap method underestimated the real number of total events. Therefore, an algorithm for filtering the IR dataset was developed to approximate the real number of crossings. The results presented are valuable for future monitoring of wildlife crossings in Croatia and elsewhere, since the advantages and disadvantages of the monitoring methods used are shown. In conclusion, different methods should be chosen or combined depending on the aims of the particular monitoring study.
Sopori, Bhushan; Rupnowski, Przemyslaw; Ulsh, Michael
2016-01-12
A monitoring system 100 comprising a material transport system 104 providing for the transportation of a substantially planar material 102, 107 through the monitoring zone 103 of the monitoring system 100. The system 100 also includes a line camera 106 positioned to obtain multiple line images across a width of the material 102, 107 as it is transported through the monitoring zone 103. The system 100 further includes an illumination source 108 providing for the illumination of the material 102, 107 transported through the monitoring zone 103 such that light reflected in a direction normal to the substantially planar surface of the material 102, 107 is detected by the line camera 106. A data processing system 110 is also provided in digital communication with the line camera 106. The data processing system 110 is configured to receive data output from the line camera 106 and further configured to calculate and provide substantially contemporaneous information relating to a quality parameter of the material 102, 107. Also disclosed are methods of monitoring a quality parameter of a material.
Yang, Hualei; Yang, Xi; Heskel, Mary; Sun, Shucun; Tang, Jianwu
2017-04-28
Changes in plant phenology affect the carbon flux of terrestrial forest ecosystems due to the link between the growing season length and vegetation productivity. Digital camera imagery, which can be acquired frequently, has been used to monitor seasonal and annual changes in forest canopy phenology and track critical phenological events. However, quantitative assessment of the structural and biochemical controls of the phenological patterns in camera images has rarely been done. In this study, we used an NDVI (Normalized Difference Vegetation Index) camera to monitor daily variations of vegetation reflectance at visible and near-infrared (NIR) bands with high spatial and temporal resolutions, and found that the infrared camera based NDVI (camera-NDVI) agreed well with the leaf expansion process that was measured by independent manual observations at Harvard Forest, Massachusetts, USA. We also measured the seasonality of canopy structural (leaf area index, LAI) and biochemical properties (leaf chlorophyll and nitrogen content). We found significant linear relationships between camera-NDVI and leaf chlorophyll concentration, and between camera-NDVI and leaf nitrogen content, though weaker relationships between camera-NDVI and LAI. Therefore, we recommend ground-based camera-NDVI as a powerful tool for long-term, near surface observations to monitor canopy development and to estimate leaf chlorophyll, nitrogen status, and LAI.
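The camera-NDVI itself is the usual normalized difference, NDVI = (NIR − VIS)/(NIR + VIS), computed per pixel from co-registered frames. A minimal sketch with toy reflectance values (not data from the study):

```python
import numpy as np

def camera_ndvi(nir, vis):
    # Per-pixel NDVI = (NIR - VIS) / (NIR + VIS) from co-registered frames;
    # the denominator is clipped to avoid division by zero on dark pixels.
    nir = np.asarray(nir, dtype=float)
    vis = np.asarray(vis, dtype=float)
    return (nir - vis) / np.clip(nir + vis, 1e-9, None)

# Toy reflectances: vegetation is bright in NIR, dark in the visible band
nir = np.array([[0.50, 0.45],    # two vegetated pixels
                [0.10, 0.08]])   # two bare/soil-like pixels
vis = np.array([[0.08, 0.10],
                [0.09, 0.07]])
ndvi = camera_ndvi(nir, vis)
print(ndvi)
```

Healthy canopy pixels land near +0.7 or above, while bare surfaces sit near zero, which is what lets a fixed camera track leaf expansion through the season.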
Feasibility study of a gamma camera for monitoring nuclear materials in the PRIDE facility
NASA Astrophysics Data System (ADS)
Jo, Woo Jin; Kim, Hyun-Il; An, Su Jung; Lee, Chae Young; Song, Han-Kyeol; Chung, Yong Hyun; Shin, Hee-Sung; Ahn, Seong-Kyu; Park, Se-Hwan
2014-05-01
The Korea Atomic Energy Research Institute (KAERI) has been developing pyroprocessing technology, in which actinides are recovered together with plutonium. There is no pure plutonium stream in the process, so it has the advantage of proliferation resistance. Tracking and monitoring of nuclear materials through the pyroprocess can significantly improve the transparency of the operation and safeguards. An inactive engineering-scale integrated pyroprocess facility, the PyRoprocess Integrated inactive DEmonstration (PRIDE) facility, was constructed to demonstrate engineering-scale processes and the integration of each unit process. The PRIDE facility may be a good test bed to investigate the feasibility of a nuclear material monitoring system. In this study, we designed a gamma camera system for nuclear material monitoring in the PRIDE facility by using a Monte Carlo simulation, and we validated its feasibility. Two scenarios, corresponding to different locations of the gamma camera, were simulated using GATE (GEANT4 Application for Tomographic Emission) version 6. A prototype gamma camera with a diverging-slat collimator was developed, and the simulated and experimental results agreed well with each other. These results indicate that a gamma camera to monitor nuclear material in the PRIDE facility can be developed.
Cost effective system for monitoring of fish migration with a camera
NASA Astrophysics Data System (ADS)
Sečnik, Matej; Brilly, Mitja; Vidmar, Andrej
2016-04-01
Within the European LIFE project Ljubljanica connects (LIFE10 NAT/SI/000142) we have developed a cost-effective solution for monitoring fish migration through fish passes with an underwater camera. In the fish pass at Ambrožev trg and in the fish pass near the Fužine castle we installed a video camera called "Fishcam" to monitor the migration of fish through the fish passes and the success of their reconstruction. A live stream from the fishcams installed in the fish passes is available on our project website (http://ksh.fgg.uni-lj.si/ljubljanicaconnects/ang/12_camera). The fish monitoring system consists of two parts: a waterproof box holding the computer and charger, and the camera itself. We used a highly sensitive Sony analogue camera. The advantage of this camera is its very good sensitivity in low-light conditions, so it can take good-quality pictures even at night with minimal additional lighting. For night recording we use an additional IR reflector to illuminate passing fish. The camera is connected to an 8-inch tablet PC. We decided to use a tablet PC because it is small, inexpensive, relatively fast, and has low power consumption. On the computer we run software with advanced motion-detection capabilities, so we can detect even small fish. When a fish is detected by the software, its photograph is automatically saved to the local hard drive and, as a backup, to Google Drive. The system for monitoring fish migration has turned out to work very well. From the beginning of monitoring in June 2015 to the end of the year, more than 100,000 photographs were produced. A first analysis of them has already been prepared, estimating the fish species and the frequency with which they pass through the fish pass.
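The abstract does not name the detection software, but frame-differencing motion detection of the kind described can be sketched as follows (NumPy; both thresholds and the synthetic "fish" are illustrative assumptions, and real fish-pass footage would need tuning):

```python
import numpy as np

def detect_motion(prev_frame, frame, diff_thresh=25, pixel_frac=0.002):
    # Flag a frame when the fraction of changed pixels exceeds pixel_frac.
    # Both thresholds are illustrative, not values from the deployed system.
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return (diff > diff_thresh).mean() > pixel_frac

rng = np.random.default_rng(1)
background = rng.integers(100, 110, size=(120, 160)).astype(np.uint8)
with_fish = background.copy()
with_fish[50:60, 70:100] = 200   # a synthetic bright "fish" crossing the frame

print(detect_motion(background, background), detect_motion(background, with_fish))
```

In practice the reference frame is a running average of recent frames, so slow lighting changes do not trigger saves while a passing fish does.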
14 CFR 23.149 - Minimum control speed.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Minimum control speed. 23.149 Section 23... Maneuverability § 23.149 Minimum control speed. (a) VMC is the calibrated airspeed at which, when the critical engine is suddenly made inoperative, it is possible to maintain control of the airplane with that engine...
Partnerships and volunteers in the U.S. Forest Service
James D. Absher
2009-01-01
The U.S. Forest Service often relies on volunteers and partnerships to help accomplish agency goals, particularly in its recreation and heritage programs. Data from agency records and a staff survey suggest that volunteer involvement is a developing area. Ongoing efforts to improve the agency's volunteer management capacity (VMC) would benefit from more attention...
14 CFR 23.149 - Minimum control speed.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Minimum control speed. 23.149 Section 23... Maneuverability § 23.149 Minimum control speed. (a) VMC is the calibrated airspeed at which, when the critical... still inoperative, and thereafter maintain straight flight at the same speed with an angle of bank of...
14 CFR 23.149 - Minimum control speed.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Minimum control speed. 23.149 Section 23... Maneuverability § 23.149 Minimum control speed. (a) VMC is the calibrated airspeed at which, when the critical... still inoperative, and thereafter maintain straight flight at the same speed with an angle of bank of...
Beats: Video Monitors and Cameras.
ERIC Educational Resources Information Center
Worth, Frazier
1996-01-01
Presents a method to teach the concept of beats as a generalized phenomenon rather than teaching it only in the context of sound. Involves using a video camera to film a computer terminal, 16-mm projector, or TV monitor. (JRH)
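The underlying beat phenomenon is the trigonometric identity sin a + sin b = 2 sin((a+b)/2) cos((a−b)/2): two equal-amplitude oscillations at f1 and f2 combine into a carrier modulated at the beat frequency |f1 − f2|. A quick numerical check with illustrative monitor/camera-style rates:

```python
import numpy as np

f1, f2 = 60.0, 58.0                      # Hz; illustrative rates only
t = np.linspace(0.0, 1.0, 5000)
direct = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)
# sin a + sin b = 2 sin((a+b)/2) cos((a-b)/2): fast carrier, slow envelope
carrier_env = 2.0*np.sin(2*np.pi*(f1+f2)/2*t) * np.cos(2*np.pi*(f1-f2)/2*t)
beat_frequency = abs(f1 - f2)            # perceived intensity beats, 2 Hz

print(np.allclose(direct, carrier_env), beat_frequency)
```

The same arithmetic explains the rolling bars seen when a camera films a monitor: the frame rate and the refresh rate play the roles of the two frequencies.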
Chemical accuracy from quantum Monte Carlo for the benzene dimer.
Azadi, Sam; Cohen, R E
2015-09-14
We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate of the coupled-cluster theory through perturbative triplets/complete basis set limit is -2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.
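As an illustration of the VMC idea (far simpler than the Jastrow/backflow trial wave functions used in the study), here is a toy Metropolis VMC for the hydrogen atom. Its trial function ψ = exp(−αr) with α = 1 reproduces the exact ground-state energy of −0.5 hartree, which makes the sketch easy to check.

```python
import numpy as np

def vmc_hydrogen(alpha=1.0, n_steps=20_000, step=0.5, seed=0):
    # Metropolis VMC for hydrogen (atomic units), trial psi = exp(-alpha * r)
    rng = np.random.default_rng(seed)
    r = np.array([1.0, 0.0, 0.0])
    energies = []
    for _ in range(n_steps):
        trial = r + rng.uniform(-step, step, 3)
        # Accept with probability |psi(trial)/psi(r)|^2
        if rng.random() < np.exp(-2.0 * alpha *
                                 (np.linalg.norm(trial) - np.linalg.norm(r))):
            r = trial
        # Local energy for psi = exp(-alpha r): -alpha^2/2 + (alpha - 1)/r
        energies.append(-0.5 * alpha**2 + (alpha - 1.0) / np.linalg.norm(r))
    return float(np.mean(energies[n_steps // 5:]))   # drop burn-in

print(vmc_hydrogen(alpha=1.0))  # alpha = 1 is exact: local energy is -0.5 everywhere
```

For any other α the sampled energy lies above −0.5, which is the variational principle that wave-function optimization in real VMC codes exploits.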
NASA Astrophysics Data System (ADS)
Sun, Ning-Chen; de Grijs, Richard; Cioni, Maria-Rosa L.; Rubele, Stefano; Subramanian, Smitha; van Loon, Jacco Th.; Bekki, Kenji; Bell, Cameron P. M.; Ivanov, Valentin D.; Marconi, Marcella; Muraveva, Tatiana; Oliveira, Joana M.; Ripepi, Vincenzo
2018-05-01
In this paper we report a clustering analysis of upper main-sequence stars in the Small Magellanic Cloud, using data from the VMC survey (the VISTA near-infrared YJK s survey of the Magellanic system). Young stellar structures are identified as surface overdensities on a range of significance levels. They are found to be organized in a hierarchical pattern, such that larger structures at lower significance levels contain smaller ones at higher significance levels. They have very irregular morphologies, with a perimeter–area dimension of 1.44 ± 0.02 for their projected boundaries. They have a power-law mass–size relation, power-law size/mass distributions, and a log-normal surface density distribution. We derive a projected fractal dimension of 1.48 ± 0.03 from the mass–size relation, or of 1.4 ± 0.1 from the size distribution, reflecting significant lumpiness of the young stellar structures. These properties are remarkably similar to those of a turbulent interstellar medium, supporting a scenario of hierarchical star formation regulated by supersonic turbulence.
Hunt, J G; Watchman, C J; Bolch, W E
2007-01-01
Absorbed fraction (AF) calculations to the human skeletal tissues due to alpha particles are of interest to the internal dosimetry of occupationally exposed workers and members of the public. The transport of alpha particles through the skeletal tissue is complicated by the detailed and complex microscopic histology of the skeleton. In this study, both Monte Carlo and chord-based techniques were applied to the transport of alpha particles through 3-D microCT images of the skeletal microstructure of trabecular spongiosa. The Monte Carlo program used was 'Visual Monte Carlo--VMC'. VMC simulates the emission of the alpha particles and their subsequent energy deposition track. The second method applied to alpha transport is the chord-based technique, which randomly generates chord lengths across bone trabeculae and the marrow cavities via alternate and uniform sampling of their cumulative density functions. This paper compares the AF of energy to two radiosensitive skeletal tissues, active marrow and shallow active marrow, obtained with these two techniques.
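The chord-based technique samples path lengths by inverting a cumulative chord-length distribution. As a hypothetical, analytically checkable stand-in for measured trabecular and marrow-cavity chord distributions, the sketch below samples interior-randomness chords of a sphere, whose mean is the Cauchy value 4V/S = 4R/3:

```python
import numpy as np

def sample_sphere_chords(radius, n, seed=0):
    # Interior (mu-)randomness chords of a sphere: p(l) = l / (2 R^2) on [0, 2R],
    # so the CDF is l^2 / (4 R^2) and the inverse CDF is l = 2 R sqrt(u).
    # A real skeletal code would invert measured chord distributions instead.
    rng = np.random.default_rng(seed)
    return 2.0 * radius * np.sqrt(rng.random(n))

chords = sample_sphere_chords(radius=1.0, n=100_000)
print(chords.mean())  # Cauchy mean chord 4V/S = 4R/3, i.e. about 1.333
```

Alternating bone and marrow chords sampled this way build up an alpha-particle track one traversal at a time, which is what the chord-based AF calculation compares against full Monte Carlo transport through the microCT images.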
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1991-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Composite video and graphics display for camera viewing systems in robotics and teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1993-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
NASA Technical Reports Server (NTRS)
Bartolone, Anthony P.; Glabb, Louis J.; Hughes, Monica F.; Parrish, Russell V.
2005-01-01
Synthetic Vision Systems (SVS) displays provide pilots with a continuous view of terrain combined with integrated guidance symbology in an effort to increase situation awareness (SA) and decrease workload during operations in Instrument Meteorological Conditions (IMC). It is hypothesized that SVS displays can replicate the safety and operational flexibility of flight in Visual Meteorological Conditions (VMC), regardless of actual out-the-window (OTW) visibility or time of day. Significant progress has been made towards evolving SVS displays as well as demonstrating their ability to increase SA compared to conventional avionics in a variety of conditions. While a substantial amount of data has been accumulated demonstrating the capabilities of SVS displays, the degree to which SVS can replicate the safety and operational flexibility of VMC flight performance in all visibility conditions is unknown. In order to more fully quantify the relationship of flight operations in IMC with SVS displays to conventional operations conducted in VMC, a fundamental comparison to current-day general aviation (GA) flight instruments was warranted. Such a comparison could begin to establish the extent to which SVS display concepts are capable of maintaining an "equivalent level of safety" with the round dials they could one day replace, for both current and future operations. A combination of subjective and objective data measures was used to quantify the relationship between selected components of safety associated with flying an approach. Four information display methods, ranging from a "round dials" baseline to a fully integrated SVS package that includes terrain, pathway-based guidance, and a strategic navigation display, were investigated in this high-fidelity simulation experiment. In addition, a broad spectrum of pilots, representative of the GA population, was employed for testing in an attempt to enable greater application of the results and to determine whether an "equivalent level of safety" is achievable through the incorporation of SVS technology regardless of a pilot's flight experience.
Hansen, Bethany K; Fultz, Amy L; Hopper, Lydia M; Ross, Stephen R
2018-05-01
Video cameras are increasingly being used to monitor captive animals in zoo, laboratory, and agricultural settings. This technology may also be useful in sanctuaries with large and/or complex enclosures. However, the cost of camera equipment and a lack of formal evaluations regarding the use of cameras in sanctuary settings make it challenging for facilities to decide whether and how to implement this technology. To address this, we evaluated the feasibility of using a video camera system to monitor chimpanzees at Chimp Haven. We viewed a group of resident chimpanzees in a large forested enclosure and compared observations collected in person and with remote video cameras. We found that via camera, the observer viewed fewer chimpanzees in some outdoor locations (GLMM post hoc test: est. = 1.4503, SE = 0.1457, Z = 9.951, p < 0.001) and identified a lower proportion of chimpanzees (GLMM post hoc test: est. = -2.17914, SE = 0.08490, Z = -25.666, p < 0.001) compared to in-person observations. However, the observer could view the 2 ha enclosure 15 times faster by camera compared to in person. In addition to these results, we provide recommendations to animal facilities considering the installation of a video camera system. Despite some limitations of remote monitoring, we posit that there are substantial benefits of using camera systems in sanctuaries to facilitate animal care and observational research. © 2018 Wiley Periodicals, Inc.
Continuous monitoring of Hawaiian volcanoes with thermal cameras
Patrick, Matthew R.; Orr, Tim R.; Antolik, Loren; Lee, Robert Lopaka; Kamibayashi, Kevan P.
2014-01-01
Continuously operating thermal cameras are becoming more common around the world for volcano monitoring, and offer distinct advantages over conventional visual webcams for observing volcanic activity. Thermal cameras can sometimes “see” through volcanic fume that obscures views to visual webcams and the naked eye, and often provide a much clearer view of the extent of high temperature areas and activity levels. We describe a thermal camera network recently installed by the Hawaiian Volcano Observatory to monitor Kīlauea’s summit and east rift zone eruptions (at Halema‘uma‘u and Pu‘u ‘Ō‘ō craters, respectively) and to keep watch on Mauna Loa’s summit caldera. The cameras are long-wave, temperature-calibrated models protected in custom enclosures, and often positioned on crater rims close to active vents. Images are transmitted back to the observatory in real-time, and numerous Matlab scripts manage the data and provide automated analyses and alarms. The cameras have greatly improved HVO’s observations of surface eruptive activity, which includes highly dynamic lava lake activity at Halema‘uma‘u, major disruptions to Pu‘u ‘Ō‘ō crater and several fissure eruptions.
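An automated alarm of the kind the observatory's scripts provide can be as simple as flagging frames in which enough pixels exceed a temperature threshold; a hedged sketch (the threshold values are illustrative defaults, not HVO's settings):

```python
import numpy as np

def thermal_alarm(frame_c, hot_threshold_c=300.0, min_hot_pixels=50):
    """Return True if a calibrated thermal frame (deg C) contains at
    least `min_hot_pixels` pixels at or above `hot_threshold_c` --
    e.g. new lava appearing in a crater view."""
    hot = np.asarray(frame_c, dtype=float) >= hot_threshold_c
    return bool(hot.sum() >= min_hot_pixels)
```

In practice such a check would run on every incoming frame and raise an alert only on a transition from cold to hot, to avoid repeated alarms during sustained activity.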
García-Salgado, Gonzalo; Rebollo, Salvador; Pérez-Camacho, Lorenzo; Martínez-Hesterkamp, Sara; Navarro, Alberto; Fernández-Pereira, José-Manuel
2015-01-01
Diet studies present numerous methodological challenges. We evaluated the usefulness of commercially available trail-cameras for analyzing the diet of Northern Goshawks (Accipiter gentilis) as a model for nesting raptors during the period 2007–2011. We compared diet estimates obtained by direct camera monitoring of 80 nests with four indirect analyses of prey remains collected from the nests and surroundings (pellets, bones, feather-and-hair remains, and feather-hair-and-bone remains combined). In addition, we evaluated the performance of the trail-cameras and whether camera monitoring affected Goshawk behavior. The sensitivity of each diet-analysis method depended on prey size and taxonomic group, with no method providing unbiased estimates for all prey sizes and types. The cameras registered the greatest number of prey items and were probably the least biased method for estimating diet composition. Nevertheless this direct method yielded the largest proportion of prey unidentified to species level, and it underestimated small prey. Our trail-camera system was able to operate without maintenance for longer periods than what has been reported in previous studies with other types of cameras. Initially Goshawks showed distrust toward the cameras but they usually became habituated to its presence within 1–2 days. The habituation period was shorter for breeding pairs that had previous experience with cameras. Using trail-cameras to monitor prey provisioning to nests is an effective tool for studying the diet of nesting raptors. However, the technique is limited by technical failures and difficulties in identifying certain prey types. Our study also shows that cameras can alter adult Goshawk behavior, an aspect that must be controlled to minimize potential negative impacts. PMID:25992956
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1989-01-01
A method and apparatus is developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. Static and dynamic depth distortion and depth resolution tradeoff is provided. Cameras obtaining the images for a stereo view are converged at a convergence point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of zoom lenses for the cameras are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning a stereo view-collecting camera system about a circle which passes through the convergence point and the camera's first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.
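The scaling claim (double the collection distance, camera separation, and focal length) can be checked with a simplified parallel-axis pinhole model; this sketch is ours and omits the patent's converged-camera terms:

```python
def image_scale(f, z):
    """Lateral magnification of an object at distance z (pinhole model)."""
    return f / z

def depth_sensitivity(f, b, z):
    """|d(disparity)/d(depth)| for a parallel stereo pair: disparity is
    f*b/z, so the sensitivity is f*b/z**2. A larger value means finer
    depth resolution."""
    return f * b / z ** 2

# Doubling focal length, baseline, and distance together preserves both
# image size and depth resolution:
f, b, z = 0.05, 0.10, 2.0
assert image_scale(2 * f, 2 * z) == image_scale(f, z)
assert depth_sensitivity(2 * f, 2 * b, 2 * z) == depth_sensitivity(f, b, z)
```

The distortion reduction itself comes from the converged geometry (not modeled here), where larger distances flatten the curvature of constant-disparity surfaces.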
NASA Astrophysics Data System (ADS)
Watanabe, Takara; Enomoto, Ryoji; Muraishi, Hiroshi; Katagiri, Hideaki; Kagaya, Mika; Fukushi, Masahiro; Kano, Daisuke; Satoh, Wataru; Takeda, Tohoru; Tanaka, Manobu M.; Tanaka, Souichi; Uchida, Tomohisa; Wada, Kiyoto; Wakamatsu, Ryo
2018-02-01
We have developed an omnidirectional gamma-ray imaging Compton camera for environmental monitoring at low levels of radiation. The camera consisted of only six 3.5 cm CsI(Tl) scintillator cubes, each of which was read out by super-bialkali photomultiplier tubes (PMTs). Our camera enables the visualization of the position of gamma-ray sources in all directions (∼4π sr) over a wide energy range between 300 and 1400 keV. The angular resolution (σ) was found to be ∼11°, which was realized using an image-sharpening technique. A high detection efficiency of 18 cps/(µSv/h) for 511 keV (1.6 cps/MBq at 1 m) was achieved, indicating the capability of this camera to visualize hotspots in areas with low-radiation-level contamination from the order of µSv/h down to natural background levels. Our proposed technique can be easily used as a low-radiation-level imaging monitor in radiation control areas, such as medical and accelerator facilities.
A novel camera localization system for extending three-dimensional digital image correlation
NASA Astrophysics Data System (ADS)
Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher
2018-03-01
The monitoring of civil, mechanical, and aerospace structures is important especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements achieved in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision systems' extrinsic and intrinsic parameters. It means that the position of the cameras relative to each other (i.e. separation distance, cameras angle, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between the cameras. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitor large-sized structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability in determining the cameras position in space for performing accurate 3D-DIC calibration and measurements.
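The idea of recovering stereo extrinsics from IMU orientations plus a radar-ranged separation can be sketched as follows; this is our simplified planar construction (level cameras, yaw-only relative rotation), not the authors' algorithm:

```python
import math

def stereo_extrinsics(yaw_left_deg, yaw_right_deg, baseline_m):
    """Relative rotation (yaw-only, about the vertical axis) and
    translation between two level cameras, from IMU yaw readings and a
    radar-measured separation. Returns (R, t) with R a 3x3 row-major
    list and the baseline t placed along the x axis."""
    dyaw = math.radians(yaw_right_deg - yaw_left_deg)
    c, s = math.cos(dyaw), math.sin(dyaw)
    R = [[c, 0.0, s],
         [0.0, 1.0, 0.0],
         [-s, 0.0, c]]
    t = [baseline_m, 0.0, 0.0]
    return R, t
```

A full 3D version would compose roll-pitch-yaw rotations from both IMUs; the point is that (R, t) is exactly the extrinsic information a rigid calibration bar normally fixes.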
DOT National Transportation Integrated Search
2014-10-01
The goal of this project is to monitor traffic flow continuously with an innovative camera system composed of a custom-designed image sensor integrated circuit (IC) containing a trapezoid pixel array and a camera system that is capable of intelligent...
Šver, Lidija; Bielen, Ana; Križan, Josip; Gužvica, Goran
2016-01-01
The conservation of gray wolf (Canis lupus) and its coexistence with humans presents a challenge and requires continuous monitoring and management efforts. One of the non-invasive methods that produces high-quality wolf monitoring datasets is camera trapping. We present a novel monitoring approach where camera traps are positioned on wildlife crossing structures that channel the animals, thereby increasing trapping success and increasing the cost-efficiency of the method. In this way we have followed abundance trends of five wolf packs whose home ranges are intersected by a motorway which spans throughout the wolf distribution range in Croatia. During the five-year monitoring of six green bridges we have recorded 28 250 camera-events, 132 with wolves. Four viaducts were monitored for two years, recording 4914 camera-events, 185 with wolves. We have detected a negative abundance trend of the monitored Croatian wolf packs since 2011, especially severe in the northern part of the study area. Further, we have pinpointed the legal cull as probable major negative influence on the wolf pack abundance trends (linear regression, r2 > 0.75, P < 0.05). Using the same approach we did not find evidence for a negative impact of wolves on the prey populations, both wild ungulates and livestock. We encourage strict protection of wolf in Croatia until there is more data proving population stability. In conclusion, quantitative methods, such as the one presented here, should be used as much as possible when assessing wolf abundance trends. PMID:27327498
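The trend test reported above ("linear regression, r2 > 0.75, P < 0.05") amounts to fitting counts against year and checking the coefficient of determination; a generic sketch with made-up numbers (not the paper's data):

```python
def linear_trend(xs, ys):
    """Ordinary least-squares slope, intercept, and r^2 for paired data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

# A steadily declining pack count gives a negative slope and r^2 = 1:
slope, _, r2 = linear_trend([2011, 2012, 2013, 2014], [10, 8, 6, 4])
```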
Can camera traps monitor Komodo dragons, a large ectothermic predator?
Ariefiandy, Achmad; Purwandana, Deni; Seno, Aganto; Ciofi, Claudio; Jessop, Tim S
2013-01-01
Camera trapping has greatly enhanced population monitoring of often cryptic and low-abundance apex carnivores. Effectiveness of passive infrared camera trapping, and ultimately population monitoring, relies on temperature-mediated differences between the animal and its ambient environment to ensure good camera detection. In ectothermic predators such as large varanid lizards, this criterion is presumed less certain. Here we evaluated the effectiveness of camera trapping to potentially monitor the population status of the Komodo dragon (Varanus komodoensis), an apex predator, using site occupancy approaches. We compared site-specific estimates of site occupancy and detection derived using camera traps and cage traps at 181 trapping locations established across six sites on four islands within Komodo National Park, Eastern Indonesia. Detection and site occupancy at each site were estimated using eight competing models that considered site-specific variation in occupancy (ψ) and varied detection probabilities (p) according to detection method, site and survey number using a single-season site occupancy modelling approach. The most parsimonious model [ψ (site), p (site × survey); ω = 0.74] suggested that site occupancy estimates differed among sites. Detection probability varied as an interaction between site and survey number. Our results indicate that overall camera traps produced similar estimates of detection and site occupancy to cage traps, irrespective of being paired, or unpaired, with cage traps. Whilst one site showed some evidence that detection was affected by trapping method, detection was too low to produce an accurate occupancy estimate. Overall, as camera trapping is logistically more feasible, it may provide, with further validation, an alternative method for evaluating long-term site occupancy patterns in Komodo dragons, and potentially other large reptiles, aiding conservation of this species.
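The single-season occupancy model referenced above (after MacKenzie et al. 2002) combines an occupancy probability ψ with survey-level detection probabilities p; a minimal sketch of the per-site likelihood, using a constant p as a simplification of the paper's site × survey structure:

```python
def site_likelihood(history, psi, p):
    """Likelihood of one site's detection history (list of 0/1 over K
    surveys) under a single-season occupancy model with constant
    detection probability p.

    If the species was ever detected the site must be occupied; an
    all-zero history mixes 'occupied but always missed' with 'absent'."""
    cond = 1.0
    for y in history:
        cond *= p if y else (1.0 - p)
    if any(history):
        return psi * cond
    return psi * cond + (1.0 - psi)
```

For psi = p = 0.5 and two surveys, a [1, 0] history has likelihood 0.5 × 0.5 × 0.5 = 0.125, while [0, 0] has 0.5 × 0.25 + 0.5 = 0.625.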
Use of a color CMOS camera as a colorimeter
NASA Astrophysics Data System (ADS)
Dallas, William J.; Roehrig, Hans; Redford, Gary R.
2006-08-01
In radiology diagnosis, film is being quickly replaced by computer monitors as the display medium for all imaging modalities. Increasingly, these monitors are color instead of monochrome. It is important to have instruments available to characterize the display devices in order to guarantee reproducible presentation of image material. We are developing an imaging colorimeter based on a commercially available color digital camera. The camera uses a sensor that has co-located pixels in all three primary colors.
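Turning camera RGB into colorimetric quantities reduces to a linear transform into CIE XYZ; a sketch using the standard linear-sRGB-to-XYZ matrix (an actual colorimeter would instead use a matrix calibrated to this specific sensor and its filters):

```python
# Standard linear-sRGB to CIE XYZ (D65) matrix; rows give X, Y, Z.
RGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def rgb_to_xyz(rgb_linear):
    """Map a linear RGB triple to CIE XYZ tristimulus values."""
    return [sum(m * c for m, c in zip(row, rgb_linear)) for row in RGB_TO_XYZ]

# Reference white (1, 1, 1) maps to luminance Y ~ 1:
X, Y, Z = rgb_to_xyz([1.0, 1.0, 1.0])
```

Applying such a transform per pixel is what makes a camera with co-located color pixels usable as an imaging colorimeter for display characterization.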
Stereoscopic Configurations To Minimize Distortions
NASA Technical Reports Server (NTRS)
Diner, Daniel B.
1991-01-01
Proposed television system provides two stereoscopic displays. Two-camera, two-monitor system used in various camera configurations and with stereoscopic images on monitors magnified to various degrees. Designed to satisfy observer's need to perceive spatial relationships accurately throughout workspace or to perceive them at high resolution in small region of workspace. Potential applications include industrial, medical, and entertainment imaging and monitoring and control of telemanipulators, telerobots, and remotely piloted vehicles.
A Spatio-Spectral Camera for High-Resolution Hyperspectral Imaging
NASA Astrophysics Data System (ADS)
Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.
2017-08-01
Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.
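The reconstruction principle behind such a camera can be illustrated with a toy model: one filter line per sensor row, and a platform that advances exactly one row per frame, so each ground line is imaged through every band in successive frames. This idealized sketch (integer motion, no geometric distortion) is ours, not the COSI processing chain:

```python
import numpy as np

def assemble_cube(frames):
    """frames: array (n_frames, n_rows, width) where sensor row r
    carries band r. With one-row-per-frame motion, frame t row r images
    ground line t - r, so cube[g, b] = frames[g + b, b]."""
    n_frames, n_rows, width = frames.shape
    n_ground = n_frames - n_rows + 1  # ground lines seen in every band
    cube = np.zeros((n_ground, n_rows, width))
    for g in range(n_ground):
        for b in range(n_rows):
            cube[g, b] = frames[g + b, b]
    return cube
```

Real processing must additionally co-register the frames photogrammetrically, since RPAS motion is neither integer-valued nor purely along-track.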
Design Description of the X-33 Avionics Architecture
NASA Technical Reports Server (NTRS)
Reichenfeld, Curtis J.; Jones, Paul G.
1999-01-01
In this paper, we provide a design description of the X-33 avionics architecture. The X-33 is an autonomous Single Stage to Orbit (SSTO) launch vehicle currently being developed by Lockheed Martin for NASA as a technology demonstrator for the VentureStar Reusable Launch Vehicle (RLV). The X-33 avionics provides autonomous control of the vehicle throughout takeoff, ascent, descent, approach, landing, rollout, and vehicle safing. During flight the avionics provides communication to the range through uplinked commands and downlinked telemetry. During pre-launch and post-safing activities, the avionics provides interfaces to ground support consoles that perform vehicle flight preparations and maintenance. The X-33 avionics is a hybrid of centralized and distributed processing elements connected by three dual-redundant Mil-Std 1553 data buses. These data buses are controlled by a central processing suite located in the avionics bay and composed of triplex-redundant Vehicle Mission Computers (VMCs). The VMCs integrate mission management, guidance, navigation, flight control, subsystem control and redundancy management functions. The vehicle sensors, effectors and subsystems are interfaced directly to the centralized VMCs as remote terminals or through dual-redundant Data Interface Units (DIUs). The DIUs are located forward and aft of the avionics bay and provide signal conditioning, health monitoring, low-level subsystem control and data interface functions. Each VMC is connected to all three redundant 1553 data buses for monitoring and provides a complete identical data set to the processing algorithms. This enables bus faults to be detected and reconfigured through a voted bus control configuration. Data is also shared between VMCs through a cross-channel data link that is implemented in hardware and controlled by AlliedSignal's Fault Tolerant Executive (FTE). The FTE synchronizes processors within the VMC and synchronizes redundant VMCs to each other. The FTE provides an output-voting plane to detect, isolate and contain internal hardware or software faults and reconfigures the VMCs to accommodate these faults. Critical data in the 1553 messages are scheduled and synchronized to specific processing frames in order to minimize data latency. In order to achieve an open architecture, military and commercial off-the-shelf equipment is incorporated using common processors, standard VME backplanes and chassis, the VxWorks operating system, and MATRIXx for automatic code generation. The use of off-the-shelf tools and equipment helps reduce development time and enables software reuse. The open architecture allows for technology insertion, while the distributed modular elements allow for expansion to increased redundancy levels to meet the higher reliability goals of future RLVs.
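An output-voting plane of the kind the FTE provides can be sketched as a two-out-of-three majority voter; this toy version (tolerance comparison, single scalar channel) is illustrative only and is not the X-33's actual voting logic:

```python
def majority_vote(a, b, c, tolerance=0.0):
    """Return (voted_value, faulty_channel) for three redundant outputs.

    Channels agree when within `tolerance`. If all three agree the vote
    passes with no fault; if exactly one pair agrees, the dissenting
    third channel is flagged; if no pair agrees, the vote fails."""
    channels = (a, b, c)
    pairs = [(0, 1), (0, 2), (1, 2)]
    agree = [(i, j) for i, j in pairs
             if abs(channels[i] - channels[j]) <= tolerance]
    if len(agree) == len(pairs):
        return a, None
    if not agree:
        raise ValueError("no two channels agree: vote failed")
    i, j = agree[0]
    faulty = ({0, 1, 2} - {i, j}).pop()
    return channels[i], faulty
```

For example, majority_vote(7.0, 7.0, 9.0) votes 7.0 and flags channel 2; a real voter would run this per signal, per frame, and feed the fault flags into reconfiguration logic.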
Development of an all-in-one gamma camera/CCD system for safeguard verification
NASA Astrophysics Data System (ADS)
Kim, Hyun-Il; An, Su Jung; Chung, Yong Hyun; Kwak, Sung-Woo
2014-12-01
For the purpose of monitoring and verifying efforts at safeguarding radioactive materials in various fields, a new all-in-one gamma camera/charge-coupled device (CCD) system was developed. This combined system consists of a gamma camera, which gathers energy and position information on gamma-ray sources, and a CCD camera, which identifies the specific location in a monitored area. Therefore, 2-D image information and quantitative information regarding gamma-ray sources can be obtained using fused images. The gamma camera consists of a diverging collimator, a 22 × 22 array of CsI(Na) pixelated scintillation crystals with a pixel size of 2 × 2 × 6 mm³, and a Hamamatsu H8500 position-sensitive photomultiplier tube (PSPMT). The Basler scA640-70gc CCD camera, which delivers 70 frames per second at video graphics array (VGA) resolution, was employed. Performance testing was performed using a Co-57 point source 30 cm from the detector. The measured spatial resolution and sensitivity were 4.77 mm full width at half maximum (FWHM) and 7.78 cps/MBq, respectively. The energy resolution was 18% at 122 keV. These results demonstrate that the combined system has considerable potential for radiation monitoring.
Video monitoring system for car seat
NASA Technical Reports Server (NTRS)
Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)
2004-01-01
A video monitoring system for use with a child car seat has video camera(s) mounted in the car seat. The video images are wirelessly transmitted to a remote receiver/display encased in a portable housing that can be removably mounted in the vehicle in which the car seat is installed.
Improved head-controlled TV system produces high-quality remote image
NASA Technical Reports Server (NTRS)
Goertz, R.; Lindberg, J.; Mingesz, D.; Potts, C.
1967-01-01
The manipulator operator uses an improved-resolution TV camera/monitor positioning system to view the remote handling and processing of reactive, flammable, explosive, or contaminated materials. The pan and tilt motions of the camera and monitor are slaved to follow the corresponding motions of the operator's head.
Radiation-Triggered Surveillance for UF6 Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, Michael M.
2015-12-01
This paper recommends the use of radiation detectors, singly or in sets, to trigger surveillance cameras. Ideally, the cameras will monitor cylinders transiting the process area as well as the process area itself. The general process area will be surveyed to record how many cylinders have been attached and detached to the process between inspections. Rad-triggered cameras can dramatically reduce the quantity of recorded images, because the movement of personnel and equipment not involving UF6 cylinders will not generate a surveillance review file.
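The triggering logic can be as simple as comparing the detector count rate to a statistical threshold above background; a hedged sketch (Poisson approximation; the threshold parameters are ours, not the paper's):

```python
def rad_trigger(count_rate_cps, background_cps=10.0, n_sigma=5.0):
    """True when the count rate exceeds background by n_sigma standard
    deviations (sqrt(background) under Poisson statistics), i.e. when a
    UF6 cylinder or other source is likely present in view."""
    threshold = background_cps + n_sigma * background_cps ** 0.5
    return count_rate_cps > threshold
```

With the defaults the threshold is about 25.8 cps, so a quiescent detector at the 10 cps background never triggers, and the surveillance camera records only while a source transits the field of view.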
Code of Federal Regulations, 2010 CFR
2010-01-01
... drag position, and the other engines at maximum takeoff power; and (4) The airplane trimmed at a speed equal to the greater of 1.2 VS1 or 1.1 VMC, or as nearly as possible in trim for straight flight. (c... approach; and (4) The airplane trimmed at VREF. [Amdt. 23-14, 38 FR 31819, Nov. 19, 1973, as amended by...
Code of Federal Regulations, 2011 CFR
2011-01-01
... drag position, and the other engines at maximum takeoff power; and (4) The airplane trimmed at a speed equal to the greater of 1.2 VS1 or 1.1 VMC, or as nearly as possible in trim for straight flight. (c... approach; and (4) The airplane trimmed at VREF. [Amdt. 23-14, 38 FR 31819, Nov. 19, 1973, as amended by...
The VMC Survey. XI. Radial Stellar Population Gradients in the Galactic Globular Cluster 47 Tucanae
NASA Astrophysics Data System (ADS)
Li, Chengyuan; de Grijs, Richard; Deng, Licai; Rubele, Stefano; Wang, Chuchu; Bekki, Kenji; Cioni, Maria-Rosa L.; Clementini, Gisella; Emerson, Jim; For, Bi-Qing; Girardi, Leo; Groenewegen, Martin A. T.; Guandalini, Roald; Gullieuszik, Marco; Marconi, Marcella; Piatti, Andrés E.; Ripepi, Vincenzo; van Loon, Jacco Th.
2014-07-01
We present a deep near-infrared color-magnitude diagram of the Galactic globular cluster 47 Tucanae, obtained with the Visible and Infrared Survey Telescope for Astronomy (VISTA) as part of the VISTA near-infrared Y, J, Ks survey of the Magellanic System (VMC). The cluster stars comprising both the subgiant and red giant branches exhibit apparent, continuous variations in color-magnitude space as a function of radius. Subgiant branch stars at larger radii are systematically brighter than their counterparts closer to the cluster core; similarly, red-giant-branch stars in the cluster's periphery are bluer than their more centrally located cousins. The observations can very well be described by adopting an age spread of ~0.5 Gyr as well as radial gradients in both the cluster's helium abundance (Y) and metallicity (Z), which change gradually from (Y = 0.28, Z = 0.005) in the cluster core to (Y = 0.25, Z = 0.003) in its periphery. We conclude that the cluster's inner regions host a significant fraction of second-generation stars, which decreases with increasing radius; the stellar population in the 47 Tuc periphery is well approximated by a simple stellar population.
The VMC survey. XI. Radial stellar population gradients in the galactic globular cluster 47 Tucanae
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Chengyuan; De Grijs, Richard; Deng, Licai
2014-07-20
We present a deep near-infrared color-magnitude diagram of the Galactic globular cluster 47 Tucanae, obtained with the Visible and Infrared Survey Telescope for Astronomy (VISTA) as part of the VISTA near-infrared Y, J, K{sub s} survey of the Magellanic System (VMC). The cluster stars comprising both the subgiant and red giant branches exhibit apparent, continuous variations in color-magnitude space as a function of radius. Subgiant branch stars at larger radii are systematically brighter than their counterparts closer to the cluster core; similarly, red-giant-branch stars in the cluster's periphery are bluer than their more centrally located cousins. The observations can very well be described by adopting an age spread of ∼0.5 Gyr as well as radial gradients in both the cluster's helium abundance (Y) and metallicity (Z), which change gradually from (Y = 0.28, Z = 0.005) in the cluster core to (Y = 0.25, Z = 0.003) in its periphery. We conclude that the cluster's inner regions host a significant fraction of second-generation stars, which decreases with increasing radius; the stellar population in the 47 Tuc periphery is well approximated by a simple stellar population.
A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i
Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.
2015-01-01
We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity.
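The GPS module's role above is to give images trustworthy timestamps even without a network connection. As an illustration of that idea only, here is a minimal Python sketch of GPS-derived UTC times baked into sortable image filenames; the station name and naming scheme are invented, not the observatory's actual scripts.

```python
# Hypothetical sketch of time-stamped image naming for a time-lapse
# camera controller; the station label and format are assumptions.
from datetime import datetime, timezone

def image_filename(station: str, t: datetime) -> str:
    """Build a sortable, unambiguous filename from a GPS-derived UTC time."""
    return f"{station}_{t.strftime('%Y%m%d_%H%M%S')}.jpg"

fname = image_filename("KIcam", datetime(2015, 1, 1, 12, 30, 5, tzinfo=timezone.utc))
print(fname)  # KIcam_20150101_123005.jpg
```

Because filenames sort lexicographically in time order, periodic retrieval during field visits needs no separate index file.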
Using hacked point and shoot cameras for time-lapse snow cover monitoring in an Alpine valley
NASA Astrophysics Data System (ADS)
Weijs, S. V.; Diebold, M.; Mutzner, R.; Golay, J. R.; Parlange, M. B.
2012-04-01
In Alpine environments, monitoring snow cover is essential to gain insight into hydrological processes and the water balance. Although measurement techniques based on LIDAR are available, their cost is often a restricting factor. In this research, an experiment was conducted using a distributed array of cheap consumer cameras to gain insight into the spatio-temporal evolution of the snowpack. Two experiments are planned. The first involves the measurement of aeolian snow transport around a hill, to validate a snow saltation model. The second monitors snowmelt during the melting season, which can then be combined with data from a wireless network of meteorological stations and discharge measurements at the outlet of the catchment. The poster describes the hardware and software setup, based on an external timer circuit and CHDK, the Canon Hack Development Kit. The latter is a flexible and evolving software package, released under a GPL license. It was developed by hackers who reverse-engineered the firmware of the camera and added extra functionality such as raw image output, fuller control of the camera, external triggering, motion detection, and scripting. These features make it a great tool for the geosciences. Possible other applications include aerial stereo photography and monitoring vegetation response. We are interested in sharing experiences and brainstorming about new applications. Bring your camera!
NASA Astrophysics Data System (ADS)
Bertaux, Jean-Loup; Hauchecorne, Alain; Khatuntsev, Igor; Markiewicz, Wojciech; Marcq, Emmanuel; Lebonnois, Sebastien; Patsaeva, Marina; Turin, Alexander; Fedorova, Anna
2016-10-01
Based on the analysis of UV images (at 365 nm) of the Venus cloud top (altitude 67±2 km) collected with VMC (Venus Monitoring Camera) on board Venus Express (VEX), it is found that the zonal wind speed south of the equator (from 5°S to 15°S) shows a conspicuous variation (from -101 to -83 m/s) with geographic longitude of Venus, correlated with the underlying relief of Aphrodite Terra. We interpret this pattern as the result of stationary gravity waves produced at ground level by the uplift of air when the horizontal wind encounters a mountain slope. These waves can propagate up to the cloud-top level, break there and transfer their momentum to the zonal flow. Such upward propagation of gravity waves and its influence on the vertical profile of wind speed has been shown to play an important role in the middle atmosphere of the Earth, but is not reproduced in the current LMD GCM of the Venus atmosphere. In the equatorial regions, the UV albedo of clouds at 365 nm and the H2O mixing ratio at cloud top also vary with longitude, with an anti-correlation: the more H2O, the darker the clouds. We argue that these variations may be simply explained by the divergence of the horizontal wind field. In the longitude region (from 60° to -10°) where the horizontal wind speed is increasing in magnitude (stretch), it triggers air upwelling which brings both the UV absorber and H2O to cloud-top level and decreases the albedo, and vice versa when the wind is decreasing in magnitude (compression). This picture is fully consistent with the classical view of the Venus meridional circulation, with upwelling at the equator revealed by horizontal air motions away from the equator: the longitude effect is only an additional, but important, modulation of this effect. We argue that H2O enhancement is the sign of upwelling because the H2O mixing ratio decreases with altitude, supporting the view that the UV absorber is also brought to the cloud top by upwelling.
NASA Astrophysics Data System (ADS)
McCabe, Ryan M.; Gunnarson, Jacob; Sayanagi, Kunio M.; Blalock, John J.; Peralta, Javier; Gray, Candace L.; McGouldrick, Kevin; Imamura, Takeshi; Watanabe, Shigeto
2017-10-01
We investigate the horizontal dynamics of Venus's atmosphere at cloud-top level. In particular, we focus on the atmospheric superrotation, in which the equatorial atmosphere rotates with a period of approximately 4-5 days (~60 times faster than the solid planet). The superrotation's forcing and maintenance mechanisms remain to be explained. The temporal evolution of the zonal (latitudinal direction) wind could reveal the transport of energy and momentum in and out of the equatorial region, and eventually shed light on the mechanisms that maintain the Venusian superrotation. As a first step, we characterize the zonal mean wind field of Venus between 2006 and 2013 in ultraviolet images captured by the Venus Monitoring Camera (VMC) on board the ESA Venus Express (VEX) spacecraft, which observed Venus's southern hemisphere. Our measurements show that, between 2006 and 2013, the westward wind speed at mid- to equatorial latitudes exhibits an increase of ~20 m/s; these results are consistent with previous studies by Kouyama et al. (2013) and Khatuntsev et al. (2013). The meridional component of the wind could additionally help us characterize large-scale cloud features and their evolution that may be connected to the superrotation. We also conduct ground-based observations contemporaneously with JAXA's Akatsuki orbiter at the 3.5 m Astrophysical Research Consortium (ARC) telescope at the Apache Point Observatory (APO) in Sunspot, NM to extend our temporal coverage to the present. Images we have captured at APO to date demonstrate that, even under unfavorable illumination, it is possible to see large features that could be used for large-scale feature tracking to be compared to images taken by Akatsuki. Our work has been supported by the following grants: NASA PATM NNX14AK07G, NASA MUREP NNX15AQ03A, NSF AAG 1212216, and JAXA's ITYF Fellowship.
Kouyama, T., et al. (2013), J. Geophys. Res. Planets, 118, 37-46, doi:10.1029/2011JE004013.
Khatuntsev, I., et al. (2013), Icarus, 226, 140-158, doi:10.1016/j.icarus.2013.05.018
NASA Astrophysics Data System (ADS)
Theule, Joshua; Crema, Stefano; Comiti, Francesco; Cavalli, Marco; Marchi, Lorenzo
2015-04-01
Large-scale particle image velocimetry (LSPIV) is a technique mostly used in rivers to measure two-dimensional velocities from high-resolution images at high frame rates. This technique still needs to be thoroughly explored in the field of debris-flow studies. The Gadria debris-flow monitoring catchment in Val Venosta (Italian Alps) has been equipped with four MOBOTIX M12 video cameras. Two cameras are located in a sediment trap close to the alluvial fan apex, one looking upstream and the other looking down, more perpendicular to the flow. The third camera is in the next reach upstream of the sediment trap, in closer proximity to the flow. These three cameras are connected to a field shelter equipped with a power supply and a server collecting all the monitoring data. The fourth camera is located in an active gully and is activated by a rain gauge after one minute of rainfall. Before LSPIV can be used, the highly distorted images need to be corrected and accurate reference points need to be established. We decided to use IMGRAFT (an open-source image georectification toolbox), which corrects distorted images using reference points and the camera location, and then rectifies the batch of images onto a DEM grid (or the DEM grid onto the image coordinates). With the orthorectified images, we used the freeware Fudaa-LSPIV (developed by EDF, IRSTEA, and the DeltaCAD Company) to generate the LSPIV calculations of the flow events. Calculated velocities can easily be checked manually thanks to the orthorectified images. During the monitoring program (since 2011) we recorded three debris-flow events at the sediment trap area, each with very different surge dynamics. The camera in the gully, in operation since 2014, recorded granular flows and rockfalls, for which particle tracking may be more appropriate for velocity measurements. The four cameras allow us to explore the limitations of camera distance, angle, frame rate, and image quality.
Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test.
Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno
2008-11-17
The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces.
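The two corrections described above, flat-field division to remove vignetting followed by a normalized vegetation index from the red and NIR bands, can be sketched as follows. This is an illustrative Python sketch with synthetic pixel values, not the authors' processing chain.

```python
# Illustrative sketch (not the study's code): flat-field vignetting
# correction, then a pixel-wise NDVI; images are lists of rows.
def correct_vignetting(img, flat):
    """Divide each pixel by the flat-field response, normalized to its peak."""
    peak = max(max(row) for row in flat)
    return [[p * peak / f for p, f in zip(ir, fr)]
            for ir, fr in zip(img, flat)]

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, computed pixel-wise."""
    return [[(n - r) / (n + r) if (n + r) else 0.0
             for n, r in zip(nr, rr)]
            for nr, rr in zip(nir, red)]

flat = [[50, 100], [100, 50]]          # synthetic flat-field (dim corners)
img  = [[40, 90],  [90, 40]]
print(correct_vignetting(img, flat))   # [[80.0, 90.0], [90.0, 80.0]]
print(ndvi([[150, 150]], [[50, 50]]))  # [[0.5, 0.5]]
```

Because NDVI is a ratio of band differences to band sums, it is largely insensitive to uniform illumination changes, which is why the vignetting correction matters more than an absolute radiometric calibration here.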
NASA Astrophysics Data System (ADS)
Ghionis, George; Trygonis, Vassilis; Karydis, Antonis; Vousdoukas, Michalis; Alexandrakis, George; Drakopoulos, Panos; Amdreadis, Olympos; Psarros, Fotis; Velegrakis, Antonis; Poulos, Serafim
2016-04-01
Effective beach management requires environmental assessments that are based on sound science, are cost-effective and are available to beach users and managers in an accessible, timely and transparent manner. The most common problems are: 1) The available field data are scarce and of sub-optimal spatio-temporal resolution and coverage, 2) our understanding of local beach processes needs to be improved in order to accurately model/forecast beach dynamics under a changing climate, and 3) the information provided by coastal scientists/engineers in the form of data, models and scientific interpretation is often too complicated to be of direct use by coastal managers/decision makers. A multispectral video system has been developed, consisting of one or more video cameras operating in the visible part of the spectrum, a passive near-infrared (NIR) camera, an active NIR camera system, a thermal infrared camera and a spherical video camera, coupled with innovative image processing algorithms and a telemetric system for the monitoring of coastal environmental parameters. The complete system has the capability to record, process and communicate (in quasi-real time) high frequency information on shoreline position, wave breaking zones, wave run-up, erosion hot spots along the shoreline, nearshore wave height, turbidity, underwater visibility, wind speed and direction, air and sea temperature, solar radiation, UV radiation, relative humidity, barometric pressure and rainfall. An innovative, remotely-controlled interactive visual monitoring system, based on the spherical video camera (with 360°field of view), combines the video streams from all cameras and can be used by beach managers to monitor (in real time) beach user numbers, flow activities and safety at beaches of high touristic value. 
The high resolution near infrared cameras permit 24-hour monitoring of beach processes, while the thermal camera provides information on beach sediment temperature and moisture, can detect upwelling in the nearshore zone, and enhances the safety of beach users. All data can be presented in real- or quasi-real time and are stored for future analysis and training/validation of coastal processes models. Acknowledgements: This work was supported by the project BEACHTOUR (11SYN-8-1466) of the Operational Program "Cooperation 2011, Competitiveness and Entrepreneurship", co-funded by the European Regional Development Fund and the Greek Ministry of Education and Religious Affairs.
Instrumentation for Infrared Airglow Clutter.
1987-03-10
gain, and filter position to the Camera Head, and monitors these parameters as well as preamp video. GAZER is equipped with a Lenzar wide angle, low... Specifications/Parameters VIDEO SENSOR: Camera ... LENZAR Intensicon-8 LLLTV using 2nd gen micro-channel intensifier and proprietary camera tube
Using Arago's spot to monitor optical axis shift in a Petzval refractor.
Bruns, Donald G
2017-03-10
Measuring the change in the optical alignment of a camera attached to a telescope is necessary to perform astrometric measurements. Camera movement when the telescope is refocused changes the plate constants, invalidating the calibration. Monitoring the shift in the optical axis requires a stable internal reference source. This is easily implemented in a Petzval refractor by adding an illuminated pinhole and a small obscuration that creates a spot of Arago on the camera. Measurements of the optical axis shift for a commercial telescope are given as an example.
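The measurement reduces to centroiding the Arago spot on the detector between frames and reporting its displacement. A minimal Python sketch with synthetic 1-D intensity profiles, not the paper's actual reduction:

```python
# Sketch of optical-axis-shift tracking via spot centroiding;
# profiles are synthetic 1-D stand-ins for detector rows.
def centroid(profile):
    """Intensity-weighted centroid of a 1-D spot profile, in pixels."""
    total = sum(profile)
    return sum(i * v for i, v in enumerate(profile)) / total

before = [0, 1, 4, 1, 0]   # spot centred on pixel 2
after  = [0, 0, 1, 4, 1]   # spot shifted one pixel to the right
print(centroid(after) - centroid(before))  # 1.0 pixel of optical-axis shift
```

In practice the centroid shift in pixels would be converted to an angular shift via the plate scale before updating the plate constants.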
Optimising Camera Traps for Monitoring Small Mammals
Glen, Alistair S.; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce
2013-01-01
Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera’s field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera’s field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats ( Mustela erminea ), feral cats (Felis catus) and hedgehogs ( Erinaceus europaeus ). Trigger speeds of 0.2–2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera’s field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps. PMID:23840790
Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing
2017-11-15
Spatially-explicit data are essential for remote sensing of ecological phenomena. Recent innovations in mobile device platforms have led to an upsurge in on-site rapid detection. For instance, CMOS chips in smart phones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a single-lens reflex camera. Utilizing this lightweight module, as well as commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. From these experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.
Micro-Imagers for Spaceborne Cell-Growth Experiments
NASA Technical Reports Server (NTRS)
Behar, Alberto; Matthews, Janet; SaintAnge, Beverly; Tanabe, Helen
2006-01-01
A document discusses selected aspects of a continuing effort to develop five micro-imagers for both still and video monitoring of cell cultures to be grown aboard the International Space Station. The approach taken in this effort is to modify and augment pre-existing electronic micro-cameras. Each such camera includes an image-detector integrated-circuit chip, signal-conditioning and image-compression circuitry, and connections for receiving power from, and exchanging data with, external electronic equipment. Four white and four multicolor light-emitting diodes are to be added to each camera for illuminating the specimens to be monitored. The lens used in the original version of each camera is to be replaced with a shorter-focal-length, more-compact singlet lens to make it possible to fit the camera into the limited space allocated to it. Initially, the lenses in the five cameras are to have different focal lengths: the focal lengths are to be 1, 1.5, 2, 2.5, and 3 cm. Once one of the focal lengths is determined to be the most nearly optimum, the remaining four cameras are to be fitted with lenses of that focal length.
Continuous monitoring of Hawaiian volcanoes using thermal cameras
NASA Astrophysics Data System (ADS)
Patrick, M. R.; Orr, T. R.; Antolik, L.; Lee, R.; Kamibayashi, K.
2012-12-01
Thermal cameras are becoming more common at volcanoes around the world, and have become a powerful tool for observing volcanic activity. Fixed, continuously recording thermal cameras have been installed by the Hawaiian Volcano Observatory in the last two years at four locations on Kilauea Volcano to better monitor its two ongoing eruptions. The summit eruption, which began in March 2008, hosts an active lava lake deep within a fume-filled vent crater. A thermal camera perched on the rim of Halema`uma`u Crater, acquiring an image every five seconds, has now captured about two years of sustained lava lake activity, including frequent lava level fluctuations, small explosions, and several draining events. This thermal camera has been able to "see" through the thick fume in the crater, providing truly 24/7 monitoring that would not be possible with normal webcams. The east rift zone eruption, which began in 1983, has chiefly consisted of effusion through lava tubes onto the surface, but over the past two years has been interrupted by an intrusion, lava fountaining, crater collapse, and perched lava lake growth and draining. The three thermal cameras on the east rift zone, all on Pu`u `O`o cone and acquiring an image every several minutes, have captured many of these changes and are providing an improved means of alerting observatory staff to new activity. Plans are underway to install a thermal camera at the summit of Mauna Loa to monitor and alert to any future changes there. Thermal cameras are more difficult to install, and image acquisition and processing are more complicated, than with visual webcams. Our system is based in part on the successful thermal camera installations by Italian volcanologists on Stromboli and Vulcano. Equipment includes custom enclosures with IR-transmissive windows, power, and telemetry. Data acquisition is based on ActiveX controls, and data management is done using automated Matlab scripts. Higher-level data processing, also done with Matlab, includes automated measurements of lava lake level and surface crust velocity, tracking temperatures and hot areas in real time, and alerts which notify users of notable temperature increases via text messaging. Lastly, real-time image and processed data display, which is vital for effective use of the images at the observatory, is done through a custom Web-based environment. Near-real-time webcam images are displayed for the public at hvo.wr.usgs.gov/cams. Thermal cameras are costly, but have proven to be an extremely effective monitoring and research tool at the Hawaiian Volcano Observatory.
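The text-message alerting step described above amounts to flagging frames whose maximum temperature jumps above a recent baseline. A minimal sketch of that idea in Python (the observatory's actual processing is in Matlab, and its thresholds are not published here, so the window and numbers below are synthetic):

```python
# Sketch of a temperature-increase alert: compare each frame's maximum
# temperature against the trailing mean of the previous frames.
def temperature_alerts(max_temps, window=5, threshold=50.0):
    """Return indices of frames exceeding the trailing mean by `threshold` degrees."""
    alerts = []
    for i in range(window, len(max_temps)):
        baseline = sum(max_temps[i - window:i]) / window
        if max_temps[i] - baseline > threshold:
            alerts.append(i)
    return alerts

series = [400.0] * 5 + [420.0, 480.0, 600.0]  # synthetic lava-lake maxima, deg C
print(temperature_alerts(series))  # [6, 7]
```

A trailing-mean baseline makes the alert robust to slow diurnal drift while still firing on abrupt events such as lava-level rises or explosions.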
Traffic monitoring with distributed smart cameras
NASA Astrophysics Data System (ADS)
Sidla, Oliver; Rosner, Marcin; Ulm, Michael; Schwingshackl, Gert
2012-01-01
The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. Today the automated analysis of traffic situations is still in its infancy: the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully captured and interpreted by a vision system. In this work we present steps towards a visual monitoring system designed to detect potentially dangerous traffic situations around a pedestrian crossing at a street intersection. The camera system is specifically designed to detect incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system has been field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in a weatherproof housing. Two cameras run vehicle detection and tracking software; one camera runs a pedestrian detection and tracking module based on the HOG detection principle. All 3 cameras use sparse optical flow computation in a low-resolution video stream in order to estimate the motion path and speed of objects. Geometric calibration of the cameras allows us to estimate the real-world coordinates of detected objects and to link the cameras together into one common reference system. This work describes the foundation for the different object detection modalities (pedestrians, vehicles), and explains the system setup, its design, and the evaluation results we have achieved so far.
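Once geometric calibration provides a pixel-to-ground scale, converting a tracked pixel displacement into a real-world speed is a one-line computation. A hedged Python sketch (the deployed system's calibration is more elaborate; the flat-ground constant scale assumed here is an illustration only):

```python
# Sketch of pixel-displacement-to-speed conversion under an assumed
# constant ground sampling distance (metres per pixel).
def ground_speed(dx_px, dy_px, gsd_m, dt_s):
    """Speed in m/s from pixel displacement, ground sampling distance, frame interval."""
    dist_m = (dx_px ** 2 + dy_px ** 2) ** 0.5 * gsd_m
    return dist_m / dt_s

# 5-pixel displacement at 0.05 m/px over 0.5 s between frames:
print(ground_speed(3.0, 4.0, 0.05, 0.5))  # 0.5 (m/s)
```

With a full calibration, the per-pixel scale varies across the image, so each displacement would be projected through the camera model rather than multiplied by a single constant.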
2003-09-04
KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, Richard Parker, with NASA, watches a monitor showing images from a camera inserted beneath tiles of the orbiter Endeavour to inspect for corrosion.
Maritime microwave radar and electro-optical data fusion for homeland security
NASA Astrophysics Data System (ADS)
Seastrand, Mark J.
2004-09-01
US Customs is responsible for monitoring all incoming air and maritime traffic, including that of the island of Puerto Rico as a US territory. Puerto Rico offers potentially obscure points of entry to drug smugglers, an environment that favors an illegal drug trade based relatively near the continental US. The US Customs Caribbean Air and Marine Operations Center (CAMOC), located in Puntas Salinas, has the charter to monitor maritime and Air Traffic Control (ATC) radars. The CAMOC monitors ATC radars and advises the Air and Marine Branch of US Customs of suspicious air activity. In turn, the US Coast Guard and/or US Customs will launch air and sea assets as necessary. The addition of a coastal radar and camera system provides US Customs a maritime monitoring capability for the northwestern end of Puerto Rico (Figure 1). Command and control of the radar and camera is executed at the CAMOC, located 75 miles away. The maritime microwave surveillance radar performs search, primary target acquisition and target tracking, while the midwave infrared (MWIR) camera performs target identification. This wide-area surveillance, using a combination of radar and MWIR camera, offers the CAMOC a cost- and manpower-effective approach to monitoring, tracking and identifying maritime targets.
Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test
Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno
2008-01-01
The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces. PMID:27873930
Don't get burned: thermal monitoring of vessel sealing using a miniature infrared camera
NASA Astrophysics Data System (ADS)
Lin, Shan; Fichera, Loris; Fulton, Mitchell J.; Webster, Robert J.
2017-03-01
Miniature infrared cameras have recently come to market in a form factor that facilitates packaging in endoscopic or other minimally invasive surgical instruments. If absolute temperature measurements can be made with these cameras, they may be useful for non-contact monitoring of electrocautery-based vessel sealing, or other thermal surgical processes like thermal ablation of tumors. As a first step in evaluating the feasibility of optical medical thermometry with these new cameras, in this paper we explore how well thermal measurements can be made with them. These cameras measure the raw flux of incoming IR radiation, and we perform a calibration procedure to map their readings to absolute temperature values in the range between 40 and 150 °C. Furthermore, we propose and validate a method to estimate the spatial extent of heat spread created by a cautery tool based on the thermal images.
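The calibration procedure (mapping raw IR flux readings to absolute temperatures over 40 to 150 °C) can be illustrated with a simple least-squares fit. The paper's actual calibration model may well differ, and the counts and reference temperatures below are synthetic:

```python
# Illustrative calibration sketch: fit a linear map from raw sensor
# counts to reference temperatures, then predict an unseen reading.
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

counts = [1000, 1500, 2000, 2500]    # raw IR flux readings (synthetic)
temps  = [40.0, 76.0, 112.0, 148.0]  # reference temperatures, deg C
a, b = fit_linear(counts, temps)
print(round(a * 1750 + b, 1))  # 94.0 deg C predicted at 1750 counts
```

A real radiometric calibration would also account for emissivity and ambient reflections, and might use a nonlinear (e.g. Planck-law-based) model instead of a straight line.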
Robot Tracer with Visual Camera
NASA Astrophysics Data System (ADS)
Jabbar Lubis, Abdul; Dwi Lestari, Yuyun; Dafitri, Haida; Azanuddin
2017-12-01
A robot is a versatile tool that can replace human labor. The robot is a device that can be reprogrammed according to user needs. A wireless network for remote monitoring can be used to build a robot whose movement can be monitored against a blueprint and whose chosen path can be tracked. This data is sent over a wireless network. For vision, the robot uses a high-resolution camera, making it easier for the operator to control the robot and see the surrounding circumstances.
ERIC Educational Resources Information Center
Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Lang, Russell; Didden, Robert
2011-01-01
A camera-based microswitch technology was recently used to successfully monitor small eyelid and mouth responses of two adults with profound multiple disabilities (Lancioni et al., Res Dev Disab 31:1509-1514, 2010a). This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on…
DOT National Transportation Integrated Search
2004-10-01
The parking assistance system evaluated consisted of four outward facing cameras whose images could be presented on a monitor on the center console. The images presented varied in the location of the virtual eye point of the camera (the height above ...
NASA Astrophysics Data System (ADS)
Brewer, I. D.; Werner, C. A.; Nadeau, P. A.
2010-12-01
UV camera systems are gaining popularity worldwide for quantifying SO2 column abundances and emission rates from volcanoes, which serve as primary measures of volcanic hazard and aid in eruption forecasting. To date many of the investigations have focused on fairly active and routinely monitored volcanoes under optimal conditions. Some recent studies have begun to recommend protocols and procedures for data collection, but additional questions still need to be addressed. In this study we attempt to answer these questions, and also present results from volcanoes that are rarely monitored. Conditions at these volcanoes are typically sub-optimal for UV camera measurements. Discussion of such data is essential in the assessment of the wider applicability of UV camera measurements for SO2 monitoring purposes. Data discussed herein consists of plume images from volcanoes with relatively low emission rates, with varying weather conditions and from various distances (2-12 km). These include Karangatang Volcano (Indonesia), Mount St. Helens (Washington, USA), and Augustine and Redoubt Volcanoes (Alaska, USA). High emission rate data were also collected at Kilauea Volcano (Hawaii, USA), and blue sky test images with no plume were collected at Mammoth Mountain (California, USA). All data were collected between 2008 and 2010 using both single-filter (307 nm) and dual-filter (307 nm/326 nm) systems and were accompanied by FLYSPEC measurements. With the dual-filter systems, both a filter wheel setup and a synchronous-imaging dual-camera setup were employed. 
Data collection and processing questions included (1) what is the detection limit of the camera, (2) how large is the variability in raw camera output, (3) how do camera optics affect the measurements and how can this be corrected, (4) how much variability is observed in calibration under various conditions, (5) what is the optimal workflow for image collection and processing, and (6) what is the range of camera operating conditions? Besides emission rates from these infrequently monitored volcanoes, the results of this study include a recommended workflow and procedure for image collection and calibration, and a MATLAB-based algorithm for batch processing, thereby enabling accurate emission rates at 1 Hz when a synchronous-imaging dual-camera setup is used.
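The processing chain behind such emission-rate retrievals can be sketched in outline. This is an illustrative Python sketch, not the study's MATLAB algorithm: the function names are invented, and the calibration-cell step that converts absorbance to an SO2 column density is omitted.

```python
import math

def apparent_absorbance(i_plume, i_clear):
    # Beer-Lambert apparent absorbance from plume and clear-sky intensities
    return -math.log(i_plume / i_clear)

def so2_column_signal(a_307, a_326):
    # dual-filter correction: SO2 absorbs at 307 nm but not at 326 nm,
    # so subtracting the 326 nm signal removes broadband aerosol scattering
    return a_307 - a_326

def emission_rate(column_signals, pixel_width_m, plume_speed_ms):
    # integrate the corrected signal along a transect across the plume,
    # then multiply by plume speed; a calibration factor (from gas cells
    # or a co-located FLYSPEC) would convert this to a mass flux
    return sum(c * pixel_width_m for c in column_signals) * plume_speed_ms
```

With a synchronous dual-camera setup, both filter images exist for every frame, which is what makes 1 Hz emission rates feasible in batch processing.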
Latency in Visionic Systems: Test Methods and Requirements
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.
2005-01-01
A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies, including those of the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated depending upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device, including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.
Uniform electron gases. III. Low-density gases on three-dimensional spheres.
Agboola, Davids; Knol, Anneke L; Gill, Peter M W; Loos, Pierre-François
2015-08-28
By combining variational Monte Carlo (VMC) and complete-basis-set limit Hartree-Fock (HF) calculations, we have obtained near-exact correlation energies for low-density same-spin electrons on a three-dimensional sphere (3-sphere), i.e., the surface of a four-dimensional ball. In the VMC calculations, we compare the efficacies of two types of one-electron basis functions for these strongly correlated systems and analyze the energy convergence with respect to the quality of the Jastrow factor. The HF calculations employ spherical Gaussian functions (SGFs) which are the curved-space analogs of Cartesian Gaussian functions. At low densities, the electrons become relatively localized into Wigner crystals, and the natural SGF centers are found by solving the Thomson problem (i.e., the minimum-energy arrangement of n point charges) on the 3-sphere for various values of n. We have found 11 special values of n whose Thomson sites are equivalent. Three of these are the vertices of four-dimensional Platonic solids - the hyper-tetrahedron (n = 5), the hyper-octahedron (n = 8), and the 24-cell (n = 24) - and a fourth is a highly symmetric structure (n = 13) which has not previously been reported. By calculating the harmonic frequencies of the electrons around their equilibrium positions, we also find the first-order vibrational corrections to the Thomson energy.
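The Thomson construction used to place the SGF centers can be illustrated with a toy projected-gradient minimisation of the Coulomb energy of n point charges on the unit 3-sphere. This is a sketch for illustration, not the authors' solver; the step size and iteration count are arbitrary choices.

```python
import math, random

def thomson_3sphere(n, steps=2000, lr=0.01, seed=0):
    """Minimise sum(1/|r_i - r_j|) for n points constrained to the unit
    3-sphere (the surface of a four-dimensional ball)."""
    rng = random.Random(seed)
    # random start: normalised 4-D Gaussian vectors are uniform on the sphere
    pts = []
    for _ in range(n):
        v = [rng.gauss(0, 1) for _ in range(4)]
        s = math.sqrt(sum(x * x for x in v))
        pts.append([x / s for x in v])

    def energy(p):
        e = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                d = math.sqrt(sum((p[i][k] - p[j][k]) ** 2 for k in range(4)))
                e += 1.0 / d
        return e

    for _ in range(steps):
        grads = [[0.0] * 4 for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                diff = [pts[i][k] - pts[j][k] for k in range(4)]
                d = math.sqrt(sum(x * x for x in diff))
                for k in range(4):
                    # gradient of 1/d with respect to point i
                    grads[i][k] -= diff[k] / d ** 3
        for i in range(n):
            v = [pts[i][k] - lr * grads[i][k] for k in range(4)]
            s = math.sqrt(sum(x * x for x in v))
            pts[i] = [x / s for x in v]  # project back onto the sphere
    return pts, energy(pts)
```

For n = 5 this should relax toward the hyper-tetrahedron (regular 4-simplex), whose Thomson energy is 10/sqrt(5/2) ≈ 6.3246.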
Beam measurements using visible synchrotron light at NSLS2 storage ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Weixing, E-mail: chengwx@bnl.gov; Bacha, Bel; Singh, Om
2016-07-27
The Visible Synchrotron Light Monitor (SLM) diagnostic beamline has been designed and constructed at the NSLS2 storage ring to characterize the electron beam profile under various machine conditions. Thanks to excellent alignment, the SLM beamline was able to see first visible light while the beam was circulating the ring on its very first turn. The beamline has been commissioned over the past year. Besides a normal CCD camera to monitor the beam profile, a streak camera and a gated camera are used to measure the longitudinal and transverse profiles to understand the beam dynamics. Measurement results from these cameras are presented in this paper. A time-correlated single photon counting (TCSPC) system has also been set up to measure the single-bunch purity.
NASA Astrophysics Data System (ADS)
Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team
2018-01-01
A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator considering multiple application needs and limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and capabilities were explored in the real environment. Data prove that the camera can be used for taking long exposure (10-100 ms) overview images of the plasma while sub-ms monitoring and even multi-camera correlated edge plasma turbulence measurements of smaller areas can be done in parallel. These latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.
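The ROI-based triggering described above can be pictured with a small sketch. This is only an illustration of the min/max/mean-comparison logic, using a plain 2-D list as a stand-in for sensor data; the real EDICAM evaluates ROIs in hardware during non-destructive reads.

```python
def roi_stats(frame, roi):
    # frame: 2-D list of pixel values; roi: (row0, row1, col0, col1),
    # half-open ranges as in Python slicing
    r0, r1, c0, c1 = roi
    vals = [frame[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return min(vals), max(vals), sum(vals) / len(vals)

def detect_event(frame, roi, mean_level):
    # flag the ROI when its mean brightness exceeds a preset level,
    # mimicking the camera's on-board mean-comparison trigger that can
    # dynamically change the readout process
    _, _, mean = roi_stats(frame, roi)
    return mean > mean_level
```

In the camera, a positive trigger can redirect readout bandwidth to the active ROI, which is what allows sub-ms monitoring of small areas during a long overview exposure.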
NASA Astrophysics Data System (ADS)
Peltoniemi, Mikko; Aurela, Mika; Böttcher, Kristin; Kolari, Pasi; Loehr, John; Karhu, Jouni; Linkosalmi, Maiju; Melih Tanis, Cemal; Tuovinen, Juha-Pekka; Nadir Arslan, Ali
2018-01-01
In recent years, monitoring of the status of ecosystems using low-cost web (IP) or time lapse cameras has received wide interest. With broad spatial coverage and high temporal resolution, networked cameras can provide information about snow cover and vegetation status, serve as ground truths to Earth observations and be useful for gap-filling of cloudy areas in Earth observation time series. Networked cameras can also play an important role in supplementing laborious phenological field surveys and citizen science projects, which also suffer from observer-dependent observation bias. We established a network of digital surveillance cameras for automated monitoring of phenological activity of vegetation and snow cover in the boreal ecosystems of Finland. Cameras were mounted at 14 sites, each site having 1-3 cameras. Here, we document the network, basic camera information and access to images in the permanent data repository (http://www.zenodo.org/communities/phenology_camera/). Individual DOI-referenced image time series consist of half-hourly images collected between 2014 and 2016 (https://doi.org/10.5281/zenodo.1066862). Additionally, we present an example of a colour index time series derived from images from two contrasting sites.
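Colour indices of the kind mentioned at the end are commonly computed as the green chromatic coordinate (GCC) averaged over a region of interest. A minimal sketch, assuming RGB tuples and no fully black pixels; the exact index used per site in the network is not specified here.

```python
def gcc(pixels):
    # green chromatic coordinate G / (R + G + B), averaged over a
    # region of interest; tracks vegetation greenness through the season
    total = 0.0
    for r, g, b in pixels:
        total += g / (r + g + b)  # assumes R + G + B > 0
    return total / len(pixels)
```

Computed per image and plotted against time, this yields the kind of phenological time series the network is designed to produce.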
NASA Astrophysics Data System (ADS)
Harrild, M.; Webley, P. W.; Dehn, J.
2015-12-01
The ability to detect and monitor precursory events, thermal signatures, and ongoing volcanic activity in near-realtime is an invaluable tool. Volcanic hazards often range from low level lava effusion to large explosive eruptions, easily capable of ejecting ash to aircraft cruise altitudes. Using ground based remote sensing to detect and monitor this activity is essential, but the required equipment is often expensive and difficult to maintain, which increases the risk to public safety and the likelihood of financial impact. Our investigation explores the use of 'off the shelf' cameras, ranging from computer webcams to low-light security cameras, to monitor volcanic incandescent activity in near-realtime. These cameras are ideal as they operate in the visible and near-infrared (NIR) portions of the electromagnetic spectrum, are relatively cheap to purchase, consume little power, are easily replaced, and can provide telemetered, near-realtime data. We focus on the early detection of volcanic activity, using automated scripts that capture streaming online webcam imagery and evaluate each image according to pixel brightness, in order to automatically detect and identify increases in potentially hazardous activity. The cameras used here range in price from $0 to $1,000 and the script is written in Python, an open source programming language, to reduce the overall cost to potential users and increase the accessibility of these tools, particularly in developing nations. In addition, by performing laboratory tests to determine the spectral response of these cameras, a direct comparison of collocated low-light and thermal infrared cameras has allowed approximate eruption temperatures to be correlated to pixel brightness. Data collected from several volcanoes ((1) Stromboli, Italy; (2) Shiveluch, Russia; (3) Fuego, Guatemala; (4) Popocatépetl, México), along with campaign data from Stromboli (June 2013) and laboratory tests, are presented here.
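The detection scripts are described as Python programs that flag increases in pixel brightness. A toy version of that flagging logic, comparing each frame's mean brightness against a rolling baseline, is sketched below; the window length and trigger factor are invented for illustration, and the image-capture step is omitted.

```python
from collections import deque

class BrightnessMonitor:
    """Flags frames whose mean brightness jumps above a rolling baseline
    built from recent frames, a stand-in for incandescence detection."""

    def __init__(self, window=10, factor=1.5):
        self.history = deque(maxlen=window)  # recent mean brightnesses
        self.factor = factor                 # trigger multiple of baseline

    def check(self, frame):
        # frame: 2-D list of grayscale pixel values from a webcam snapshot
        mean = sum(sum(row) for row in frame) / sum(len(row) for row in frame)
        baseline = sum(self.history) / len(self.history) if self.history else None
        alert = baseline is not None and mean > self.factor * baseline
        self.history.append(mean)
        return alert
```

In a deployed script this would run on each image pulled from a volcano webcam feed, with an alert triggering closer inspection.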
Development of camera technology for monitoring nests. Chapter 15
W. Andrew Cox; M. Shane Pruett; Thomas J. Benson; Scott J. Chiavacci; Frank R., III Thompson
2012-01-01
Photo and video technology has become increasingly useful in the study of avian nesting ecology. However, researchers interested in using camera systems are often faced with insufficient information on the types and relative advantages of available technologies. We reviewed the literature for studies of nests that used cameras and summarized them based on study...
Keever, Allison; McGowan, Conor P.; Ditchkoff, Stephen S.; Acker, S.A.; Grand, James B.; Newbolt, Chad H.
2017-01-01
Automated cameras have become increasingly common for monitoring wildlife populations and estimating abundance. Most analytical methods, however, fail to account for incomplete and variable detection probabilities, which biases abundance estimates. Methods which do account for detection have not been thoroughly tested, and those that have been tested were compared to other methods of abundance estimation. The goal of this study was to evaluate the accuracy and effectiveness of the N-mixture method, which explicitly incorporates detection probability, to monitor white-tailed deer (Odocoileus virginianus) by using camera surveys and a known, marked population to collect data and estimate abundance. Motion-triggered camera surveys were conducted at Auburn University’s deer research facility in 2010. Abundance estimates were generated using N-mixture models and compared to the known number of marked deer in the population. We compared abundance estimates generated from a decreasing number of survey days used in analysis and by time periods (DAY, NIGHT, SUNRISE, SUNSET, CREPUSCULAR, ALL TIMES). Accurate abundance estimates were generated using 24 h of data and nighttime only data. Accuracy of abundance estimates increased with increasing number of survey days until day 5, and there was no improvement with additional data. This suggests that, for our system, 5-day camera surveys conducted at night were adequate for abundance estimation and population monitoring. Further, our study demonstrates that camera surveys and N-mixture models may be a highly effective method for estimation and monitoring of ungulate populations.
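The N-mixture likelihood underlying such estimates (a Royle-style binomial mixture: latent site abundance N ~ Poisson(λ), repeated counts ~ Binomial(N, p)) can be sketched generically. This is an illustration of the model class, not the authors' exact fitted model, and it works in log space to avoid overflow when marginalizing over N.

```python
import math

def log_poisson(n, lam):
    return -lam + n * math.log(lam) - math.lgamma(n + 1)

def log_binom(c, n, p):
    return (math.lgamma(n + 1) - math.lgamma(c + 1) - math.lgamma(n - c + 1)
            + c * math.log(p) + (n - c) * math.log(1 - p))

def nmixture_loglik(lam, p, counts, n_max=200):
    """counts[i] = list of repeated survey counts at site i.
    Marginalizes the latent abundance N at each site up to n_max."""
    total = 0.0
    for site in counts:
        # N can be no smaller than the largest count observed at the site
        terms = [log_poisson(n, lam) + sum(log_binom(c, n, p) for c in site)
                 for n in range(max(site), n_max + 1)]
        m = max(terms)  # log-sum-exp for numerical stability
        total += m + math.log(sum(math.exp(t - m) for t in terms))
    return total
```

Maximizing this over (λ, p) yields abundance estimates that explicitly account for incomplete detection, which is the property the study evaluates against the known marked population.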
Cai, Xi; Han, Guang; Song, Xin; Wang, Jinkuan
2017-11-01
Single-camera-based gait monitoring is unobtrusive, inexpensive, and easy to use for monitoring the daily gait of seniors in their homes. However, most studies require subjects to walk perpendicularly to the camera's optical axis or along specified routes, which limits application in elderly home monitoring. To build unconstrained monitoring environments, we propose a method to measure the step length symmetry ratio (a useful gait parameter representing gait symmetry, with no significant relationship to age) from unconstrained straight walking using a single camera, without strict restrictions on walking directions or routes. According to projective geometry theory, we first develop a calculation formula of the step length ratio for the case of unconstrained straight-line walking. Then, to adapt to general cases, we propose to modify noncollinear footprints, and accordingly provide a general procedure for step length ratio extraction from unconstrained straight walking. Our method achieves a mean absolute percentage error (MAPE) of 1.9547% for 15 subjects' normal and abnormal side-view gaits, and also obtains satisfactory MAPEs for non-side-view gaits (2.4026% for 45°-view gaits and 3.9721% for 30°-view gaits). The performance is much better than that of a well-established monocular gait measurement system suitable only for side-view gaits, which has a MAPE of 3.5538%. Independently of walking directions, our method can accurately estimate step length ratios from unconstrained straight walking. This demonstrates that our method is applicable to elders' daily gait monitoring, providing valuable information for elderly health care such as abnormal gait recognition and fall risk assessment.
Towards fish-eye camera based in-home activity assessment.
Bas, Erhan; Erdogmus, Deniz; Ozertem, Umut; Pavel, Misha
2008-01-01
Indoors localization, activity classification, and behavioral modeling are increasingly important for surveillance applications including independent living and remote health monitoring. In this paper, we study the suitability of fish-eye cameras (high-resolution CCD sensors with very-wide-angle lenses) for the purpose of monitoring people in indoors environments. The results indicate that these sensors are very useful for automatic activity monitoring and people tracking. We identify practical and mathematical problems related to information extraction from these video sequences and identify future directions to solve these issues.
Laparoscopic female sterilisation by a single port through monitor--a better alternative.
Sewta, Rajender Singh
2011-04-01
Female sterilisation by the tubal occlusion method using a laparocator is the most widely used and accepted technique among all family planning measures worldwide. Following the development of monitor-guided laparoscopic surgery across surgical specialities, laparoscopic female sterilisation under monitor control has conventionally been performed through two ports: one for the laparoscope and a second for the ring applicator. The technique has now been modified to use a single port with a monitor, by fitting a camera on the eyepiece of the laparocator (the same laparocator that has long been used in camps in India without a monitor). In this study, over a period of about 2 years, a total of 2011 cases were operated upon. I used a camera and monitor through a single port by laparocator both to visualise the fallopian tubes and to apply rings to them. The results were excellent, making this a better alternative to conventional laparoscopic sterilisation and to the double-puncture technique with a camera, which leaves two scars and requires an extra assistant. There were no failures, and strain on the surgeon's eyes was minimal. The single-port method is easier, safe, equally effective, and better accepted.
Weather and atmosphere observation with the ATOM all-sky camera
NASA Astrophysics Data System (ADS)
Jankowsky, Felix; Wagner, Stefan
2015-03-01
The Automatic Telescope for Optical Monitoring (ATOM) for H.E.S.S. is a 75 cm optical telescope which operates fully automatically. As there is no observer present during observations, an auxiliary all-sky camera serves as the weather monitoring system. This device takes an image of the whole sky every three minutes. The gathered data then undergo live analysis: astrometric comparison with a theoretical night-sky model interprets the absence of expected stars as cloud coverage. The sky monitor also serves as a tool for meteorological analysis of the observation site of the upcoming Cherenkov Telescope Array. This overview covers the design and benefits of the all-sky camera and additionally introduces current efforts to integrate the device into the atmosphere analysis programme of H.E.S.S.
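The absence-of-stars test can be sketched as a nearest-detection match of catalogue positions against detected stars. This is an illustrative sketch under simplifying assumptions (flat pixel coordinates, a fixed matching tolerance), not the ATOM pipeline itself.

```python
def cloudiness(expected, detected, tol=2.0):
    # expected / detected: lists of (x, y) star positions in the all-sky
    # image; a catalogue star with no detection within `tol` pixels is
    # counted as obscured, so the return value estimates cloud coverage
    missed = 0
    for ex, ey in expected:
        if not any((ex - dx) ** 2 + (ey - dy) ** 2 <= tol ** 2
                   for dx, dy in detected):
            missed += 1
    return missed / len(expected)
```

Evaluating this per sky region rather than globally would give a cloud map instead of a single coverage fraction.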
Studying the Variability of Bright Stars with the CONCAM Sky Monitoring Network
NASA Astrophysics Data System (ADS)
Pereira, W. E.; Nemiroff, R. J.; Rafert, J. B.; Perez-Ramirez, D.
2001-12-01
CONCAMs have now been deployed at some of the world's major observatories, including KPNO in Arizona, Mauna Kea in Hawaii, and Wise Observatory in Israel. Data from these mobile, inexpensive and continuous sky cameras, each consisting of a fish-eye lens mated to a CCD camera and run by a laptop, have been ever-increasing. Initial efforts to carry out photometric analysis of CONCAM FITS images have now been fortified by a more automated analysis technique. Results of such analyses, in particular the variability of several bright stars, are presented, as well as the use of these cameras as cloud monitors for remote observers.
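Bright-star variability from wide-field images of this kind is typically extracted by differential photometry against nearby comparison stars. A one-function sketch of that standard approach, not necessarily CONCAM's exact pipeline:

```python
import math

def differential_magnitude(target_flux, comp_flux, comp_mag):
    # classical differential photometry: the target magnitude follows from
    # the flux ratio to a comparison star of known magnitude
    return comp_mag - 2.5 * math.log10(target_flux / comp_flux)
```

Because both stars are measured in the same frame, first-order atmospheric extinction cancels, which is what makes the method robust for continuous sky monitors.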
Multi-camera synchronization core implemented on USB3 based FPGA platform
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado
2015-03-01
Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is required to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply for each of the cameras. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency being controlled directly through a PC based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame periods and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of smaller than 3 mm diameter 3D stereo vision equipment in a medical endoscopic context, such as endoscopic surgical robotics or micro-invasive surgery.
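The adaptive supply-voltage regulation can be pictured as a simple proportional control loop on the measured line period. This is a behavioural sketch, not the FPGA implementation; the gain, voltage range, and the linear sensor model used in the test are invented for illustration.

```python
class FrameSyncController:
    """Proportional controller nudging a slave camera's supply voltage so
    its measured line period converges to the master's line period."""

    def __init__(self, target_period, gain=0.001,
                 v_init=1.8, v_min=1.6, v_max=2.0):
        self.target = target_period   # master's line period (e.g. in us)
        self.gain = gain              # volts per unit of period error
        self.voltage = v_init
        self.v_min, self.v_max = v_min, v_max

    def update(self, measured_period):
        # a longer period means the self-timed sensor is running slow,
        # so raise the supply voltage (and vice versa), within safe limits
        error = measured_period - self.target
        self.voltage = min(self.v_max,
                           max(self.v_min, self.voltage + self.gain * error))
        return self.voltage
```

Running one such loop per slave camera, with periods re-measured every frame, drives all sensors to a common frequency; the Master-Slave interface then handles the remaining phase alignment.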
Image synchronization for 3D application using the NanEye sensor
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado
2015-03-01
Based on Awaiba's NanEye CMOS image sensor family and an FPGA platform with USB3 interface, the aim of this paper is to demonstrate a novel technique to perfectly synchronize up to 8 individual self-timed cameras. Minimal form factor self-timed camera modules of 1 mm x 1 mm or smaller do not generally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is required to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply for each of the cameras to synchronize their frame rate and frame phase. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames of multiple cameras, a Master-Slave interface was implemented. A single camera is defined as the Master entity, with its operating frequency being controlled directly through a PC based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame periods and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the realization of smaller than 3 mm diameter 3D stereo vision equipment in a medical endoscopic context, such as endoscopic surgical robotics or micro-invasive surgery.
Design optimisation of a TOF-based collimated camera prototype for online hadrontherapy monitoring
NASA Astrophysics Data System (ADS)
Pinto, M.; Dauvergne, D.; Freud, N.; Krimmer, J.; Letang, J. M.; Ray, C.; Roellinghoff, F.; Testa, E.
2014-12-01
Hadrontherapy is an innovative radiation therapy modality whose key advantage is the target conformality allowed by the physical properties of ion species. However, to fully exploit its potential, online monitoring is required to assess treatment quality, namely with monitoring devices relying on the detection of secondary radiation. Presented herein is a method based on Monte Carlo simulations to optimise a multi-slit collimated camera employing time-of-flight selection of prompt-gamma rays, for use in a clinical scenario. In addition, an analytical tool is developed, based on the Monte Carlo data, to predict the expected precision of a given geometrical configuration. Such a method follows clinical workflow requirements by being simultaneously relatively accurate and fast. Two different camera designs are proposed, considering different endpoints based on the trade-off between camera detection efficiency and spatial resolution, for use in a proton therapy treatment with active dose delivery and assuming a homogeneous target.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1994-01-01
Real-time video presentations are provided in the field of operator-supervised automation and teleoperation, particularly in control stations having movable cameras for optimal viewing of a region of interest in robotics and teleoperations for performing different types of tasks. Movable monitors to match the corresponding camera orientations (pan, tilt, and roll) are provided in order to match the coordinate systems of all the monitors to the operator internal coordinate system. Automated control of the arrangement of cameras and monitors, and of the configuration of system parameters, is provided for optimal viewing and performance of each type of task for each operator since operators have different individual characteristics. The optimal viewing arrangement and system parameter configuration is determined and stored for each operator in performing each of many types of tasks in order to aid the automation of setting up optimal arrangements and configurations for successive tasks in real time. Factors in determining what is optimal include the operator's ability to use hand-controllers for each type of task. Robot joint locations, forces and torques are used, as well as the operator's identity, to identify the current type of task being performed in order to call up a stored optimal viewing arrangement and system parameter configuration.
A semi-automated method of monitoring dam passage of American Eels Anguilla rostrata
Welsh, Stuart A.; Aldinger, Joni L.
2014-01-01
Fish passage facilities at dams have become an important focus of fishery management in riverine systems. Given the personnel and travel costs associated with physical monitoring programs, automated or semi-automated systems are an attractive alternative for monitoring fish passage facilities. We designed and tested a semi-automated system for eel ladder monitoring at Millville Dam on the lower Shenandoah River, West Virginia. A motion-activated eel ladder camera (ELC) photographed each yellow-phase American Eel Anguilla rostrata that passed through the ladder. Digital images (with date and time stamps) of American Eels allowed for total daily counts and measurements of eel TL using photogrammetric methods with digital imaging software. We compared physical counts of American Eels with camera-based counts; TLs obtained with a measuring board were compared with TLs derived from photogrammetric methods. Data from the ELC were consistent with data obtained by physical methods, thus supporting the semi-automated camera system as a viable option for monitoring American Eel passage. Time stamps on digital images allowed for the documentation of eel passage time—data that were not obtainable from physical monitoring efforts. The ELC has application to eel ladder facilities but can also be used to monitor dam passage of other taxa, such as crayfishes, lampreys, and water snakes.
Video auto stitching in multicamera surveillance system
NASA Astrophysics Data System (ADS)
He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang
2012-01-01
This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaics in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm blends the images. Simulation results demonstrate the efficiency of our method.
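The overlap computation via the homography can be sketched in pure Python. This is an illustrative sketch: the 3x3 matrix H would in practice be estimated from the SURF correspondences (e.g. with a RANSAC-based fit), a step omitted here.

```python
def apply_homography(H, x, y):
    # map (x, y) through the 3x3 homography H in homogeneous coordinates,
    # then divide out the scale factor w
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w

def overlap_mask(H, width, height, dst_width, dst_height):
    # source pixels that land inside the destination frame after warping;
    # these are the pixels the blending stage must reconcile
    mask = set()
    for y in range(height):
        for x in range(width):
            u, v = apply_homography(H, x, y)
            if 0 <= u < dst_width and 0 <= v < dst_height:
                mask.add((x, y))
    return mask
```

A per-pixel loop like this is only practical for small frames; a real system would warp whole images and compute the overlap region analytically from the warped corners.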
System selects framing rate for spectrograph camera
NASA Technical Reports Server (NTRS)
1965-01-01
A circuit in a spectrograph monitor reflects zero-order light from the incoming radiation to a photomultiplier, providing an error signal that controls the film advance and drive rate through the camera.
Orr, Tim R.; Hoblitt, Richard P.
2008-01-01
Volcanoes can be difficult to study up close. Because it may be days, weeks, or even years between important events, direct observation is often impractical. In addition, volcanoes are often inaccessible due to their remote location and (or) harsh environmental conditions. An eruption adds another level of complexity to what already may be a difficult and dangerous situation. For these reasons, scientists at the U.S. Geological Survey (USGS) Hawaiian Volcano Observatory (HVO) have, for years, built camera systems to act as surrogate eyes. With the recent advances in digital-camera technology, these eyes are rapidly improving. One type of photographic monitoring involves the use of near-real-time network-enabled cameras installed at permanent sites (Hoblitt and others, in press). Time-lapse camera systems, on the other hand, provide an inexpensive, easily transportable monitoring option that offers more versatility in site location. While time-lapse systems lack near-real-time capability, they provide higher image resolution and can be rapidly deployed in areas where the use of the sophisticated telemetry required by the networked camera systems is not practical. This report describes the latest generation (as of 2008) of time-lapse camera system used by HVO for photograph acquisition in remote and hazardous sites on Kilauea Volcano.
NASA Astrophysics Data System (ADS)
Harrild, M.; Webley, P.; Dehn, J.
2014-12-01
Knowledge and understanding of precursory events and thermal signatures are vital for monitoring volcanogenic processes, as activity can often range from low level lava effusion to large explosive eruptions, easily capable of ejecting ash up to aircraft cruise altitudes. Using ground based remote sensing techniques to monitor and detect this activity is essential, but the required equipment and maintenance are often expensive. Our investigation explores the use of low-light cameras to image volcanic activity in the visible to near infrared (NIR) portion of the electromagnetic spectrum. These cameras are ideal for monitoring as they are cheap, consume little power, are easily replaced and can provide near real-time data. We focus here on the early detection of volcanic activity, using automated scripts that capture streaming online webcam imagery and evaluate image pixel brightness values to determine relative changes and flag increases in activity. The script is written in Python, an open source programming language, to reduce the overall cost to potential consumers and increase the application of these tools across the volcanological community. In addition, by performing laboratory tests to determine the spectral response of these cameras, a direct comparison of collocated low-light and thermal infrared cameras has allowed approximate eruption temperatures and effusion rates to be determined from pixel brightness. The results of a field campaign in June 2013 to Stromboli volcano, Italy, are also presented here. Future field campaigns to Latin America will include collaborations with INSIVUMEH in Guatemala, to apply our techniques to Fuego and Santiaguito volcanoes.
NASA Astrophysics Data System (ADS)
Chen, Chun-Jen; Wu, Wen-Hong; Huang, Kuo-Cheng
2009-08-01
A multi-function lens test instrument is reported in this paper. The system can evaluate the image resolution, image quality, depth of field, image distortion, and light-intensity distribution of the tested lens by changing the test patterns. It consists of the tested lens, a CCD camera, a linear motorized stage, a system fixture, an observer LCD monitor, and a notebook computer that provides the patterns. The LCD monitor displays a series of specified test patterns sent by the notebook; each displayed pattern passes through the tested lens and forms an image on the CCD sensor. The system then evaluates the performance of the tested lens by analyzing the CCD image with specially designed software. The major advantage of this system is that it can complete the whole test quickly and without interruption for part replacement, because the test patterns are displayed on the monitor and controlled by the notebook.
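One resolution-related metric such a tester could compute from an imaged bar pattern is Michelson contrast; the sketch below (with invented intensity profiles) is an illustration of the idea, not the paper's actual software:

```python
# Michelson contrast of a 1-D intensity profile across a bar pattern:
# a sharp lens preserves contrast, a blurred one washes it out.
def michelson_contrast(row):
    """(I_max - I_min) / (I_max + I_min) for an intensity profile."""
    i_max, i_min = max(row), min(row)
    return (i_max - i_min) / (i_max + i_min)

sharp  = [200, 200, 20, 20, 200, 200, 20, 20]    # well-resolved bars
blurry = [130, 120, 100, 90, 130, 120, 100, 90]  # blurred bars
print(michelson_contrast(sharp), michelson_contrast(blurry))
```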
Infrared-enhanced TV for fire detection
NASA Technical Reports Server (NTRS)
Hall, J. R.
1978-01-01
Closed-circuit television is superior to conventional smoke or heat sensors for detecting fires in large open spaces. A single TV camera scans the entire area, whereas many conventional sensors and a maze of interconnecting wiring might be required for the same coverage. The camera is monitored by a person who trips an alarm if a fire is detected, or electronic circuitry can process the camera signal for a fully automatic alarm system.
Cameras Monitor Spacecraft Integrity to Prevent Failures
NASA Technical Reports Server (NTRS)
2014-01-01
The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained while working with NASA into an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.
NASA Astrophysics Data System (ADS)
Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David
2012-06-01
Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications that had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto-tonemapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by making it possible to monitor processes. Example applications of thermography [2] with a thermal camera include monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, and power substations [3][5]. This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.
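As a hedged illustration of the thermographic idea, the sketch below applies a linear counts-to-temperature model with a first-order focal-plane-array temperature correction; the gain, offset, and drift values are invented for illustration, not a real camera's calibration:

```python
# Minimal radiometric sketch: scene temperature from raw microbolometer
# counts, with a first-order compensation for FPA temperature drift.
def counts_to_celsius(counts, fpa_temp_c, gain=0.02, offset=-40.0, drift=0.1):
    """Linear model T_obj = gain*counts + offset, corrected for the
    difference between the FPA temperature and a 25 C reference."""
    return gain * counts + offset + drift * (fpa_temp_c - 25.0)

print(counts_to_celsius(3500, fpa_temp_c=30.0))
```

A real calibration would fit these coefficients per camera against blackbody references, as the paper describes.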
Calibration Method for IATS and Application in Multi-Target Monitoring Using Coded Targets
NASA Astrophysics Data System (ADS)
Zhou, Yueyin; Wagner, Andreas; Wunderlich, Thomas; Wasmeier, Peter
2017-06-01
The technique of Image Assisted Total Stations (IATS) has been studied for over ten years and is composed of two major parts: the calibration procedure, which establishes the relationship between the camera system and the theodolite system, and the automatic detection of targets in the image by various methods of photogrammetry or computer vision. Several calibration methods have been developed, mostly using prototypes with an add-on camera rigidly mounted on the total station. However, these prototypes are not commercially available. This paper proposes a calibration method based on the Leica MS50, which has two built-in cameras, each with a resolution of 2560 × 1920 px: an overview camera and a telescope (on-axis) camera. Our work in this paper is based on the on-axis camera, which uses the 30-times magnification of the telescope. The calibration estimates 7 parameters. We use coded targets, which are common tools in photogrammetry for orientation, to detect different targets in IATS images instead of prisms and traditional ATR functions. We test and verify the efficiency and stability of this monitoring method with multiple targets.
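At the core of any IATS measurement is converting a detected target's pixel offset from the image centre into an angular pointing correction. A minimal small-angle sketch follows; the pixel pitch and effective focal length are placeholders, not the MS50's actual optical parameters:

```python
# Convert a target's pixel offset from the image centre into an angular
# correction (arcseconds), using the small-angle relation through the
# telescope optics. Pixel pitch and focal length are placeholder values.
import math

def pixel_offset_to_angle(dx_px, pixel_pitch_mm=0.002, focal_mm=250.0):
    """Angle subtended by dx_px pixels, in arcseconds."""
    return math.degrees(math.atan(dx_px * pixel_pitch_mm / focal_mm)) * 3600.0

# A 10-pixel offset corresponds to roughly 16.5 arcseconds here:
print(round(pixel_offset_to_angle(10), 1))
```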
Astronaut George Nelson working on Comet Halley Active monitoring program
1986-01-14
61C-05-026 (14 Jan. 1986) --- Astronaut George D. Nelson smiles for a fellow crewman's 35mm camera exposure while participating in the Comet Halley Active Monitoring Program (CHAMP). Camera equipment and a protective shroud used to eliminate all cabin light interference surround the mission specialist. This is the first of three 1986 missions scheduled to monitor the rare visit by the comet. The principal investigators for CHAMP are S. Alan Stern of the Laboratory for Atmospheric and Space Physics at the University of Colorado and Dr. Stephen Mende of Lockheed Palo Alto Research Laboratory.
Using oblique digital photography for alluvial sandbar monitoring and low-cost change detection
Tusso, Robert B.; Buscombe, Daniel D.; Grams, Paul E.
2015-01-01
The maintenance of alluvial sandbars is a longstanding management interest along the Colorado River in Grand Canyon. Resource managers are interested in both the long-term trend in sandbar condition and the short-term response to management actions, such as intentional controlled floods released from Glen Canyon Dam. Long-term monitoring is accomplished at a range of scales, by a combination of annual topographic survey at selected sites, daily collection of images from those sites using novel, autonomously operating, digital camera systems (hereafter referred to as 'remote cameras'), and quadrennial remote sensing of sandbars canyonwide. In this paper, we present results from the remote camera images for daily changes in sandbar topography.
An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates
Hobbs, Michael T.; Brehme, Cheryl S.
2017-01-01
Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.
Low Cost Wireless Network Camera Sensors for Traffic Monitoring
DOT National Transportation Integrated Search
2012-07-01
Many freeways and arterials in major cities in Texas are presently equipped with video detection cameras to collect data and help in traffic/incident management. In this study, carefully controlled experiments determined the throughput and output...
NASA Astrophysics Data System (ADS)
Tamminen, J.; Kujanpää, J.; Ojanen, H.; Saari, H.; Näkki, I.; Tukiainen, S.; Kyrölä, E.
2017-12-01
We present a novel UV camera for sulfur dioxide emission monitoring. The camera is equipped with a piezo-actuated Fabry-Perot interferometer allowing the filter transmission to be tuned to match the differential absorption features of sulfur dioxide in the wavelength region 305-320 nm. The differential absorption structures are exploited to reduce the interfering effects of weakly wavelength-dependent absorbers, such as aerosols and black carbon, present in the exhaust gas. A data processing algorithm based on two air gaps of the filter is presented, allowing collection of a sufficient signal-to-noise ratio for detecting sulfur dioxide in ship plumes even in the designated emission control areas, such as the Baltic Sea, where the sulfur content limit of fuel oil is 0.1%. First field tests performed in Länsisatama harbour, Helsinki, Finland, indicate that sulfur dioxide can be detected in ship plumes. The camera is light-weight and can be mounted on a drone.
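The two-gap principle can be illustrated with Beer-Lambert ratios: comparing on-band and off-band intensities against a plume-free background cancels weakly wavelength-dependent absorbers. The numbers below are illustrative only, not field data or the paper's algorithm:

```python
# Apparent differential absorbance of a plume from two Fabry-Perot
# settings ("on-band" inside an SO2 absorption feature, "off-band"
# outside it), referenced to a plume-free background measurement.
import math

def differential_absorbance(i_on, i_off, bg_on, bg_off):
    """-ln of the plume on/off ratio normalized by the background
    on/off ratio; broadband attenuation cancels in the double ratio."""
    return -math.log((i_on / i_off) / (bg_on / bg_off))

# Plume attenuates the on-band channel more than the off-band channel:
print(round(differential_absorbance(80.0, 100.0, 95.0, 100.0), 3))
```

Converting such an absorbance to an SO2 column density would require a calibration against cells of known concentration.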
NASA Astrophysics Data System (ADS)
Liu, L.; Huang, Zh.; Qiu, Zh.; Li, B.
2018-01-01
A handheld RGB camera was developed to monitor the in vivo distribution of the porphyrin-based photosensitizer (PS) hematoporphyrin monomethyl ether (HMME) in blood vessels during photodynamic therapy (PDT). The focal length, f-number, International Standardization Organization (ISO) sensitivity, and shutter speed of the camera were optimized for solution samples with various HMME concentrations. After the parameter optimization, it was found that the red intensity value of the fluorescence image was linearly related to the fluorescence intensity under the investigated conditions. The RGB camera was then used to monitor the in vivo distribution of HMME in blood vessels in a skin-fold window chamber model. The red intensity value of the recorded RGB fluorescence image was found to be linearly correlated with HMME concentrations in the range 0-24 μM. Significant differences in the red-to-green intensity ratios were observed between the blood vessels and the surrounding tissue.
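The reported linear relation between red-channel intensity and HMME concentration can be recovered by ordinary least squares, as in this sketch; the data are synthetic, not the study's measurements:

```python
# Ordinary least-squares fit of red-channel intensity vs. photosensitizer
# concentration, the kind of linear relation the abstract reports.
def linear_fit(xs, ys):
    """Return slope a and intercept b of the least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

conc = [0, 6, 12, 18, 24]        # concentration, micromolar
red  = [10, 40, 70, 100, 130]    # mean red intensity (synthetic)
print(linear_fit(conc, red))
```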
Automatic lightning detection and photographic system
NASA Technical Reports Server (NTRS)
Wojtasinski, R. J.; Holley, L. D.; Gray, J. L.; Hoover, R. B. (Inventor)
1972-01-01
A system is presented for monitoring and recording lightning strokes within a predetermined area with a camera having an electrically operated shutter with means for advancing the film in the camera after activating the shutter. The system includes an antenna for sensing lightning strikes which, in turn, generates a signal that is fed to an electronic circuit which generates signals for operating the shutter of the camera. Circuitry is provided for preventing activation of the shutter as the film in the camera is being advanced.
IR Camera Report for the 7 Day Production Test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holloway, Michael Andrew
2016-02-22
The following report gives a summary of the IR camera performance results and data for the 7 day production run that occurred from 10 Sep 2015 through 16 Sep 2015. During this production run our goal was to see how well the camera performed its task of monitoring the target window temperature with our improved alignment procedure and emissivity measurements. We also wanted to see if the increased shielding would be effective in protecting the camera from damage and failure.
New Approach for Environmental Monitoring and Plant Observation Using a Light-Field Camera
NASA Astrophysics Data System (ADS)
Schima, Robert; Mollenhauer, Hannes; Grenzdörffer, Görres; Merbach, Ines; Lausch, Angela; Dietrich, Peter; Bumberger, Jan
2015-04-01
The aim of gaining a better understanding of ecosystems and the processes in nature accentuates the need for observing exactly these processes with higher temporal and spatial resolution. In the field of environmental monitoring, an inexpensive and field-applicable imaging technique to derive three-dimensional information about plants and vegetation would represent a decisive contribution to the understanding of the interactions and dynamics of ecosystems. This is particularly true for the monitoring of plant growth and the frequently mentioned lack of morphological information about the plants, e.g. plant height, vegetation canopy, leaf position or leaf arrangement. Therefore, an innovative and inexpensive light-field (plenoptic) camera, the Lytro LF, and a stereo vision system based on two industrial cameras were tested and evaluated as possible measurement tools for the given monitoring purpose. In this instance, the usage of a light-field camera offers the promising opportunity of providing three-dimensional information from one single shot, without any additional requirements during the field measurements, which represents a substantial methodological improvement in the area of environmental research and monitoring. Since the Lytro LF was designed as a consumer camera for everyday use, it supports neither depth or distance estimation nor external triggering by default. Therefore, different technical modifications and a calibration routine were developed during the preliminary study. As a result, the light-field camera proved suitable as a depth and distance measurement tool with a measuring range of approximately one meter. Consequently, this confirms the assumption that a light-field camera holds the potential of being a promising measurement tool for environmental monitoring purposes, especially with regard to the low methodological effort in the field.
Within the framework of the Global Change Experimental Facility Project, funded by the Helmholtz Centre for Environmental Research, and its large-scale field experiments to investigate the influence of climate change on different forms of land use, both techniques were installed and evaluated in a long-term experiment on a pilot-scale maize field in late 2014. Based on this, it was possible to show the growth of the plants over time, in good accordance with the measurements carried out by hand on a weekly basis. In addition, the experiment has shown that the light-field vision approach is applicable to monitoring crop growth under field conditions, although it is limited to close-range applications. Since this work was intended as a proof of concept, further research is recommended, especially with respect to the automation and evaluation of data processing. Altogether, this study is addressed to researchers as elementary groundwork to improve the usage of the introduced light-field imaging technique for the monitoring of plant growth dynamics and the three-dimensional modeling of plants under field conditions.
Monitoring Kilauea Volcano Using Non-Telemetered Time-Lapse Camera Systems
NASA Astrophysics Data System (ADS)
Orr, T. R.; Hoblitt, R. P.
2006-12-01
Systematic visual observations are an essential component of monitoring volcanic activity. At the Hawaiian Volcano Observatory, the development and deployment of a new generation of high-resolution, non-telemetered, time-lapse camera systems provides periodic visual observations in inaccessible and hazardous environments. The camera systems combine a hand-held digital camera, programmable shutter-release, and other off-the-shelf components in a package that is inexpensive, easy to deploy, and ideal for situations in which the probability of equipment loss due to volcanic activity or theft is substantial. The camera systems have proven invaluable in correlating eruptive activity with deformation and seismic data streams. For example, in late 2005 and much of 2006, Pu`u `O`o, the active vent on Kilauea Volcano's East Rift Zone, experienced 10-20-hour cycles of inflation and deflation that correlated with increases in seismic energy release. A time-lapse camera looking into a skylight above the main lava tube about 1 km south of the vent showed an increase in lava level---an indicator of increased lava flux---during periods of deflation, and a decrease in lava level during periods of inflation. A second time-lapse camera, with a broad view of the upper part of the active flow field, allowed us to correlate the same cyclic tilt and seismicity with lava breakouts from the tube. The breakouts were accompanied by rapid uplift and subsidence of shatter rings over the tube. The shatter rings---concentric rings of broken rock---rose and subsided by as much as 6 m in less than an hour during periods of varying flux. Time-lapse imagery also permits improved assessment of volcanic hazards, and is invaluable in illustrating the hazards to the public. In collaboration with Hawaii Volcanoes National Park, camera systems have been used to monitor the growth of lava deltas at the entry point of lava into the ocean to determine the potential for catastrophic collapse.
Uniform electron gases. III. Low-density gases on three-dimensional spheres
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agboola, Davids; Knol, Anneke L.; Gill, Peter M. W., E-mail: peter.gill@anu.edu.au
2015-08-28
By combining variational Monte Carlo (VMC) and complete-basis-set limit Hartree-Fock (HF) calculations, we have obtained near-exact correlation energies for low-density same-spin electrons on a three-dimensional sphere (3-sphere), i.e., the surface of a four-dimensional ball. In the VMC calculations, we compare the efficacies of two types of one-electron basis functions for these strongly correlated systems and analyze the energy convergence with respect to the quality of the Jastrow factor. The HF calculations employ spherical Gaussian functions (SGFs) which are the curved-space analogs of Cartesian Gaussian functions. At low densities, the electrons become relatively localized into Wigner crystals, and the natural SGF centers are found by solving the Thomson problem (i.e., the minimum-energy arrangement of n point charges) on the 3-sphere for various values of n. We have found 11 special values of n whose Thomson sites are equivalent. Three of these are the vertices of four-dimensional Platonic solids — the hyper-tetrahedron (n = 5), the hyper-octahedron (n = 8), and the 24-cell (n = 24) — and a fourth is a highly symmetric structure (n = 13) which has not previously been reported. By calculating the harmonic frequencies of the electrons around their equilibrium positions, we also find the first-order vibrational corrections to the Thomson energy.
NASA Technical Reports Server (NTRS)
Takallu, M. A.; Wong, D. T.; Uenking, M. D.
2002-01-01
An experimental investigation was conducted to study the effectiveness of modern flight displays in general aviation cockpits for mitigating Low Visibility Loss of Control and Controlled Flight Into Terrain accidents. A total of 18 General Aviation (GA) pilots with a private pilot, single-engine land rating and no instrument training beyond private pilot license requirements were recruited to evaluate three different display concepts in a fixed-base flight simulator at the NASA Langley Research Center's General Aviation Work Station. Evaluation pilots were asked to continue flight from Visual Meteorological Conditions (VMC) into Instrument Meteorological Conditions (IMC) while performing a series of 4 basic precision maneuvers. During the experiment, relevant pilot/vehicle performance variables, pilot control inputs and physiological data were recorded. Human factors questionnaires and interviews were administered after each scenario. Qualitative and quantitative data have been analyzed and the results are presented here. Pilot performance deviations from the established target values (errors) were computed and compared with the FAA Practical Test Standards. Results of the quantitative data indicate that evaluation pilots committed substantially fewer errors when using the Synthetic Vision Systems (SVS) displays than when using conventional instruments. Results of the qualitative data indicate that evaluation pilots perceived themselves to have a much higher level of situation awareness while using the SVS display concept.
NASA Astrophysics Data System (ADS)
Mundhenk, Terrell N.; Dhavale, Nitin; Marmol, Salvador; Calleja, Elizabeth; Navalpakkam, Vidhya; Bellman, Kirstie; Landauer, Chris; Arbib, Michael A.; Itti, Laurent
2003-10-01
In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues in a procedural manner when designing and implementing large-scale complex systems. To advance this hypothesis, we discuss a multi-agent surveillance system where 12 agent CPUs, each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate's visual system, which allows the animal to perform a real-time selection of the few most conspicuous locations in visual input. This is accomplished by having each camera agent utilize the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real-time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera viewpoints or create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weight of different feature vectors in other cameras, using mechanisms similar to excitation and suppression that have been documented in electrophysiology, psychophysics and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, allocation of computational time is weighted by the history of each camera.
A camera agent with a history of seeing more salient targets is more likely to obtain computational resources. The system demonstrates the viability of biologically inspired systems in real-time tracking. In future work we plan to implement additional biological mechanisms for cooperative management of both the sensor and processing resources in this system, including top-down biasing for target specificity as well as novelty, and the activity of the tracked object in relation to sensitive features of the environment.
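A drastically simplified, toy version of the centre-surround contrast at the heart of the Itti-Koch saliency model can illustrate the idea (this is not the authors' distributed implementation):

```python
# Toy centre-surround saliency: each pixel's conspicuity is its absolute
# difference from the mean of its 8-neighbourhood.
def saliency(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nbrs = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))
                    if (j, i) != (y, x)]
            out[y][x] = abs(img[y][x] - sum(nbrs) / len(nbrs))
    return out

# A single bright pixel in a dark scene is the most salient location:
img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
sal = saliency(img)
print(max((v, y, x) for y, row in enumerate(sal) for x, v in enumerate(row)))
```

The full model computes such contrasts across colour, orientation, intensity, and motion channels and at multiple scales.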
NASA Astrophysics Data System (ADS)
Hata, Yutaka; Kanazawa, Seigo; Endo, Maki; Tsuchiya, Naoki; Nakajima, Hiroshi
2012-06-01
This paper proposes a heart rate monitoring system that assesses the autonomic nervous system through heart rate variability, using an air pressure sensor, to aid the diagnosis of mental disease. We also propose a human behavior monitoring system that detects human trajectories in the home with an infrared camera. Day and night, the behavior monitoring system detects human movement in the home; at night, the heart rate monitoring system detects the heart rate in bed. The air pressure sensor consists of a rubber tube, a cushion cover and a pressure sensor, and detects the heart rate when placed on the bed. It detects RR-intervals without constraining the subject, so the autonomic nervous system can be assessed; autonomic nervous system analysis can in turn reveal mental disease. Meanwhile, the behavior monitoring system obtains a distance distribution image from the infrared camera, classifies adults, children and other objects from the distance distribution, and records their trajectories. This behavior, i.e., trajectory in the home, corresponds strongly to cognitive disorders. Thus, the total system can detect mental disease and cognitive disorders with sensors that do not contact the human body.
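One standard heart-rate-variability statistic derivable from the sensor's RR-intervals is RMSSD, the root mean square of successive differences; the sketch below uses synthetic values in milliseconds and is not the authors' algorithm:

```python
# RMSSD: root mean square of successive RR-interval differences, a
# common time-domain HRV measure of parasympathetic activity.
import math

def rmssd(rr_ms):
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

print(round(rmssd([800, 810, 790, 805, 800]), 2))
```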
NASA Astrophysics Data System (ADS)
Bauer, Jacob R.; van Beekum, Karlijn; Klaessens, John; Noordmans, Herke Jan; Boer, Christa; Hardeberg, Jon Y.; Verdaasdonk, Rudolf M.
2018-02-01
Non-contact, spatially resolved oxygenation measurements remain an open challenge in the biomedical field and in non-contact patient monitoring. Although point measurements are the clinical standard to this day, resolving regional differences in oxygenation will improve the quality and safety of care. Recent developments in spectral imaging have resulted in spectral filter array (SFA) cameras. These provide the means to acquire spatial spectral video in real time and allow a spatial approach to spectroscopy. In this study, the performance of a 25-channel near-infrared SFA camera was studied to obtain spatial oxygenation maps of hands during an occlusion of the left upper arm in 7 healthy volunteers. For comparison, a clinical oxygenation monitoring system, INVOS, was used as a reference. For the NIR SFA camera, oxygenation curves were derived from 2-3 wavelength bands with custom-made fast analysis software using a basic algorithm. Dynamic oxygenation changes determined with the NIR SFA camera and the INVOS system at different regional locations of the occluded versus non-occluded hands were in good agreement. To increase the signal-to-noise ratio, the algorithm and image acquisition were optimised. The measurements were robust to different illumination conditions with NIR light sources. This study shows that imaging relative oxygenation changes over larger body areas is potentially possible in real time.
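A highly simplified sketch of the two-wavelength principle behind such oxygenation estimates: solving a 2x2 Beer-Lambert system for oxy- and deoxyhaemoglobin from absorbances at two NIR bands. The extinction coefficients below are placeholders, not literature values or the study's algorithm:

```python
# Two-wavelength oximetry sketch: invert a 2x2 Beer-Lambert system
# A = E * [HbO2, Hb] and report the oxygen saturation HbO2/(HbO2+Hb).
def sto2(a1, a2, e=((0.8, 1.6),    # (HbO2, Hb) extinction at band 1
                    (1.2, 0.7))):  # (HbO2, Hb) extinction at band 2
    (e11, e12), (e21, e22) = e
    det = e11 * e22 - e12 * e21
    hbo2 = (a1 * e22 - a2 * e12) / det   # Cramer's rule, first unknown
    hb = (e11 * a2 - e21 * a1) / det     # Cramer's rule, second unknown
    return hbo2 / (hbo2 + hb)

print(round(sto2(1.0, 1.0), 3))
```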
Improving the color fidelity of cameras for advanced television systems
NASA Astrophysics Data System (ADS)
Kollarits, Richard V.; Gibbon, David C.
1992-08-01
In this paper we compare the accuracy of the color information obtained from television cameras using three and five wavelength bands. This comparison is based on real digital camera data. The cameras are treated as colorimeters whose characteristics are not linked to those of the display. The color matrices for both cameras were obtained by identical optimization procedures that minimized the color error. The color error for the five-band camera is 2.5 times smaller than that obtained from the three-band camera. Visual comparison of color matches on a characterized color monitor indicates that the five-band camera is capable of color measurements that produce no significant visual error on the display. Because the outputs from the five-band camera are reduced to the normal three channels conventionally used for display, there need be no increase in signal-handling complexity outside the camera. Likewise, it is possible to construct a five-band camera using only three sensors, as in conventional cameras. The principal drawback of the five-band camera is the reduction in effective camera sensitivity by about three-quarters of an f-stop.
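Reducing the five band outputs to three display channels is a 3x5 matrix multiplication, as in this sketch; the matrix values are invented for illustration, not the paper's optimized matrices:

```python
# Apply a 3x5 colour-correction matrix to five spectral-band outputs,
# yielding conventional R, G, B display channels.
def apply_matrix(m, bands):
    return [sum(mi * b for mi, b in zip(row, bands)) for row in m]

M = [[0.9, 0.2, -0.1, 0.0, 0.0],   # R row (illustrative weights)
     [0.0, 0.3, 0.8, 0.2, -0.3],   # G row
     [0.0, 0.0, -0.1, 0.3, 0.8]]   # B row
print(apply_matrix(M, [1.0, 0.5, 0.2, 0.1, 0.05]))
```

In practice such a matrix would be found by minimizing colorimetric error over a set of test colours, as the paper describes.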
Bater, Christopher W; Coops, Nicholas C; Wulder, Michael A; Hilker, Thomas; Nielsen, Scott E; McDermid, Greg; Stenhouse, Gordon B
2011-09-01
Critical to habitat management is the understanding of not only the location of animal food resources, but also the timing of their availability. Grizzly bear (Ursus arctos) diets, for example, shift seasonally as different vegetation species enter key phenological phases. In this paper, we describe the use of a network of seven ground-based digital camera systems to monitor understorey and overstorey vegetation within species-specific regions of interest. Established across an elevation gradient in western Alberta, Canada, the cameras collected true-colour (RGB) images daily from 13 April 2009 to 27 October 2009. Fourth-order polynomials were fit to an RGB-derived index, which was then compared to field-based observations of phenological phases. Using linear regression to statistically relate the camera and field data, results indicated that 61% (r² = 0.61, df = 1, F = 14.3, p = 0.0043) of the variance observed in the field phenological phase data is captured by the cameras for the start of the growing season and 72% (r² = 0.72, df = 1, F = 23.09, p = 0.0009) of the variance in length of growing season. Based on the linear regression models, the mean absolute differences in residuals between predicted and observed start of growing season and length of growing season were 4 and 6 days, respectively. This work extends upon previous research by demonstrating that specific understorey and overstorey species can be targeted for phenological monitoring in a forested environment, using readily available digital camera technology and RGB-based vegetation indices.
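The RGB-derived index in such camera phenology studies is typically the green chromatic coordinate (GCC); the sketch below computes it and dates the start of season with a simple half-amplitude threshold (the paper fits fourth-order polynomials instead, and the data here are synthetic):

```python
# Green chromatic coordinate (GCC) greenness index and a half-amplitude
# threshold estimate of the start of the growing season.
def gcc(r, g, b):
    """Green chromatic coordinate of a mean region-of-interest colour."""
    return g / (r + g + b)

def start_of_season(days, index):
    """First day the index crosses halfway between its min and max."""
    half = (min(index) + max(index)) / 2
    for d, v in zip(days, index):
        if v >= half:
            return d

days = [100, 110, 120, 130, 140]          # day of year (synthetic)
index = [gcc(90, 80, 70), gcc(85, 95, 70), gcc(70, 120, 60),
         gcc(60, 140, 55), gcc(60, 145, 55)]
print(start_of_season(days, index))
```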
NASA Astrophysics Data System (ADS)
Silva, T. S. F.; Torres, R. S.; Morellato, P.
2017-12-01
Vegetation phenology is a key component of ecosystem function and biogeochemical cycling, and highly susceptible to climatic change. Phenological knowledge in the tropics is limited by lack of monitoring, traditionally done by laborious direct observation. Ground-based digital cameras can automate daily observations, but also offer limited spatial coverage. Imaging by low-cost Unmanned Aerial Systems (UAS) combines the fine resolution of ground-based methods with an unprecedented capability for spatial coverage, but challenges remain in producing colour-consistent multitemporal images. We evaluated the applicability of multitemporal UAS imaging to monitor phenology in tropical altitudinal grasslands and forests, answering: 1) Can very-high-resolution aerial photography from conventional digital cameras be used to reliably monitor vegetative and reproductive phenology? 2) How is UAS monitoring affected by changes in illumination and by sensor physical limitations? We flew imaging missions monthly from Feb-16 to Feb-17, using a UAS equipped with an RGB Canon SX260 camera. Flights were carried out between 10am and 4pm, at 120-150m a.g.l., yielding 5-10cm spatial resolution. To compensate for illumination changes caused by time of day, season and cloud cover, calibration was attempted using reference targets and empirical models, as well as color space transformations. For vegetative phenological monitoring, the multitemporal response was severely affected by changes in illumination conditions, strongly confounding the phenological signal. These variations could not be adequately corrected through calibration due to sensor limitations. For reproductive phenology, the very-high resolution of the acquired imagery allowed discrimination of individual reproductive structures for some species, and their stark colorimetric differences from vegetative structures allowed detection of the reproductive timing in the HSV color space, despite illumination effects.
We conclude that reliable vegetative phenology monitoring may exceed the capabilities of consumer cameras, but reproductive phenology can be successfully monitored for species with conspicuous reproductive structures. Further research is being conducted to improve calibration methods and information extraction through machine learning.
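The HSV-based detection of reproductive timing described above can be sketched as a simple hue/saturation threshold on pixels of a region of interest. This is an illustrative sketch only: the hue band and saturation cutoff below are hypothetical values that would need tuning per species, not parameters from the study.

```python
import colorsys

def reproductive_fraction(rgb_pixels, hue_range=(0.9, 1.0), sat_min=0.4):
    """Fraction of pixels whose HSV hue falls in a (hypothetical) flower-color
    band; hue_range and sat_min are illustrative and species-dependent."""
    hits = 0
    for r, g, b in rgb_pixels:
        h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        if hue_range[0] <= h <= hue_range[1] and s >= sat_min:
            hits += 1
    return hits / len(rgb_pixels)

# toy sample: two reddish "flower" pixels among green canopy pixels
pixels = [(200, 30, 60), (210, 20, 70), (40, 120, 30), (50, 130, 40)]
print(reproductive_fraction(pixels))  # → 0.5
```

Because hue and saturation are largely decoupled from brightness, such a threshold is less sensitive to illumination changes than raw RGB values, which matches the abstract's observation.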
Rapid-cadence optical monitoring for short-period variability of ɛ Aurigae
NASA Astrophysics Data System (ADS)
Billings, Gary
2013-07-01
ɛ Aurigae was observed with CCD cameras and 35 mm SLR camera lenses, at rapid cadence (>1/minute), for long runs (up to 11 hours), on multiple occasions during 2009 - 2011, to monitor for variability of the system at scales of minutes to hours. The lens and camera were changed during the period to improve results, finalizing on a 135 mm focal length Canon f/2 lens (at f/2.8), an ND8 neutral density filter, a Johnson V filter, and an SBIG ST-8XME camera (Kodak KAF-1603ME microlensed chip). Differential photometry was attempted, but because of the large separation between the variable and comparison star (η Aur), noise caused by transient extinction variations was not consistently eliminated. The lowest-noise time series for searching for short-period variability proved to be the extinction-corrected instrumental magnitude of ɛ Aur obtained on "photometric nights", with η Aur used to determine and monitor the extinction coefficient for the night. No flares or short-period variations of ɛ Aur were detected by visual inspection of the light curves from observing runs with noise levels as low as 0.008 magnitudes rms.
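The extinction-corrected instrumental magnitudes described above follow the standard Bouguer approach: fit the comparison star's instrumental magnitude against airmass to obtain the nightly extinction coefficient, then correct the target. A minimal sketch with invented numbers (not the paper's measurements):

```python
def extinction_coefficient(airmasses, inst_mags):
    """Least-squares slope of instrumental magnitude vs. airmass (Bouguer line),
    determined from the comparison star on a photometric night."""
    n = len(airmasses)
    mx = sum(airmasses) / n
    my = sum(inst_mags) / n
    num = sum((x - mx) * (y - my) for x, y in zip(airmasses, inst_mags))
    den = sum((x - mx) ** 2 for x in airmasses)
    return num / den

def corrected_mag(inst_mag, airmass, k):
    """Extinction-corrected (above-atmosphere) instrumental magnitude."""
    return inst_mag - k * airmass

# comparison star observed at three airmasses: mags fade as airmass grows
X = [1.0, 1.5, 2.0]
m = [5.00, 5.10, 5.20]   # implies k = 0.20 mag/airmass
k = extinction_coefficient(X, m)
print(round(k, 3), round(corrected_mag(6.30, 1.5, k), 3))  # → 0.2 6.0
```

This is the per-night procedure the abstract describes: the comparison star sets and monitors k, and the target's light curve is searched after removing the k·X term.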
Active landslide monitoring using remote sensing data, GPS measurements and cameras on board UAV
NASA Astrophysics Data System (ADS)
Nikolakopoulos, Konstantinos G.; Kavoura, Katerina; Depountis, Nikolaos; Argyropoulos, Nikolaos; Koukouvelas, Ioannis; Sabatakakis, Nikolaos
2015-10-01
An active landslide can be monitored using many different methods: classical geotechnical measurements such as inclinometers, topographical survey measurements with total stations or GPS, and photogrammetric techniques using airphotos or high-resolution satellite images. As aerial photo campaigns and the acquisition of very high resolution satellite data are quite expensive, cameras on board UAVs can be an ideal solution. Small UAVs (Unmanned Aerial Vehicles) began their development as expensive toys but have become a very valuable tool in remote sensing monitoring of small areas. The purpose of this work is to demonstrate a cheap but effective solution for active landslide monitoring. We present the first experimental results of the synergistic use of UAV, GPS measurements and remote sensing data. A six-rotor aircraft with a total weight of 6 kg carrying two small cameras has been used. Very accurate digital airphotos, high-accuracy DSMs, DGPS measurements and the data captured from the UAV are combined and the results are presented in the current study.
Image quality analysis of a color LCD as well as a monochrome LCD using a Foveon color CMOS camera
NASA Astrophysics Data System (ADS)
Dallas, William J.; Roehrig, Hans; Krupinski, Elizabeth A.
2007-09-01
We have combined a CMOS color camera with special software to compose a multi-functional image-quality analysis instrument. It functions as a colorimeter as well as measuring modulation transfer functions (MTF) and noise power spectra (NPS). It is presently being expanded to examine fixed-pattern noise and temporal noise. The CMOS camera has 9 μm square pixels and a pixel matrix of 2268 x 1512 x 3. The camera uses a sensor that has co-located pixels for all three primary colors. We have imaged sections of both a color and a monochrome LCD monitor onto the camera sensor with LCD-pixel-size to camera-pixel-size ratios of both 12:1 and 17.6:1. When used as an imaging colorimeter, each camera pixel is calibrated to provide CIE color coordinates and tristimulus values. This capability permits the camera to simultaneously determine chromaticity in different locations on the LCD display. After the color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found from the CS-200. Only the color coordinates of the display's white point were in error. For calculating the MTF, a vertical or horizontal line is displayed on the monitor. The captured image is color-matrix preprocessed, Fourier transformed, then post-processed. For the NPS, a uniform image is displayed on the monitor. Again, the image is pre-processed, transformed and processed. Our measurements show that the horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulations at the Nyquist frequency seem lower for the color LCD than for the monochrome LCD. The spatial noise of the color display in both directions is larger than that of the monochrome display.
Attempts were also made to separate the total noise into spatial and temporal components by subtracting images taken at exactly the same exposure. Temporal noise seems to be significantly lower than spatial noise.
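The MTF measurement described above (display a line, Fourier transform the captured image) reduces, per profile, to the normalized magnitude spectrum of the line spread function. A minimal numpy sketch, with a synthetic Gaussian LSF standing in for the preprocessed camera data:

```python
import numpy as np

def mtf_from_lsf(lsf):
    """MTF as the magnitude of the FFT of a line spread function,
    normalized to 1 at zero spatial frequency."""
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]

# synthetic Gaussian LSF sampled across one displayed line (64 samples)
x = np.arange(-32, 32)
lsf = np.exp(-(x / 4.0) ** 2)
mtf = mtf_from_lsf(lsf)
print(mtf[0], mtf[-1] < mtf[1])  # → 1.0 True
```

A steeper fall-off of this curve toward the Nyquist frequency is exactly the "larger negative slope" criterion used in the abstract to compare horizontal and vertical display performance.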
An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates
2017-01-01
Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing. PMID:28981533
Camera systems in human motion analysis for biomedical applications
NASA Astrophysics Data System (ADS)
Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.
2015-05-01
Human Motion Analysis (HMA) has been one of the major interests among researchers in the fields of computer vision, artificial intelligence and biomedical engineering and sciences. This is due to its wide and promising biomedical applications, namely, bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and biomedical signal and image processing for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera systems used in HMA and their taxonomy, including camera types, camera calibration and camera configuration. The review focuses on camera system considerations for HMA specifically in biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners in selecting a camera system for HMA in biomedical applications.
Nekton Interaction Monitoring System
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-03-15
The software provides a real-time processing system for sonar to detect and track animals, and to extract water column biomass statistics in order to facilitate continuous monitoring of an underwater environment. The Nekton Interaction Monitoring System (NIMS) extracts and archives tracking and backscatter statistics data from a real-time stream of data from a sonar device. NIMS also sends real-time tracking messages over the network that can be used by other systems to generate other metrics or to trigger instruments such as an optical video camera. A web-based user interface provides remote monitoring and control. NIMS currently supports three popular sonar devices: M3 multi-beam sonar (Kongsberg), EK60 split-beam echo-sounder (Simrad) and BlueView acoustic camera (Teledyne).
Selby, R D; Gage, S H; Whalon, M E
2014-04-01
Incorporating camera systems into insect traps potentially benefits insect phenology modeling, nonlethal insect monitoring, and research into the automated identification of trap counts. Cameras originally intended for monitoring mammals were adapted to monitor the entrance to pyramid traps designed to capture the plum curculio, Conotrachelus nenuphar (Herbst) (Coleoptera: Curculionidae). Using released curculios, two new trap designs (v.I and v.II) were field-tested alongside conventional pyramid traps at one site in autumn 2010 and at four sites in autumn 2012. The traps were evaluated on the basis of battery power, ease of maintenance, adaptability, required user skills, cost (including labor), and accuracy of results. The v.II design fully surpassed expectations, except that some trapped curculios were not photographed. In 2012, 13 of the 24 traps recorded every curculio entering the traps during the 18-d study period, and in traps where some curculios were not photographed, over 90% of the omissions could be explained by component failure or external interference with the motion sensor. Significantly more curculios entered the camera traps between 1800 and 0000 hours. When compared with conventional pyramid traps, the v.I traps collected a similar number of curculios. Two observed but not significant trends were that the v.I traps collected twice as many plum curculios as the v.II traps, while the v.II traps collected more than twice as many photos per plum curculio as the v.I traps. The research demonstrates that low-cost, precise monitoring of field insect populations is feasible without requiring extensive technical expertise.
A risk-based coverage model for video surveillance camera control optimization
NASA Astrophysics Data System (ADS)
Zhang, Hongzhou; Du, Zhiguo; Zhao, Xingtao; Li, Peiyue; Li, Dehua
2015-12-01
Visual surveillance systems for law enforcement or police case investigation differ from traditional applications, as they are designed to monitor pedestrians, vehicles or potential accidents. In the present work, visual surveillance risk is defined as the uncertainty of visual information about monitored targets and events, and risk entropy is introduced to model the requirements of police surveillance tasks on the quality and quantity of video information. The proposed coverage model is applied to calculate the preset field-of-view (FoV) positions of PTZ cameras.
An Empirical Evaluation of Language-Tailored PDLs.
1982-05-01
Monitoring the spatial and temporal evolution of slope instability with Digital Image Correlation
NASA Astrophysics Data System (ADS)
Manconi, Andrea; Glueer, Franziska; Loew, Simon
2017-04-01
The identification and monitoring of ground deformation is important for an appropriate analysis and interpretation of unstable slopes. Displacements are usually monitored with in-situ techniques (e.g., extensometers, inclinometers, geodetic leveling, tachymeters and D-GPS), and/or active remote sensing methods (e.g., LiDAR and radar interferometry). In particular situations, however, the choice of the appropriate monitoring system is constrained by site-specific conditions. Slope areas can be very remote and/or affected by rapid surface changes, and are thus hardly accessible, and often unsafe, for field installations. In many cases the use of remote sensing approaches might also be hindered by unsuitable acquisition geometries, poor spatial resolution and revisit times, and/or high costs. The increasing availability of digital imagery acquired from terrestrial photo and video cameras now provides an additional source of data. The latter can be exploited to visually identify changes of the scene occurring over time, but also to quantify the evolution of surface displacements. Image processing analyses, such as Digital Image Correlation (also known as pixel-offset or feature-tracking), have been shown to provide a suitable alternative for detecting and monitoring surface deformation at high spatial and temporal resolutions. However, a number of intrinsic limitations have to be considered when dealing with optical imagery acquisition and processing, including the effects of light conditions, shadowing, and/or meteorological variables. Here we propose an algorithm to automatically select and process images acquired from time-lapse cameras. We aim at maximizing the results obtainable from large datasets of digital images acquired under different light and meteorological conditions, and at retrieving accurate information on the evolution of surface deformation.
We show a successful example of application of our approach in the Swiss Alps, more specifically in the Great Aletsch area, where slope instability was recently reactivated due to the progressive glacier retreat. At this location, time-lapse cameras have been installed during the last two years, ranging from low-cost and low-resolution webcams to more expensive high-resolution reflex cameras. Our results confirm that time-lapse cameras provide quantitative and accurate measurements of surface deformation evolution over space and time, especially in situations when other monitoring instruments fail.
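Digital Image Correlation as used above is essentially a patch-matching search: for each template in the reference image, find the displacement that maximizes normalized cross-correlation in a later image. A brute-force numpy sketch on synthetic data (real pipelines add subpixel refinement and rejection of low-correlation matches):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def track_patch(ref_img, cur_img, top, left, size, max_shift):
    """Exhaustive search for the (dy, dx) shift of one reference patch."""
    tmpl = ref_img[top:top + size, left:left + size]
    best, best_dyx = -2.0, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cand = cur_img[top + dy:top + dy + size, left + dx:left + dx + size]
            score = ncc(tmpl, cand)
            if score > best:
                best, best_dyx = score, (dy, dx)
    return best_dyx

rng = np.random.default_rng(0)
img = rng.random((60, 60))
shifted = np.roll(img, (3, -2), axis=(0, 1))  # scene moved 3 px down, 2 px left
print(track_patch(img, shifted, 20, 20, 15, 5))  # → (3, -2)
```

Applied over a grid of patches and an image time series, the recovered offsets give the surface displacement field the abstract describes.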
Unmanned Aerial Vehicles for Environmental Monitoring with Special Reference to Heat Loss
NASA Astrophysics Data System (ADS)
Anweiler, Stanisław; Piwowarski, Dawid; Ulbrich, Roman
2017-10-01
This paper presents the design and implementation of a device for remote and automatic monitoring of the temperature field of large objects. The project aimed to create a quadcopter flying platform equipped with a thermal imaging camera. The objects of the research were above-ground and underground district heating installations. The results of implementing a low-cost (below 750 EUR) and efficient heat loss monitoring system are presented. The system consists of a small (<2 kg) multirotor platform. To capture thermal images, a FLIR One micro camera with a Raspberry Pi 3 microcomputer was used. Exploitation of UAVs in temperature field monitoring reveals only a fraction of their capabilities. The fast-growing multirotor platform market continues to deliver new solutions and improvements. Their use in monitoring the environment is limited only by the imagination of the user.
Availability Issues in Wireless Visual Sensor Networks
Costa, Daniel G.; Silva, Ivanovitch; Guedes, Luiz Affonso; Vasques, Francisco; Portugal, Paulo
2014-01-01
Wireless visual sensor networks have been considered for a large set of monitoring applications related to surveillance, tracking and multipurpose visual monitoring. When sensors are deployed over a monitored field, permanent faults may happen during the network lifetime, reducing the monitoring quality or rendering parts of the network, or the entire network, unavailable. Unlike scalar sensor networks, camera-enabled sensors collect information following a directional sensing model, which changes the notions of vicinity and redundancy. Moreover, visual source nodes may have different relevance for the applications, according to the monitoring requirements and the cameras' poses. In this paper we discuss the most relevant availability issues related to wireless visual sensor networks, addressing availability evaluation and enhancement. Such discussions are valuable when designing, deploying and managing wireless visual sensor networks, bringing significant contributions to these networks. PMID:24526301
Coherent infrared imaging camera (CIRIC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.
1995-07-01
New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.
Brownian Movement and Avogadro's Number: A Laboratory Experiment.
ERIC Educational Resources Information Center
Kruglak, Haym
1988-01-01
Reports an experimental procedure for studying Einstein's theory of Brownian movement using commercially available latex microspheres and a video camera. Describes how students can monitor sphere motions and determine Avogadro's number. Uses a black and white video camera, microscope, and TV. (ML)
Poland, Michael P.; Dzurisin, Daniel; LaHusen, Richard G.; Major, John J.; Lapcewich, Dennis; Endo, Elliot T.; Gooding, Daniel J.; Schilling, Steve P.; Janda, Christine G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.
2008-01-01
Images from a Web-based camera (Webcam) located 8 km north of Mount St. Helens and a network of remote, telemetered digital cameras were used to observe eruptive activity at the volcano between October 2004 and February 2006. The cameras offered the advantages of low cost, low power, flexibility in deployment, and high spatial and temporal resolution. Images obtained from the cameras provided important insights into several aspects of dome extrusion, including rockfalls, lava extrusion rates, and explosive activity. Images from the remote, telemetered digital cameras were assembled into time-lapse animations of dome extrusion that supported monitoring, research, and outreach efforts. The wide-ranging utility of remote camera imagery should motivate additional work, especially to develop the three-dimensional quantitative capabilities of terrestrial camera networks.
NASA Astrophysics Data System (ADS)
Taya, T.; Kataoka, J.; Kishimoto, A.; Tagawa, L.; Mochizuki, S.; Toshito, T.; Kimura, M.; Nagao, Y.; Kurita, K.; Yamaguchi, M.; Kawachi, N.
2017-07-01
Particle therapy is an advanced cancer therapy that uses a feature known as the Bragg peak, in which particle beams suddenly lose their energy near the end of their range. The Bragg peak enables particle beams to damage tumors effectively. To achieve precise therapy, the demand for accurate and quantitative imaging of the beam irradiation region or dosage during therapy has increased. The most common method of particle range verification is imaging of annihilation gamma rays by positron emission tomography. Not only 511-keV gamma rays but also prompt gamma rays are generated during therapy; therefore, the Compton camera is expected to be used as an on-line monitor for particle therapy, as it can image these gamma rays in real time. Proton therapy, one of the most common particle therapies, uses a proton beam of approximately 200 MeV, which has a range of ~25 cm in water. As gamma rays are emitted along the path of the proton beam, quantitative evaluation of the reconstructed images of diffuse sources becomes crucial, but it is far from being fully developed for Compton camera imaging at present. In this study, we first quantitatively evaluated reconstructed Compton camera images of uniformly distributed diffuse sources, and then confirmed that our Compton camera obtained 3% (1σ) and 5% (1σ) uniformity for line and plane sources, respectively. Based on this quantitative study, we demonstrated on-line gamma imaging during proton irradiation. Through these studies, we show that the Compton camera is suitable for future use as an on-line monitor for particle therapy.
NASA Astrophysics Data System (ADS)
Yabuta, Kenichi; Kitazawa, Hitoshi; Tanaka, Toshihisa
2006-02-01
Recently, security monitoring cameras have been increasing rapidly. However, it is normally difficult to know when and where we are monitored by these cameras and how the recorded images are stored and/or used. Therefore, how to protect privacy in the recorded images is a crucial issue. In this paper, we address this problem and introduce a framework for security monitoring systems that takes privacy protection into account. We state requirements for monitoring systems in this framework and propose a possible implementation that satisfies them. To protect the privacy of recorded objects, they are made invisible by appropriate image processing techniques. Moreover, the original objects are encrypted and watermarked into the image with the "invisible" objects, which is coded by the JPEG standard. Therefore, the image decoded by a normal JPEG viewer includes the objects that are unrecognizable or invisible. We also introduce a so-called "special viewer" in order to decrypt and display the original objects. This special viewer can be used by limited users when necessary, for crime investigation, etc. The special viewer allows us to choose objects to be decoded and displayed. Moreover, in the proposed system, real-time processing can be performed, since no future frame is needed to generate a bitstream.
Monitoring environmental change with color slides
Arthur W. Magill
1989-01-01
Monitoring human impact on outdoor recreation sites and view landscapes is necessary to evaluate influences which may require corrective action and to determine if management is achieving desired goals. An inexpensive method to monitor environmental change is to establish camera points and use repeat color slides. Successful monitoring from slides requires the observer...
Sevrin, Loïc; Noury, Norbert; Abouchi, Nacer; Jumel, Fabrice; Massot, Bertrand; Saraydaryan, Jacques
2015-01-01
An increasing number of systems use indoor positioning for many scenarios such as asset tracking, health care, games, manufacturing, logistics, shopping, and security. Many technologies are available and the use of depth cameras is becoming more and more attractive as this kind of device becomes affordable and easy to handle. This paper contributes to the effort of creating an indoor positioning system based on low cost depth cameras (Kinect). A method is proposed to optimize the calibration of the depth cameras, to describe the multi-camera data fusion and to specify a global positioning projection to maintain the compatibility with outdoor positioning systems. The monitoring of the people trajectories at home is intended for the early detection of a shift in daily activities which highlights disabilities and loss of autonomy. This system is meant to improve homecare health management at home for a better end of life at a sustainable cost for the community.
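The multi-camera data fusion and global positioning projection mentioned above rest on mapping each calibrated depth camera's points into one shared frame. A minimal rigid-transform sketch; the rotation and translation here are illustrative stand-ins for the calibration output, not values from the paper:

```python
import numpy as np

def depth_to_global(p_cam, R, t):
    """Map a 3-D point from one depth camera's frame into the shared indoor
    frame: p_global = R @ p_cam + t, with (R, t) from extrinsic calibration."""
    return R @ np.asarray(p_cam) + t

# illustrative extrinsics: camera rotated 90° about the vertical (y) axis
# and mounted 2 m along the room's x axis
theta = np.pi / 2
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0,           1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([2.0, 0.0, 0.0])
# a point 1 m in front of the lens lands 3 m along the room's x axis
print(depth_to_global([0.0, 0.0, 1.0], R, t))
```

With every Kinect reduced to the same frame, overlapping detections can be merged into a single trajectory, and a further fixed transform can project the indoor frame onto outdoor (geographic) coordinates for compatibility with outdoor positioning systems.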
Ahumada, Jorge A.; Hurtado, Johanna; Lizcano, Diego
2013-01-01
Reducing the loss of biodiversity is key to ensuring the future well-being of the planet. Indicators to measure the state of biodiversity should come from primary data that are collected using consistent field methods across several sites, are longitudinal, and are analyzed using sound statistical methods that correct for observation/detection bias. In this paper we analyze camera trap data collected between 2008 and 2012 at a site in Costa Rica (Volcan Barva transect) as part of an ongoing tropical forest global monitoring network (Tropical Ecology Assessment and Monitoring Network). We estimated occupancy dynamics for 13 species of mammals, using a hierarchical modeling approach. We calculated detection-corrected species richness and the Wildlife Picture Index, a promising new indicator derived from camera trap data that measures changes in biodiversity from the occupancy estimates of individual species. Our results show that 3 out of 13 species showed significant declines in occupancy over 5 years (lowland paca, Central American agouti, nine-banded armadillo). We hypothesize that hunting, competition and/or increased predation for paca and agouti might explain these patterns. Species richness and the Wildlife Picture Index are relatively stable at the site, but small herbivores that are hunted showed a decline in diversity of about 25%. We demonstrate the usefulness of longitudinal camera trap deployments coupled with modern statistical methods and advocate for the use of this approach in monitoring and developing global and national indicators for biodiversity change. PMID:24023898
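One common formulation of the Wildlife Picture Index mentioned above is the geometric mean, across species, of occupancy relative to a baseline year. The sketch below uses invented occupancy values for illustration; the real index is built on the detection-corrected hierarchical estimates described in the abstract:

```python
from math import exp, log

def wpi(occupancy_by_year, baseline=0):
    """WPI per year: geometric mean across species of occupancy relative to
    the baseline year (a common formulation; inputs are occupancy estimates)."""
    n_years = len(next(iter(occupancy_by_year.values())))
    index = []
    for yr in range(n_years):
        ratios = [sp[yr] / sp[baseline] for sp in occupancy_by_year.values()]
        index.append(exp(sum(log(r) for r in ratios) / len(ratios)))
    return index

# invented 3-year occupancy estimates for three species
occ = {"paca":      [0.8, 0.6, 0.4],
       "agouti":    [0.7, 0.6, 0.5],
       "armadillo": [0.5, 0.5, 0.5]}
index = wpi(occ)
print([round(v, 3) for v in index])  # → [1.0, 0.863, 0.709]
```

The index starts at 1.0 by construction and drifts downward as the declining species lose occupancy, which is how declines like those of the paca and agouti would register at the community level.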
NASA Astrophysics Data System (ADS)
Bohlander, J. A.; Ross, R.; Scambos, T.; Haran, T. M.; Bauer, R. J.
2012-12-01
The Automated Meteorology - Ice/Indigenous species - Geophysics Observation System (AMIGOS) consists of a set of measurement instruments and camera(s) controlled by a single-board computer with a simplified Linux operating system and an Iridium satellite modem supporting two-way communication. Primary features of the system relevant to polar operations are low power requirements, daily data uploading, reprogramming, tolerance for low temperatures, and various approaches for automatic resets and recovery from low power or cold shut-down. Instruments include a compact weather station, C/A or dual-frequency GPS, solar flux and reflectivity sensors, sonic snow gages, simplified radio-echo-sounder, and resistance thermometer string in the firn column. In the current state of development, there are two basic designs. One is intended for in situ observations of glacier conditions. The other design supports a high-resolution camera for monitoring biological or geophysical systems from short distances (100 m to 20 km). The stations have been successfully used in several locations for operational support, monitoring rapid ice changes in response to climate change or iceberg drift, and monitoring penguin colony activity. As of June, 2012, there are 9 AMIGOS systems installed, all on the Antarctic continent. The stations are a working prototype for a planned series of upgraded stations, currently termed 'Sentinels'. These stations would carry further instrumentation, communications, and processing capability to investigate ice - ocean interaction from ice tongue, ice shelf, or fjord coastline areas.
Prototype of microbolometer thermal infrared camera for forest fire detection from space
NASA Astrophysics Data System (ADS)
Guerin, Francois; Dantes, Didier; Bouzou, Nathalie; Chorier, Philippe; Bouchardy, Anne-Marie; Rollin, Joël.
2017-11-01
The contribution of the thermal infrared (TIR) camera to the Earth observation FUEGO mission is to help discriminate clouds and smoke, to detect false alarms of forest fires, and to monitor forest fires. Consequently, the camera needs a large dynamic range of detectable radiances. A small volume, low mass and low power are required by the small FUEGO payload. These specifications can be attractive for other similar missions.
NASA Astrophysics Data System (ADS)
Peltoniemi, Mikko; Aurela, Mika; Böttcher, Kristin; Kolari, Pasi; Loehr, John; Karhu, Jouni; Kubin, Eero; Linkosalmi, Maiju; Melih Tanis, Cemal; Nadir Arslan, Ali
2017-04-01
Ecosystems' potential to provide services, e.g. to sequester carbon, is largely driven by the phenological cycle of vegetation. The timing of phenological events is required for understanding and predicting the influence of climate change on ecosystems and for supporting various analyses of ecosystem functioning. We established a network of cameras for automated monitoring of the phenological activity of vegetation in boreal ecosystems of Finland. Cameras were mounted at 14 sites, each site having 1-3 cameras. In this study, we used cameras at 11 of these sites to investigate how well networked cameras detect the phenological development of birches (Betula spp.) along a latitudinal gradient. Birches are interesting focal species for the analyses as they are common throughout Finland. In our images they often appear in small quantities among the dominant species. Here, we tested whether small scattered birch image elements allow reliable extraction of color indices and changes therein. We compared automatically derived phenological dates from these birch image elements to visually determined dates from the same image time series, and to independent observations recorded in the phenological monitoring network of the same region. Automatically extracted season start dates based on the change of the green color fraction in spring corresponded well with the visually interpreted start of season and field-observed budburst dates. During the declining season, the red color fraction turned out to be superior to green-color-based indices in predicting leaf yellowing and fall. The latitudinal gradients derived using automated phenological date extraction corresponded well with gradients based on phenological field observations from the same region. We conclude that already small and scattered birch image elements allow reliable extraction of key phenological dates for birch species.
Devising cameras for species-specific analyses of phenological timing will be useful for explaining variation in time series of satellite-based indices, and it will also benefit models describing ecosystem functioning at the species or plant functional type level. With the contribution of the LIFE+ financial instrument of the European Union (LIFE12 ENV/FI/000409 Monimet, http://monimet.fmi.fi)
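The green and red color fractions used above are the standard chromatic coordinates computed from mean digital numbers over a region of interest. A minimal sketch with invented canopy values (the thresholds and dates in the study come from fitting these indices over a season):

```python
def chromatic_coordinates(r, g, b):
    """Green and red chromatic coordinates (GCC, RCC) from the mean red,
    green and blue digital numbers of a region of interest."""
    total = r + g + b
    return g / total, r / total

# invented mean DNs: spring canopy is green-dominated, autumn canopy reddens
gcc_spring, rcc_spring = chromatic_coordinates(90, 140, 70)
gcc_fall, rcc_fall = chromatic_coordinates(150, 110, 60)
print(round(gcc_spring, 3), round(rcc_fall, 3))  # → 0.467 0.469
```

Tracking GCC through spring marks budburst (start of season), while the rise of RCC in autumn flags yellowing and leaf fall, matching the abstract's finding that the red fraction is the better senescence predictor.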
High spatial resolution infrared camera as ISS external experiment
NASA Astrophysics Data System (ADS)
Eckehard, Lorenz; Frerker, Hap; Fitch, Robert Alan
A high spatial resolution infrared camera as an ISS external experiment for monitoring global climate changes uses ISS internal and external resources (e.g., data storage). The optical experiment will consist of an infrared camera for monitoring global climate changes from the ISS. This technology was evaluated by the German small satellite mission BIRD and further developed in different ESA projects. Compared to BIRD, the presented instrument uses proven advanced sensor technologies (ISS external) and ISS on-board processing and storage capabilities (internal). The instrument will be equipped with a serial interface for TM/TC and several relay commands for the power supply. For data processing and storage, a mass memory is required. Access to actual attitude data is highly desired to produce geo-referenced maps, if possible by on-board processing.
Huveneers, Charlie; Fairweather, Peter G.
2018-01-01
Counting errors can bias assessments of species abundance and richness, which can affect assessments of stock structure, population structure and monitoring programmes. Many methods for studying ecology use fixed viewpoints (e.g. camera traps, underwater video), but little is known about how this biases the data obtained. In the marine realm, most studies using baited underwater video, a common method for monitoring fish and nekton, have previously assessed fishes using only a single bait-facing viewpoint. To investigate the biases stemming from using fixed viewpoints, we added cameras to cover 360° views around the units. We found similar species richness for all observed viewpoints, but the bait-facing viewpoint recorded the highest fish abundance. Sightings of infrequently seen and shy species increased with the additional cameras, and the extra viewpoints allowed the abundance estimates of highly abundant schooling species to be up to 60% higher. We specifically recommend the use of additional cameras for studies focusing on shyer species or those particularly interested in increasing the sensitivity of the method by avoiding saturation in highly abundant species. Studies may also benefit from using additional cameras to focus observation on the downstream viewpoint. PMID:29892386
Salau, J; Haas, J H; Thaller, G; Leisen, M; Junge, W
2016-09-01
Camera-based systems in dairy cattle have been intensively studied in recent years. Unlike this study, previous work presented single-camera systems with a limited range of applications, mostly using 2D cameras. This study presents current steps in the development of a camera system comprising multiple 3D cameras (six Microsoft Kinect cameras) for monitoring purposes in dairy cows. An early prototype was constructed, and alpha versions of software for recording, synchronizing, sorting and segmenting images and for transforming the 3D data into a joint coordinate system have already been implemented. This study introduced the application of two-dimensional wavelet transforms as a method for object recognition and surface analysis. The method was explained in detail, and four differently shaped wavelets were tested with respect to their reconstruction error on Kinect-recorded depth maps from different camera positions. The images' high-frequency parts reconstructed from wavelet decompositions using the Haar and the biorthogonal 1.5 wavelets were statistically analyzed with regard to the effects of image fore- or background and of cows' or persons' surfaces. Furthermore, binary classifiers based on the local high frequencies were implemented to decide whether a pixel belongs to the image foreground and whether it was located on a cow or a person. Classifiers distinguishing between image regions showed high (⩾0.8) values of Area Under the receiver operating characteristic Curve (AUC). The classifications by species showed maximal AUC values of 0.69.
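The role of the high-frequency wavelet subbands in such surface analysis can be illustrated with a single-level 2D Haar decomposition (unnormalized average/difference form). This is a generic sketch, not the authors' software, and the tiny depth maps are invented for illustration:

```python
def haar2d(img):
    """Single-level 2D Haar decomposition of a 2D list with even dimensions.

    Returns (LL, LH, HL, HH) subbands; the high-frequency subbands
    (LH, HL, HH) carry the local detail used for surface analysis.
    """
    def haar_rows(m):
        # Pairwise averages (low-pass) and differences (high-pass) along rows.
        lo, hi = [], []
        for r in m:
            lo.append([(r[2 * i] + r[2 * i + 1]) / 2 for i in range(len(r) // 2)])
            hi.append([(r[2 * i] - r[2 * i + 1]) / 2 for i in range(len(r) // 2)])
        return lo, hi

    def transpose(m):
        return [list(c) for c in zip(*m)]

    lo, hi = haar_rows(img)             # filter along rows
    ll, lh = haar_rows(transpose(lo))   # then along columns
    hl, hh = haar_rows(transpose(hi))
    return transpose(ll), transpose(lh), transpose(hl), transpose(hh)

# A flat depth region has zero high-frequency energy in every detail subband.
flat = [[5.0] * 4 for _ in range(4)]
ll, lh, hl, hh = haar2d(flat)
print(all(v == 0 for row in hh for v in row))  # True: no detail on a flat surface
```

A textured or edged surface, by contrast, puts nonzero energy into the LH/HL/HH subbands, which is what makes local high frequencies usable as classifier features.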
Preliminary Study of UAS Equipped with Thermal Camera for Volcanic Geothermal Monitoring in Taiwan
Chio, Shih-Hong; Lin, Cheng-Horng
2017-01-01
Thermal infrared cameras sense the temperature of the scenes they observe. With the development of UASs (Unmanned Aircraft Systems), thermal infrared cameras can now be carried on a quadcopter UAV (Unmanned Aircraft Vehicle) to appropriately collect high-resolution thermal images for volcanic geothermal monitoring in a local area. Therefore, a quadcopter UAS for acquiring thermal images for volcanic geothermal monitoring has been developed in Taiwan as part of this study to overcome the difficult terrain with highly variable topography and extreme environmental conditions. An XM6 thermal infrared camera was employed in this thermal image collection system. The Trimble BD970 GNSS (Global Navigation Satellite System) OEM (Original Equipment Manufacturer) board was also carried on the quadcopter UAV to gather dual-frequency GNSS observations in order to determine the flying trajectory data by using the Post-Processed Kinematic (PPK) technique; this will be used to establish the position and orientation of collected thermal images with fewer ground control points (GCPs). The digital surface model (DSM) and thermal orthoimages were then produced from the collected thermal images. Tests conducted in the Hsiaoyukeng area of Taiwan’s Yangmingshan National Park show that, in the area surrounded by GCPs, about 37% of the differences between the produced DSM and airborne LIDAR (Light Detection and Ranging) data fall between −1 m and 1 m, and about 66% between −2 m and 2 m. As the accuracy of the thermal orthoimages is about 1.78 m, it is deemed sufficient for volcanic geothermal monitoring. In addition, the thermal orthoimages capture some phenomena more completely than traditional methods for volcanic geothermal monitoring do, and they show that the developed system can be further employed in Taiwan in the future. PMID:28718790
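The accuracy figures quoted above amount to counting what fraction of per-cell DSM-minus-LIDAR height differences fall within given bands. A minimal sketch of that bookkeeping, with invented sample values rather than the study's data:

```python
def fraction_within(diffs, lo, hi):
    """Fraction of DSM-minus-LIDAR height differences (metres) in [lo, hi]."""
    inside = sum(1 for d in diffs if lo <= d <= hi)
    return inside / len(diffs)

# Invented sample of per-cell height differences, for illustration only.
diffs = [-2.5, -1.8, -0.9, -0.4, 0.0, 0.3, 0.8, 1.2, 1.9, 3.1]
print(fraction_within(diffs, -1.0, 1.0))  # 0.5
print(fraction_within(diffs, -2.0, 2.0))  # 0.8
```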
PBF (PER620) north facade. Camera facing south. Small metal shed ...
PBF (PER-620) north facade. Camera facing south. Small metal shed at right is Stack Gas Monitor Building, PER-629. Date: March 2004. INEEL negative no. HD-41-2-4 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
General Aviation Flight Test of Advanced Operations Enabled by Synthetic Vision
NASA Technical Reports Server (NTRS)
Glaab, Louis J.; Hughes, Monica F.; Parrish, Russell V.; Takallu, Mohammad A.
2014-01-01
A flight test was performed to compare the use of three advanced primary flight and navigation display concepts to a baseline, round-dial concept to assess the potential for advanced operations. The displays were evaluated during visual and instrument approach procedures, including an advanced instrument approach resembling a visual airport traffic pattern. Nineteen pilots from three pilot groups, reflecting the diverse piloting skills of the General Aviation pilot population, served as evaluation subjects. The experiment had two thrusts: 1) an examination of the capabilities of low-time (i.e., <400 hours), non-instrument-rated pilots to perform nominal instrument approaches, and 2) an exploration of potential advanced Visual Meteorological Conditions (VMC)-like approaches in Instrument Meteorological Conditions (IMC). Within this context, advanced display concepts are considered to include integrated navigation and primary flight displays with either aircraft attitude flight directors or Highway In The Sky (HITS) guidance, with and without a synthetic depiction of the external visuals (i.e., synthetic vision). Relative to the first thrust, the results indicate that, using an advanced display concept as tested herein, low-time, non-instrument-rated pilots can exhibit flight-technical performance, subjective workload and situation awareness ratings as good as or better than those of high-time Instrument Flight Rules (IFR)-rated pilots using baseline round dials for a nominal IMC approach. For the second thrust, the results indicate that advanced VMC-like approaches in IMC are feasible for all pilot groups tested, but only with the Synthetic Vision System (SVS) advanced display concept.
Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K
2008-01-01
A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
2003-09-04
KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA) (above) threads a camera under the tiles of the orbiter Endeavour, Peggy Ritchie, USA, (behind the stand) and NASA’s Richard Parker (seated) watch the images on a monitor to inspect for corrosion.
2003-09-04
KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA), (above) threads a camera under the tiles of the orbiter Endeavour, NASA’s Richard Parker (below left) and Peggy Ritchie, with USA, (at right) watch the images on a monitor to inspect for corrosion.
2003-09-04
KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA), (above) threads a camera under the tiles of the orbiter Endeavour, Peggy Ritchie, with USA, (behind the stand) and NASA’s Richard Parker watch the images on a monitor to inspect for corrosion.
Camera traps can be heard and seen by animals.
Meek, Paul D; Ballard, Guy-Anthony; Fleming, Peter J S; Schaefer, Michael; Williams, Warwick; Falzon, Greg
2014-01-01
Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps; in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured the ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing ranges (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine whether animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.
NASA Astrophysics Data System (ADS)
Bartolone, Anthony P.; Glaab, Louis J.; Hughes, Monica F.; Parrish, Russell V.
2005-05-01
Synthetic Vision Systems (SVS) displays provide pilots with a continuous view of terrain combined with integrated guidance symbology in an effort to increase situation awareness (SA) and decrease workload during operations in Instrument Meteorological Conditions (IMC). It is hypothesized that SVS displays can replicate the safety and operational flexibility of flight in Visual Meteorological Conditions (VMC), regardless of actual out-the-window (OTW) visibility or time of day. Throughout the course of recent SVS research, significant progress has been made towards evolving SVS displays as well as demonstrating their ability to increase SA compared to conventional avionics in a variety of conditions. While a substantial amount of data has been accumulated demonstrating the capabilities of SVS displays, the degree to which SVS can replicate the safety and operational flexibility of VMC flight in all visibility conditions has not been established. Previous piloted simulations and flight tests have shown that better SA and path precision are achievable with SVS displays without causing an increase in workload; however, none of the previous SVS research attempted to fully capture the significance of SVS displays in terms of their contribution to safety or operational benefits. In order to more fully quantify the relationship of flight operations in IMC with SVS displays to conventional operations conducted in VMC, a fundamental comparison to current-day general aviation (GA) flight instruments was warranted. Such a comparison could begin to establish the extent to which SVS display concepts are capable of maintaining an "equivalent level of safety" with the round dials they could one day replace, for both current and future operations. Such a comparison was the focus of the SVS-ES experiment conducted under the Aviation Safety and Security Program's (AvSSP) GA Element of the SVS Project at NASA Langley Research Center in Hampton, Virginia.
A combination of subjective and objective data measures was used in this preliminary research to quantify the relationship between selected components of safety that are associated with flying an approach. Four information display methods, ranging from a "round dials" baseline through a fully integrated SVS package that includes terrain, pathway-based guidance, and a strategic navigation display, were investigated in this high-fidelity simulation experiment. In addition, a broad spectrum of pilots, representative of the GA population, was employed for testing in an attempt to enable greater application of the results and to determine whether "equivalent levels of safety" are achievable through the incorporation of SVS technology regardless of a pilot's flight experience.
Lapik, I A; Sokol'nikov, A A; Sharafetdinov, Kh Kh; Sentsova, T B; Plotnikova, O A
2014-01-01
The influence of including in the diet a vitamin and mineral complex (VMC) and potassium and magnesium in the form of asparaginate on micronutrient status, body composition and biochemical parameters in patients with diabetes mellitus type 2 (DM2) has been investigated. 120 female patients with DM2 and obesity of I-III degree (mean age 58 +/- 6 years) were included in the study. The patients were divided into two groups: a main group (n = 60) and a control group (n = 60). For 3 weeks patients of both groups received a low-calorie diet (1600 kcal/day). Patients of the main group received the VMC, providing an additional intake of vitamins C and E (100-120% RDA), beta-carotene (40% RDA), nicotinamide (38% RDA), pantothenic acid and biotin (60% RDA), vitamins B12, B2 and folic acid (75-83% RDA), vitamins B1 and B6 (160-300% RDA), zinc (100% RDA) and chromium (400% RDA), and also received magnesium (17.7% RDA) and potassium (9.4% RDA) in the form of asparaginate. Body composition, biochemical parameters and micronutrient status (blood serum levels of vitamins C, D, B6, B12, folate, potassium, calcium, magnesium, zinc and phosphorus) were evaluated in all patients before and after the 3-week course of diet therapy. After the low-calorie diet therapy, average body weight reduction was 4.2 +/- 0.2 kg in the main group and 4.4 +/- 0.1 kg in the control group, without statistically significant differences between groups. A statistically significant decrease of total cholesterol, triglyceride and glucose concentrations in blood serum was registered in both groups. It should be noted that in the control group glycemia decreased by 1.2 +/- 0.1 mmol/l, while the main group showed a decrease of 1.8 +/- 0.1 mmol/l (p < 0.05), to normal values (5.4 +/- 0.1 mmol/l). Initial assessment of vitamin and mineral status revealed that most patients were optimally supplied with vitamins and minerals.
After the diet therapy, a significant increase of vitamin C, 25-hydroxyvitamin D, vitamin B6, folate, vitamin B12, potassium, magnesium, calcium, zinc and phosphorus concentrations in blood serum was observed in patients receiving the VMC, while in the control group a statistically significant decrease of vitamin C, magnesium, zinc and phosphorus concentrations in blood serum was revealed after the treatment. These data indicate the necessity of adding the vitamin-mineral complex to the diet of patients with DM2 and obesity.
50 CFR 216.155 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2010 CFR
2010-10-01
... place 3 autonomous digital video cameras overlooking chosen haul-out sites located varying distances from the missile launch site. Each video camera will be set to record a focal subgroup within the... presence and activity will be conducted and recorded in a field logbook or recorded on digital video for...
Krikalev on the aft flight deck with laptop computers
1998-12-10
S88-E-5107 (12-11-98) --- Sergei Krikalev, mission specialist representing the Russian Space Agency (RSA), surrounded by monitors and computers on the flight deck, holds a large camera lens. The photo was taken with an electronic still camera (ESC) at 09:33:22 GMT, Dec. 11.
Port Needs Study (Vessel Traffic Services Benefits). Volume 2: Appendices. Part 2
1991-08-01
their pilots near Execution Rocks. Pilots for Long Island Sound are available from the Constitution State Pilots Association (Hartford, CT), Northeast...conditions of weather and for dangerous cargoes, and may become a mandatory system in the near future. Recreational craft are asked to monitor VHF-FM...cameras have been installed atop the tower at Yerba Buena Island (near VTC). One of the cameras is a Low Light Level (LLTV) type. These cameras
A hidden view of wildlife conservation: How camera traps aid science, research and management
O'Connell, Allan F.
2015-01-01
Camera traps — remotely activated cameras with infrared sensors — first gained measurable popularity in wildlife conservation in the early 1990s. Today, they’re used for a variety of activities, from species-specific research to broad-scale inventory or monitoring programs that, in some cases, attempt to detect biodiversity across vast landscapes. As this modern tool continues to evolve, it’s worth examining its uses and benefits for wildlife management and conservation.
Visualizing the history of living spaces.
Ivanov, Yuri; Wren, Christopher; Sorokin, Alexander; Kaur, Ishwinder
2007-01-01
The technology available to building designers now makes it possible to monitor buildings on a very large scale. Video cameras and motion sensors are commonplace in practically every office space, and are slowly making their way into living spaces. The application of such technologies, in particular video cameras, while improving security, also violates privacy. On the other hand, motion sensors, while being privacy-conscious, typically do not provide enough information for a human operator to maintain the same degree of awareness about the space that can be achieved by using video cameras. We propose a novel approach in which we use a large number of simple motion sensors and a small set of video cameras to monitor a large office space. In our system we deployed 215 motion sensors and six video cameras to monitor the 3,000-square-meter office space occupied by 80 people for a period of about one year. The main problem in operating such systems is finding a way to present this highly multidimensional data, which includes both spatial and temporal components, to a human operator to allow browsing and searching recorded data in an efficient and intuitive way. In this paper we present our experiences and the solutions that we have developed in the course of our work on the system. We consider this work to be the first step in helping designers and managers of building systems gain access to information about occupants' behavior in the context of an entire building in a way that is only minimally intrusive to the occupants' privacy.
Measuring frequency of one-dimensional vibration with video camera using electronic rolling shutter
NASA Astrophysics Data System (ADS)
Zhao, Yipeng; Liu, Jinyue; Guo, Shijie; Li, Tiejun
2018-04-01
Cameras offer a unique capability of collecting high-density spatial data from a distant scene of interest. They can be employed as remote monitoring or inspection sensors to measure vibrating objects because of their commonplace availability, simplicity, and potentially low cost. A drawback of vibration measurement with cameras is the massive volume of data they generate. To reduce the data collected, a camera with an electronic rolling shutter (ERS) is applied here to measure the frequency of one-dimensional vibrations whose frequency is much higher than the frame rate of the camera. Every row in an image captured by the ERS camera records the vibrating displacement at a different time. The displacements that form the vibration can be extracted by local analysis with sliding windows. The methodology is demonstrated on vibrating structures, a cantilever beam and an air compressor, to verify the validity of the proposed algorithm. Suggestions for applications of this methodology and challenges in real-world implementation are given at the end.
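The core idea, that successive rows of one rolling-shutter frame sample the vibration at the line rate rather than the frame rate, can be sketched as follows. The line rate, row count, and brute-force DFT peak search are illustrative assumptions, not details from the paper:

```python
import math

def dominant_frequency(samples, line_rate_hz):
    """Estimate the dominant frequency (Hz) of a row-wise displacement
    signal via a brute-force discrete Fourier transform over positive bins."""
    n = len(samples)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * line_rate_hz / n

# Each row is read out 0.1 ms after the previous one, so one 500-row image
# samples the vibration at a 10 kHz line rate -- far above a ~30 fps frame rate.
line_rate = 10_000.0
rows = 500
true_freq = 120.0  # Hz
displacement = [math.sin(2 * math.pi * true_freq * i / line_rate) for i in range(rows)]
print(dominant_frequency(displacement, line_rate))  # 120.0
```

In practice the per-row displacement would first have to be extracted from the image (e.g. by the sliding-window local analysis the abstract mentions); the sketch starts from that displacement series.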
An attentive multi-camera system
NASA Astrophysics Data System (ADS)
Napoletano, Paolo; Tisato, Francesco
2014-03-01
Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be revised by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying videos from each camera of the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of a video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that tries to use a model of human visual attention for the dynamic selection of the camera view of a multi-camera system. The proposed method has been evaluated in a given scenario and has demonstrated its effectiveness with respect to other methods and manually generated ground truth. The effectiveness has been evaluated in terms of the number of correct best views generated by the method with respect to the camera views manually chosen by a human operator.
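The selection step can be caricatured as scoring each camera view by a weighted combination of bottom-up saliency and top-down relevance, then picking the argmax. The weights, cue names and scores below are invented for illustration; the paper's actual attention model is considerably richer:

```python
def best_view(bottom_up, top_down, w_bu=0.6, w_td=0.4):
    """Pick the camera whose current frame maximizes a combined attention score.

    bottom_up: per-camera saliency of the current frame (e.g. motion energy)
    top_down:  per-camera task relevance (e.g. proximity to a watched zone)
    Both dicts map camera id -> score in [0, 1]; the weights are illustrative.
    """
    scores = {cam: w_bu * bottom_up[cam] + w_td * top_down[cam] for cam in bottom_up}
    return max(scores, key=scores.get), scores

# Invented scores for a three-camera network.
bu = {"cam1": 0.2, "cam2": 0.9, "cam3": 0.4}
td = {"cam1": 0.8, "cam2": 0.3, "cam3": 0.5}
cam, scores = best_view(bu, td)
print(cam)  # cam2
```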
Lai, Yung-Lien; Sheu, Chuen-Jim; Lu, Yi-Fen
2018-06-01
Although numerous public closed-circuit television (CCTV) initiatives have been implemented at varying levels in Taiwan's cities and counties, systematic evaluations of these crime reduction efforts have been largely overlooked. To address this void, a quasi-experimental evaluation research project was designed to assess the effect of police-monitored CCTV on crime reduction in Taipei City for a period of 54 months, including data for both before and after camera installation dates. A total of 40 viewsheds within a 100-m (328 feet) radius were selected as research sites to observe variations in four types of crime incidents that became known to police during the January 2008 to June 2012 period. While crime incidents occurring in both the target and control sites were reduced in frequency after CCTV installation, results derived from time-series analysis indicated that the monitoring had no significant effect on the reduction of property crime incidents with the sole exception of robbery. With respect to the effects of comparing target and control sites, the average Crime Reduction Quotient (CRQ) was 0.36, suggesting that CCTV has an overall marginal yet noteworthy influence. Viewed broadly, however, the police-installed CCTV system in Taipei City did not appear to be as efficient as one would expect. Conversely, cameras installed in some observation sites proved to be significantly more effective than cameras in other sites. As a recommendation, future researchers should identify how particular micro-level attributes may lead to CCTV cameras working more effectively, thereby optimizing location choices where monitoring will prove to be most productive.
Kern, Christoph; Sutton, Jeff; Elias, Tamar; Lee, Robert Lopaka; Kamibayashi, Kevan P.; Antolik, Loren; Werner, Cynthia A.
2015-01-01
SO2 camera systems allow rapid two-dimensional imaging of sulfur dioxide (SO2) emitted from volcanic vents. Here, we describe the development of an SO2 camera system specifically designed for semi-permanent field installation and continuous use. The integration of innovative but largely “off-the-shelf” components allowed us to assemble a robust and highly customizable instrument capable of continuous, long-term deployment at Kīlauea Volcano's summit Overlook Crater. Recorded imagery is telemetered to the USGS Hawaiian Volcano Observatory (HVO) where a novel automatic retrieval algorithm derives SO2 column densities and emission rates in real-time. Imagery and corresponding emission rates displayed in the HVO operations center and on the internal observatory website provide HVO staff with useful information for assessing the volcano's current activity. The ever-growing archive of continuous imagery and high-resolution emission rates in combination with continuous data from other monitoring techniques provides insight into shallow volcanic processes occurring at the Overlook Crater. An exemplary dataset from September 2013 is discussed in which a variation in the efficiency of shallow circulation and convection, the processes that transport volatile-rich magma to the surface of the summit lava lake, appears to have caused two distinctly different phases of lake activity and degassing. This first successful deployment of an SO2 camera for continuous, real-time volcano monitoring shows how this versatile technique might soon be adapted and applied to monitor SO2 degassing at other volcanoes around the world.
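In the SO2-camera literature, an emission rate is commonly obtained by integrating the retrieved column densities along a transect across the plume and multiplying by the plume transport speed. The sketch below follows that standard bookkeeping with invented transect values; it is not the observatory's retrieval code:

```python
def emission_rate_kg_s(columns_molec_cm2, pixel_width_m, plume_speed_m_s):
    """SO2 emission rate from column densities sampled along a plume transect.

    columns_molec_cm2: SO2 column density per pixel (molecules/cm^2)
    pixel_width_m:     width of one pixel projected onto the plume (m)
    plume_speed_m_s:   plume speed perpendicular to the transect (m/s)
    """
    AVOGADRO = 6.022e23      # molecules/mol
    MOLAR_MASS_SO2 = 0.064   # kg/mol
    CM2_PER_M2 = 1e4         # cm^2 per m^2
    # Integrate column density across the transect (molecules per metre of
    # plume travel), then multiply by speed and convert molecules to mass.
    molec_per_m = sum(columns_molec_cm2) * CM2_PER_M2 * pixel_width_m
    return molec_per_m * plume_speed_m_s * MOLAR_MASS_SO2 / AVOGADRO

# Invented transect: 5 pixels of 1e18 molec/cm^2, 2 m pixels, 5 m/s plume.
rate = emission_rate_kg_s([1e18] * 5, 2.0, 5.0)
print(round(rate, 3))  # 0.053 (kg/s)
```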
On-ground and in-orbit characterisation plan for the PLATO CCD normal cameras
NASA Astrophysics Data System (ADS)
Gow, J. P. D.; Walton, D.; Smith, A.; Hailey, M.; Curry, P.; Kennedy, T.
2017-11-01
PLAnetary Transits and Oscillations (PLATO) is the third European Space Agency (ESA) medium-class mission in ESA's Cosmic Vision programme, due for launch in 2026. PLATO will carry out high-precision, uninterrupted photometric monitoring in the visible band of large samples of bright solar-type stars. The primary mission goal is to detect and characterise terrestrial exoplanets and their systems, with emphasis on planets orbiting in the habitable zone; this will be achieved using light curves to detect planetary transits. PLATO uses a novel multi-instrument concept consisting of 26 small wide-field cameras. Each of the 26 cameras is made up of a telescope optical unit and four Teledyne e2v CCD270s mounted on a focal plane array and connected to a set of Front End Electronics (FEE) which provide CCD control and readout. There are 2 fast cameras with high read-out cadence (2.5 s) for magnitude ~ 4-8 stars, being developed by the German Aerospace Centre, and 24 normal (N) cameras with a cadence of 25 s to monitor stars with a magnitude greater than 8. The N-FEEs are being developed at University College London's Mullard Space Science Laboratory (MSSL) and will be characterised along with the associated CCDs. The CCDs and N-FEEs will undergo rigorous on-ground characterisation, and the performance of the CCDs will continue to be monitored in-orbit. This paper discusses the initial development of the experimental arrangement, test procedures and current status of the N-FEE. The parameters explored will include gain, quantum efficiency, pixel response non-uniformity, dark current and Charge Transfer Inefficiency (CTI). The current in-orbit characterisation plan is also discussed, which will enable the performance of the CCDs and their associated N-FEEs to be monitored during the mission; this will include measurements of CTI, giving an indication of the impact of radiation damage in the CCDs.
Roger Featherstone; Sky Jacobs; Sergio Avila-Villegas; Sandra Doumas
2013-01-01
In September 2011, we initiated a 2-year "camera trap" mammal survey in the Greater Oak Flat Watershed near Superior, Arizona. Our survey area covers a total of 6,475 ha. The area surveyed is primarily a mixing zone of upper Sonoran Desert and interior chaparral, with influences from the Madrean vegetation community. Elevations range from 1150 to 1450 m. Ten cameras...
Camera Control and Geo-Registration for Video Sensor Networks
NASA Astrophysics Data System (ADS)
Davis, James W.
With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
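The mapping from PTZ pointing angles onto a spherical panoramic viewspace can be illustrated with a bare equirectangular projection; this geometric simplification stands in for the calibrated per-camera control model the work constructs, and the resolution values are invented:

```python
def pan_tilt_to_panorama(pan_deg, tilt_deg, width, height):
    """Map a PTZ (pan, tilt) pointing direction to equirectangular panorama
    pixel coordinates. Pan in degrees, wrapped to [0, 360); tilt in
    [-90, 90] with +90 pointing straight up. A real deployment would use
    a calibrated control model rather than this idealized mapping.
    """
    x = (pan_deg % 360.0) / 360.0 * width
    y = (90.0 - tilt_deg) / 180.0 * height
    return int(x), int(y)

# Pointing at the horizon, a quarter turn from the panorama seam:
print(pan_tilt_to_panorama(90.0, 0.0, 3600, 1800))  # (900, 900)
```

Composing this with a panorama-to-orthophoto registration (not shown) is what yields the unified geo-referenced representation the abstract describes.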
NASA Astrophysics Data System (ADS)
Matsumura, T.; Kamiji, I.; Nakagiri, K.; Nanjo, H.; Nomura, T.; Sasao, N.; Shinkawa, T.; Shiomi, K.
2018-03-01
We have developed a beam-profile monitor (BPM) system to align the collimators for the neutral beam-line at the Hadron Experimental Facility of J-PARC. The system is composed of a phosphor screen and a CCD camera coupled to an image intensifier mounted on a remote control X-Y stage. The design and detailed performance studies of the BPM are presented. The monitor has a spatial resolution of better than 0.6 mm and a deviation from linearity of less than 1%. These results indicate that the BPM system meets the requirements to define collimator-edge positions for the beam-line tuning. Confirmation using the neutral beam for the KOTO experiment is also presented.
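A phosphor-screen BPM typically reduces each camera image to a beam position estimate; one common reduction is an intensity-weighted centroid of the background-subtracted profile. A sketch under that assumption, with an invented profile rather than the KOTO system's actual processing:

```python
def beam_centroid_mm(profile, pixel_pitch_mm):
    """Intensity-weighted centroid of a 1-D beam profile from the screen image.

    profile:        background-subtracted intensity per pixel column
    pixel_pitch_mm: physical width of one pixel on the phosphor screen (mm)
    """
    total = sum(profile)
    centroid_px = sum(i * v for i, v in enumerate(profile)) / total
    return centroid_px * pixel_pitch_mm

# Invented symmetric profile peaked at pixel 3; 0.1 mm pixel pitch.
profile = [0, 1, 4, 9, 4, 1, 0]
print(round(beam_centroid_mm(profile, 0.1), 3))  # 0.3 (mm)
```

Scanning such centroids across the screen with the X-Y stage is the kind of measurement from which sub-millimetre spatial resolution and linearity figures can be assessed.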
Field-Sequential Color Converter
NASA Technical Reports Server (NTRS)
Studer, Victor J.
1989-01-01
Electronic conversion circuit enables display of signals from field-sequential color-television camera on standard color video monitor. Designed for incorporation into color-television monitor on Space Shuttle, circuit weighs less, takes up less space, and consumes less power than previous conversion equipment. Incorporates state-of-the-art memory devices, also used in terrestrial stationary or portable closed-circuit television systems.
Motion Sickness When Driving With a Head-Slaved Camera System
2003-02-01
Van Erp, J.B.F., Padmos, P. & Tenkink, E. ... YPR-765 under armour (Report TM-97-A026). Soesterberg, The Netherlands: TNO Human Factors Research Institute.
Van Erp, J.B.F., Van den Dobbelsteen, J.J. & Padmos, P. (1998). Improved camera-monitor system for driving YPR-765 under armour (Report TM-98...). Soesterberg, The Netherlands: TNO Human Factors Research Institute.
Development of a digital camera tree evaluation system
Neil Clark; Daniel L. Schmoldt; Philip A. Araman
2000-01-01
Within the Strategic Plan for Forest Inventory and Monitoring (USDA Forest Service 1998), there is a call to "conduct applied research in the use of [advanced technology] towards the end of increasing the operational efficiency and effectiveness of our program". The digital camera tree evaluation system is part of that research, aimed at decreasing field...
Measurement of soil color: a comparison between smartphone camera and the Munsell color charts
USDA-ARS?s Scientific Manuscript database
Soil color is one of the most valuable soil properties for assessing and monitoring soil health. Here we present the results of tests of a new soil color app for mobile phones. The comparisons include various smartphones cameras under different natural illumination conditions (sunny and cloudy) and ...
Camera trapping estimates of density and survival of fishers (Martes pennanti)
Mark J. Jordan; Reginald H. Barrett; Kathryn L. Purcell
2011-01-01
Developing efficient monitoring strategies for species of conservation concern is critical to ensuring their persistence. We have developed a method using camera traps to estimate density and survival in mesocarnivores and tested it on a population of fishers Martes pennanti in an area of approximately 300 km² of the southern...
Herring, G.; Ackerman, Joshua T.; Takekawa, John Y.; Eagles-Smith, Collin A.; Eadie, J.M.
2011-01-01
We evaluated predation on nests and methods to detect predators using a combination of infrared cameras and plasticine eggs at nests of American avocets (Recurvirostra americana) and black-necked stilts (Himantopus mexicanus) in Don Edwards San Francisco Bay National Wildlife Refuge, San Mateo and Santa Clara counties, California. Each technique indicated that predation was prevalent; 59% of monitored nests were depredated. Most identifiable predation (n = 49) was caused by mammals (71%) and rates of predation were similar on avocets and stilts. Raccoons (Procyon lotor) and striped skunks (Mephitis mephitis) each accounted for 16% of predations, whereas gray foxes (Urocyon cinereoargenteus) and avian predators each accounted for 14%. Mammalian predation was mainly nocturnal (mean time, 0051 h ± 5 h 36 min), whereas most avian predation was in late afternoon (mean time, 1800 h ± 1 h 26 min). Nests with cameras and plasticine eggs were 1.6 times more likely to be predated than nests where only cameras were used in monitoring. Cameras were associated with lower abandonment of nests and provided definitive identification of predators.
Fundamentals of in Situ Digital Camera Methodology for Water Quality Monitoring of Coast and Ocean
Goddijn-Murphy, Lonneke; Dailloux, Damien; White, Martin; Bowers, Dave
2009-01-01
Conventional digital cameras, the Nikon Coolpix885® and the SeaLife ECOshot®, were used as in situ optical instruments for water quality monitoring. Measured response spectra showed that these digital cameras are basically three-band radiometers. The response values in the red, green and blue bands, quantified by RGB values of digital images of the water surface, were comparable to measurements of irradiance levels at red, green and cyan/blue wavelengths of water-leaving light. Different systems were deployed to capture upwelling light from below the surface while eliminating direct surface reflection. Relationships between RGB ratios of water-surface images and water quality parameters were found to be consistent with previous measurements using more traditional narrow-band radiometers. This paper focuses on the method that was used to acquire digital images, derive RGB values and relate measurements to water quality parameters. Field measurements were obtained in Galway Bay, Ireland, and in the Southern Rockall Trough in the North Atlantic, where both yellow substance and chlorophyll concentrations were successfully assessed using the digital camera method. PMID:22346729
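As an illustration of the kind of band-ratio relationship the paper describes, the sketch below fits a log-linear model between the blue/green ratio of image RGB values and chlorophyll concentration. The calibration numbers are hypothetical stand-ins, not the Galway Bay or Rockall Trough data:

```python
import numpy as np

# Hypothetical calibration data: mean RGB values of water-surface image
# patches and co-located chlorophyll samples (mg m^-3). Real values
# would come from field match-ups like those described in the paper.
rgb = np.array([[60, 110, 130],
                [55, 105, 120],
                [70, 100,  90],
                [80,  95,  75],
                [90,  90,  60]], dtype=float)
chl = np.array([0.5, 0.8, 2.0, 4.0, 8.0])

# Use the blue/green band ratio as the optical predictor, as narrow-band
# ocean-colour algorithms do; fit log10(chl) = a + b * log10(B/G).
x = np.log10(rgb[:, 2] / rgb[:, 1])
A = np.vstack([np.ones_like(x), x]).T
(a, b), *_ = np.linalg.lstsq(A, np.log10(chl), rcond=None)

def estimate_chl(r, g, bl):
    """Predict chlorophyll from a patch's mean R, G, B values."""
    return 10.0 ** (a + b * np.log10(bl / g))
```

Greener water (lower blue/green ratio) maps to higher chlorophyll, which is the qualitative behaviour the paper reports for its RGB ratios.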
50 CFR 217.55 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2014 CFR
2014-10-01
... MAMMALS INCIDENTAL TO SPECIFIED ACTIVITIES Taking of Marine Mammals Incidental To Target and Missile... the following monitoring measures: (1) Visual land-based monitoring. (i) Prior to each missile launch... located varying distances from the missile launch site. Each video camera will be set to record a focal...
50 CFR 216.155 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2013 CFR
2013-10-01
... IMPORTING OF MARINE MAMMALS Taking Of Marine Mammals Incidental To Missile Launch Activities from San... monitoring measures: (1) Visual Land-Based Monitoring. (i) Prior to each missile launch, an observer(s) will... from the missile launch site. Each video camera will be set to record a focal subgroup within the...
50 CFR 216.155 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2012 CFR
2012-10-01
... IMPORTING OF MARINE MAMMALS Taking Of Marine Mammals Incidental To Missile Launch Activities from San... monitoring measures: (1) Visual Land-Based Monitoring. (i) Prior to each missile launch, an observer(s) will... from the missile launch site. Each video camera will be set to record a focal subgroup within the...
50 CFR 216.155 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2011 CFR
2011-10-01
... IMPORTING OF MARINE MAMMALS Taking Of Marine Mammals Incidental To Missile Launch Activities from San... monitoring measures: (1) Visual Land-Based Monitoring. (i) Prior to each missile launch, an observer(s) will... from the missile launch site. Each video camera will be set to record a focal subgroup within the...
Cost-effective handling of digital medical images in the telemedicine environment.
Choong, Miew Keen; Logeswaran, Rajasvaran; Bister, Michel
2007-09-01
This paper concentrates on strategies for less costly handling of medical images. Aspects of digitization using conventional digital cameras, lossy compression with good diagnostic quality, and visualization through less costly monitors are discussed. For digitization of film-based media, subjective evaluation of the suitability of digital cameras as an alternative to the digitizer was undertaken. To save on storage, bandwidth and transmission time, the acceptable degree of compression with diagnostically no loss of important data was studied through randomized double-blind tests of the subjective image quality when compression noise was kept lower than the inherent noise. A diagnostic experiment was undertaken to evaluate normal low cost computer monitors as viable viewing displays for clinicians. The results show that conventional digital camera images of X-ray images were diagnostically similar to the expensive digitizer. Lossy compression, when used moderately with the imaging noise to compression noise ratio (ICR) greater than four, can bring about image improvement with better diagnostic quality than the original image. Statistical analysis shows that there is no diagnostic difference between expensive high quality monitors and conventional computer monitors. The results presented show good potential in implementing the proposed strategies to promote widespread cost-effective telemedicine and digital medical environments.
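The ICR > 4 moderation rule can be expressed as a simple check. This is a sketch of the criterion as stated in the abstract, with an assumed RMS definition of compression noise; the authors' exact noise estimators are not given here:

```python
import numpy as np

def imaging_to_compression_noise_ratio(original, compressed, noise_sigma):
    """ICR: ratio of inherent imaging noise to compression-induced noise.

    `noise_sigma` is the standard deviation of the imaging noise,
    estimated e.g. from a flat region of the original image; the
    compression noise is taken here as the RMS difference between the
    original and the compressed image. (A sketch of the criterion
    described in the abstract, not the authors' implementation.)
    """
    diff = original.astype(float) - compressed.astype(float)
    compression_noise = np.sqrt(np.mean(diff ** 2))
    return noise_sigma / compression_noise

def compression_is_diagnostically_safe(original, compressed, noise_sigma):
    # Rule of thumb from the study: keep ICR greater than four.
    return imaging_to_compression_noise_ratio(original, compressed,
                                              noise_sigma) > 4.0
```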
Endoscopic techniques in aesthetic plastic surgery.
McCain, L A; Jones, G
1995-01-01
There has been an explosive interest in endoscopic techniques by plastic surgeons over the past two years. Procedures such as facial rejuvenation, breast augmentation and abdominoplasty are being performed with endoscopic assistance. Endoscopic operations require a complex setup with components such as video camera, light sources, cables and hard instruments. The Hopkins Rod Lens system consists of optical fibers for illumination, an objective lens, an image retrieval system, a series of rods and lenses, and an eyepiece for image collection. Good illumination of the body cavity is essential for endoscopic procedures. Placement of the video camera on the eyepiece of the endoscope gives a clear, brightly illuminated large image on the monitor. The video monitor provides the surgical team with the endoscopic image. It is important to become familiar with the equipment before actually doing cases. Several options exist for staff education. In the operating room the endoscopic cart needs to be positioned to allow a clear unrestricted view of the video monitor by the surgeon and the operating team. Fogging of the endoscope may be prevented during induction by using FREDD (a fog reduction/elimination device) or a warm bath. The camera needs to be white balanced. During the procedure, the nurse monitors the level of dissection and assesses for clogging of the suction.
Theodolite with CCD Camera for Safe Measurement of Laser-Beam Pointing
NASA Technical Reports Server (NTRS)
Crooke, Julie A.
2003-01-01
The simple addition of a charge-coupled-device (CCD) camera to a theodolite makes it safe to measure the pointing direction of a laser beam. The present state of the art requires this to be a custom addition because theodolites are manufactured without CCD cameras as standard or even optional equipment. A theodolite is an alignment telescope equipped with mechanisms to measure azimuth and elevation angles to the sub-arcsecond level. When measuring the angular pointing direction of a Class II laser with a theodolite, one could place a calculated amount of neutral-density (ND) filtering in front of the theodolite's telescope. One could then safely view and measure the laser's boresight through the theodolite's telescope without great risk to one's eyes. This method, while workable for a Class II visible-wavelength laser, must not even be contemplated for a Class IV laser and is not applicable to an infrared (IR) laser. If one chooses insufficient attenuation or forgets to use the filters, then looking at the laser beam through the theodolite could cause instant blindness. The CCD camera is already commercially available: a small, inexpensive, black-and-white CCD circuit-board-level camera. An interface adaptor was designed and fabricated to mount the camera onto the eyepiece of the specific theodolite's viewing telescope. Other equipment needed for operation of the camera comprises power supplies, cables, and a black-and-white television monitor. The picture displayed on the monitor is equivalent to what one would see when looking directly through the theodolite. An additional advantage of an inexpensive black-and-white CCD camera is that it is sensitive to infrared as well as to visible light; hence, one can use the camera coupled to a theodolite to measure the pointing of an infrared as well as a visible laser.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goggin, L; Kilby, W; Noll, M
2015-06-15
Purpose: A technique using a scintillator-mirror-camera system to measure MLC leakage was developed to provide an efficient alternative to film dosimetry while maintaining high spatial resolution. This work describes the technique together with measurement uncertainties. Methods: Leakage measurements were made for the InCise™ MLC using the Logos XRV-2020A device. For each measurement approximately 170 leakage and background images were acquired using optimized camera settings. Average background was subtracted from each leakage frame before filtering the integrated leakage image to replace anomalous pixels. Pixel value to dose conversion was performed using a calibration image. Mean leakage was calculated within an ROI corresponding to the primary beam, and maximum leakage was determined by binning the image into overlapping 1 mm × 1 mm ROIs. 48 measurements were performed using 3 cameras and multiple MLC-linac combinations in varying beam orientations, with each compared to film dosimetry. Optical and environmental influences were also investigated. Results: Measurement time with the XRV-2020A was 8 minutes vs. 50 minutes using radiochromic film, and results were available immediately. Camera radiation exposure degraded measurement accuracy. With a relatively undamaged camera, mean leakage agreed with film measurement to ≤0.02% in 92% of cases and ≤0.03% in 100% (for maximum leakage the values were 88% and 96%) relative to reference open field dose. The estimated camera lifetime over which this agreement is maintained is at least 150 measurements, and can be monitored using reference field exposures. A dependency on camera temperature was identified, and a reduction in sensitivity with distance from image center due to optical distortion was characterized. Conclusion: With periodic monitoring of the degree of camera radiation damage, the XRV-2020A system can be used to measure MLC leakage. This represents a significant time saving compared to the traditional film-based approach, without any substantial reduction in accuracy.
Camera Traps Can Be Heard and Seen by Animals
Meek, Paul D.; Ballard, Guy-Anthony; Fleming, Peter J. S.; Schaefer, Michael; Williams, Warwick; Falzon, Greg
2014-01-01
Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, which in research is often undesirable, so it is important to understand why animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing ranges (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine whether animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals’ hearing and produce illumination that can be seen by many species. PMID:25354356
Possible Habitats in the Venusian Environment? How Can the Venus Payload Contribute?
NASA Astrophysics Data System (ADS)
Muller, C. L.; Schulze-Makuch, D.
2005-12-01
The Venusian conditions are unique in the solar system. Venus has a dense CO2 atmosphere, is volcanically active, and has plenty of energy sources such as light, UV radiation, volcanic activity, and chemical energy from atmospheric disequilibrium conditions. Its surface conditions are sufficiently hot for sterilization, and volcanism injects highly toxic gases which, in the absence of unbound water, can accumulate in the atmosphere. The Venusian surface is constantly regenerated by volcanism, and any possible fossil record from early Venus history, when oceans existed on its surface, is almost certainly destroyed. Its upper atmosphere lies exposed to solar radiation, with only carbon dioxide confirmed to act as an EUV filter. Any possibility of life was considered irrational before extremophile bacteria were discovered in dark, hot, sulphur-rich undersea volcanic vents on Earth. However, some regions of the Venusian clouds might show conditions similar to those at the Earth's surface and could be a habitat for thermophilic microbial life similar to that observed on Earth. A synergy between different instruments of the Venus Express payload, the SPICAV spectrometer and the VMC camera in a first step, and the spectrometers VIRTIS and PFS in a second step, will probe the actual environmental conditions of the cloud region. The SPICAV spectrometer, in particular, has three channels, including two infrared AOTF channels, and could give access to organic signatures in both the UV and the infrared. Given these observations, we will be able to analyze whether the environmental conditions of the cloud layer would make it a possible habitat for extant microbial life. The instruments will help answer questions about the availability of nutrients, water, types of energy sources, atmospheric dynamics, and organic chemistry.
Gain monitoring of telescope array photomultiplier cameras for the first 4 years of operation
NASA Astrophysics Data System (ADS)
Shin, B. K.; Tokuno, H.; Tsunesada, Y.; Abu-Zayyad, T.; Aida, R.; Allen, M.; Anderson, R.; Azuma, R.; Barcikowski, E.; Belz, J. W.; Bergman, D. R.; Blake, S. A.; Cady, R.; Cheon, B. G.; Chiba, J.; Chikawa, M.; Cho, E. J.; Cho, W. R.; Fujii, H.; Fujii, T.; Fukuda, T.; Fukushima, M.; Hanlon, W.; Hayashi, K.; Hayashi, Y.; Hayashida, N.; Hibino, K.; Hiyama, K.; Honda, K.; Iguchi, T.; Ikeda, D.; Ikuta, K.; Inoue, N.; Ishii, T.; Ishimori, R.; Ivanov, D.; Iwamoto, S.; Jui, C. C. H.; Kadota, K.; Kakimoto, F.; Kalashev, O.; Kanbe, T.; Kasahara, K.; Kawai, H.; Kawakami, S.; Kawana, S.; Kido, E.; Kim, H. B.; Kim, H. K.; Kim, J. H.; Kim, J. H.; Kitamoto, K.; Kitamura, S.; Kitamura, Y.; Kobayashi, K.; Kobayashi, Y.; Kondo, Y.; Kuramoto, K.; Kuzmin, V.; Kwon, Y. J.; Lim, S. I.; Machida, S.; Martens, K.; Martineau, J.; Matsuda, T.; Matsuura, T.; Matsuyama, T.; Matthews, J. N.; Myers, I.; Minamino, M.; Miyata, K.; Murano, Y.; Nagasawa, K.; Nagataki, S.; Nakamura, T.; Nam, S. W.; Nonaka, T.; Ogio, S.; Ohnishi, M.; Ohoka, H.; Oki, K.; Oku, D.; Okuda, T.; Oshima, A.; Ozawa, S.; Park, I. H.; Pshirkov, M. S.; Rodriguez, D. C.; Roh, S. Y.; Rubtsov, G.; Ryu, D.; Sagawa, H.; Sakurai, N.; Sampson, A. L.; Scott, L. M.; Shah, P. D.; Shibata, F.; Shibata, T.; Shimodaira, H.; Shin, J. I.; Shirahama, T.; Smith, J. D.; Sokolsky, P.; Sonley, T. J.; Springer, R. W.; Stokes, B. T.; Stratton, S. R.; Stroman, T.; Suzuki, S.; Takahashi, Y.; Takeda, M.; Taketa, A.; Takita, M.; Tameda, Y.; Tanaka, H.; Tanaka, K.; Tanaka, M.; Thomas, S. B.; Thomson, G. B.; Tinyakov, P.; Tkachev, I.; Tomida, T.; Troitsky, S.; Tsutsumi, K.; Tsuyuguchi, Y.; Uchihori, Y.; Udo, S.; Ukai, H.; Vasiloff, G.; Wada, Y.; Wong, T.; Wood, M.; Yamakawa, Y.; Yamane, R.; Yamaoka, H.; Yamazaki, K.; Yang, J.; Yoneda, Y.; Yoshida, S.; Yoshii, H.; Zhou, X.; Zollinger, R.; Zundel, Z.
2014-12-01
The stability of the gain of the photomultiplier (PMT) camera for the Fluorescence Detector (FD) of the Telescope Array experiment was monitored using 241Am-loaded scintillator pulsers (YAPs) and a diffused xenon flasher (TXF) for a selected set of 35 PMT-readout channels. From the monitoring of YAP pulses over four years of FD operation, we found slow monotonic drifts of individual PMT gains at rates ranging from -1.7 to +1.7%/year. The average of the PMT gains over the 35 channels stayed nearly constant, with a rate of change measured at -0.01±0.31(stat)±0.21(sys)%/year. No systematic decrease of the PMT gain caused by the night-sky background was observed. Monitoring by the TXF also tracked the PMT gain drift of the YAP to within 0.88±0.14(stat)%/year.
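The quoted %/year gain drifts are the kind of number a linear trend fit produces. A minimal sketch of such a fit (a generic illustration, not the Telescope Array analysis code) is:

```python
import numpy as np

def gain_drift_rate(times_year, gains):
    """Least-squares linear drift of a PMT gain time series, returned
    in %/year relative to the mean gain.

    times_year: measurement epochs in years; gains: gain values in
    arbitrary units. A generic sketch of the trend fit behind quoted
    %/year drift numbers.
    """
    slope, _intercept = np.polyfit(np.asarray(times_year, float),
                                   np.asarray(gains, float), 1)
    return 100.0 * slope / np.mean(gains)
```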
Displacement and deformation measurement for large structures by camera network
NASA Astrophysics Data System (ADS)
Shang, Yang; Yu, Qifeng; Yang, Zhen; Xu, Zhiqiang; Zhang, Xiaohu
2014-03-01
A displacement and deformation measurement method for large structures based on a series-parallel connection camera network is presented. Taking the dynamic monitoring of a large-scale crane during a lifting operation as an example, such a camera network is designed and the corresponding measurement method is studied. The movement range of the crane body is small, while that of the crane arm is large. The displacement of the crane body, the displacement of the crane arm relative to the body, and the deformation of the arm are measured. Compared with a purely serial or purely parallel camera network, the designed series-parallel network can measure not only the overall movement and displacement of a large structure but also the relative movement and deformation of parts of interest, using a relatively simple optical measurement system.
From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth
2015-08-05
This animation still image shows the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and the Earth, one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).
The History of the CONCAM Project: All Sky Monitors in the Digital Age
NASA Astrophysics Data System (ADS)
Nemiroff, Robert; Shamir, Lior; Pereira, Wellesley
2018-01-01
The CONtinuous CAMera (CONCAM) project, which ran from 2000 to about 2008, consisted of real-time, Internet-connected, fisheye cameras located at major astronomical observatories. At its peak, eleven CONCAMs around the globe monitored most of the night sky, most of the time. Initially designed to search for transients and stellar variability, CONCAMs gained initial notoriety as cloud monitors. As such, CONCAMs made, and their successors continue to make, ground-based astronomy more efficient. The original compact fisheye-observatory-in-a-suitcase design underwent several iterations, starting with CONCAM0 and ending with the last version, dubbed CONCAM3. Although the CONCAM project itself concluded after centralized funding diminished, today more locally operated, commercially designed, CONCAM-like devices operate than ever before. It has even been shown that modern smartphones can operate in a CONCAM-like mode. It is speculated that reinstating better global coordination of current wide-angle sky monitors could lead to better variability monitoring of the brightest stars and transients.
Exploring PST-TBPM in Monitoring Bridge Dynamic Deflection in Vibration
NASA Astrophysics Data System (ADS)
Zhang, Guojian; Liu, Shengzhen; Zhao, Tonglong; Yu, Chengxin
2018-01-01
This study adopts digital photography to monitor bridge dynamic deflection in vibration. The digital photography used in this study is based on PST-TBPM (photographing scale transformation-time baseline parallax method). Firstly, a digital camera is used to photograph the stationary bridge to obtain a zero (reference) image. Then, the digital camera photographs the vibrating bridge every three seconds to obtain the successive images. Based on the reference system, PST-TBPM is used to calculate, from these images, the bridge dynamic deflection in vibration. Results show that the average measurement accuracies are 0.615 pixels and 0.79 pixels in the X and Z directions, and the maximal deflection of the bridge is 7.14 pixels. PST-TBPM remains valid when the photographing direction is not perpendicular to the bridge. Digital photography as used in this study can assess bridge health by monitoring the bridge dynamic deflection in vibration, and the deformation trend curves depicted over time can also warn of possible dangers.
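A minimal sketch of turning pixel offsets into deflection, with a stable reference point used to cancel camera motion between frames (the time-baseline-parallax idea). The function name, the simple per-axis scale model, and the reference-point correction are assumptions for illustration, not the authors' PST-TBPM formulation:

```python
import numpy as np

def deflection_mm(target_px, target_px0, ref_px, ref_px0, mm_per_px):
    """Deflection of a bridge target estimated from image coordinates.

    target_px / target_px0: (x, z) pixel position of the target in the
    current image and in the zero (reference) image.
    ref_px / ref_px0: a stable reference point, used to subtract any
    apparent motion caused by the camera itself between frames.
    mm_per_px: photographing scale converting pixels to millimetres.
    A simplified sketch, not the published PST-TBPM equations.
    """
    raw = np.asarray(target_px, float) - np.asarray(target_px0, float)
    drift = np.asarray(ref_px, float) - np.asarray(ref_px0, float)
    return (raw - drift) * mm_per_px
```

With a sequence of images taken every few seconds, applying this per frame yields the deflection trend curve over time.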
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Morookian, John M.; Monacos, Steve P.; Lam, Raymond K.; Lebaw, C.; Bond, A.
2004-04-01
Eyetracking is one of the latest technologies that has shown potential in several areas, including human-computer interaction for people with and without disabilities, and noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals. Current non-invasive eyetracking methods achieve a 30 Hz rate with possibly low accuracy in gaze estimation, which is insufficient for many applications. We propose a new non-invasive visual eyetracking system that is capable of operating at speeds as high as 6-12 kHz. A new CCD video camera and hardware architecture is used, and a novel fast image processing algorithm leverages specific features of the input CCD camera to yield a real-time eyetracking system. A field-programmable gate array (FPGA) is used to control the CCD camera and execute the image processing operations. Initial results show the excellent performance of our system under severe head motion and low-contrast conditions.
Traffic intensity monitoring using multiple object detection with traffic surveillance cameras
NASA Astrophysics Data System (ADS)
Hamdan, H. G. Muhammad; Khalifah, O. O.
2017-11-01
Object detection and tracking is a field of research with many applications, given the increasing number of cameras on the streets and the falling cost of Internet of Things (IoT) devices. In this paper, a traffic intensity monitoring system based on the macroscopic urban traffic model is proposed, using computer vision as its source. The input is extracted from a traffic surveillance camera feeding a neural-network classifier that identifies and differentiates vehicle types. The neural network is trained with positive and negative examples to increase accuracy. The accuracy of the program is compared against related work, and trends in traffic intensity on a road are also calculated. Lastly, limitations and future work are discussed.
NASA Astrophysics Data System (ADS)
Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu
To measure the quantitative surface color information of agricultural products together with the ambient information during cultivation, a color calibration method for digital camera images and a remote monitoring system of color imaging using the Web were developed. Single-lens reflex and web digital cameras were used for the image acquisitions. The tomato images through the post-ripening process were taken by the digital camera both in a standard image-acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with the standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of the sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of the tomato on the tree in a greenhouse was remotely monitored during maturation using the digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using the color parameter calculated from the obtained and calibrated color images, along with the ambient atmospheric record. This study is an important step in developing surface color analysis for the simple and rapid evaluation of crop vigor in the field and in constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
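A common way to realize chart-based color calibration like that described above is to fit an affine correction matrix from the chart's observed RGB values to its reference values. The paper does not specify its exact calibration model, so this is an illustrative sketch under that assumption:

```python
import numpy as np

def fit_color_correction(observed, reference):
    """Fit an affine 3x4 colour-correction matrix M so that
    reference ~= M @ [R, G, B, 1] for each colour-chart patch.

    observed, reference: (N, 3) arrays of RGB values for the N patches
    of the standard chart placed beside the object. A generic
    chart-based calibration sketch, not the paper's stated model.
    """
    obs = np.hstack([np.asarray(observed, float),
                     np.ones((len(observed), 1))])          # (N, 4)
    M, *_ = np.linalg.lstsq(obs, np.asarray(reference, float),
                            rcond=None)
    return M.T                                              # (3, 4)

def correct(M, rgb):
    """Apply the fitted correction to one RGB triple."""
    return M @ np.append(np.asarray(rgb, float), 1.0)
```

Because the chart appears in every image, the matrix can be refit per image, which is how varying sunlight (sunny vs. cloudy, morning vs. evening) can be cancelled out.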
NASA Astrophysics Data System (ADS)
Goldstein, N.; Dressler, R. A.; Richtsmeier, S. S.; McLean, J.; Dao, P. D.; Murray-Krezan, J.; Fulcoly, D. O.
2013-09-01
Recent ground testing of a wide-area camera system and automated star-removal algorithms has demonstrated the potential to detect, quantify, and track deep space objects using small-aperture cameras and on-board processors. The camera system, which was originally developed for a space-based Wide Area Space Surveillance System (WASSS), operates in a fixed-stare mode, continuously monitoring a wide swath of space and differentiating celestial objects from satellites based on differential motion across the field of view. It would have greatest utility in a LEO orbit to provide automated and continuous monitoring of deep space with high refresh rates, with particular emphasis on the GEO belt and GEO transfer space. Continuous monitoring allows a concept of change detection and custody maintenance not possible with existing sensors. The detection approach is equally applicable to Earth-based sensor systems. A distributed system of such sensors, either Earth-based or space-based, could provide automated, persistent night-time monitoring of all of deep space. The continuous monitoring provides a daily record of the light curves of all GEO objects above a certain brightness within the field of view. The daily updates of satellite light curves offer a means to identify specific satellites, to note changes in orientation and operational mode, and to cue other SSA assets for higher resolution queries. The data processing approach may also be applied to larger-aperture, higher-resolution camera systems to extend sensitivity toward dimmer objects. In order to demonstrate the utility of the WASSS system and data processing, a ground-based field test was conducted in October 2012. We report here the results of the observations made at Magdalena Ridge Observatory using the prototype WASSS camera, which has a 4×60° field-of-view, <0.05° resolution, a 2.8 cm2 aperture, and the ability to view within 4° of the sun.
A single camera pointed at the GEO belt provided a continuous night-long record of the intensity and location of more than 50 GEO objects detected within the camera's 60-degree field-of-view, with a detection sensitivity similar to the camera's shot noise limit of Mv=13.7. Performance is anticipated to scale with aperture area, allowing the detection of dimmer objects with larger-aperture cameras. The sensitivity of the system depends on multi-frame averaging and an image processing algorithm that exploits the different angular velocities of celestial objects and SOs. Principal Components Analysis (PCA) is used to filter out all objects moving with the velocity of the celestial frame of reference. The resulting filtered images are projected back into an Earth-centered frame of reference, or into any other relevant frame of reference, and co-added to form a series of images of the GEO objects as a function of time. The PCA approach not only removes the celestial background, but it also removes systematic variations in system calibration, sensor pointing, and atmospheric conditions. The resulting images are shot-noise limited, and can be exploited to automatically identify deep space objects, produce approximate state vectors, and track their locations and intensities as a function of time.
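The PCA filtering step can be sketched with a plain SVD on the registered frame stack: structure common to all frames (the celestial pattern in its own frame of reference, plus fixed calibration artifacts) concentrates in the leading principal components and can be zeroed out. This is a minimal illustration of the idea, not the WASSS pipeline:

```python
import numpy as np

def remove_static_background(frames, n_components=1):
    """Suppress structure shared across all frames by subtracting the
    leading principal components of the frame stack; objects moving
    relative to that frame of reference survive in the residual.

    frames: (T, H, W) stack already registered to the celestial frame.
    A minimal PCA sketch of the approach in the abstract.
    """
    T, H, W = frames.shape
    X = frames.reshape(T, H * W).astype(float)
    Xc = X - X.mean(axis=0)                 # centre each pixel's series
    # SVD of the centred stack; rows of Vt are spatial eigenimages.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    s[:n_components] = 0.0                  # drop the strongest modes
    residual = (U * s) @ Vt
    return residual.reshape(T, H, W)
```

As the abstract notes, this kind of filtering also absorbs slow systematic variations (calibration, pointing, atmosphere), since those too appear as low-rank structure in the stack.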
NASA Technical Reports Server (NTRS)
Gander, Philippa H.; Graeber, R. Curtis; Foushee, H. Clayton; Lauber, John K.; Connell, Linda J.
1994-01-01
Seventy-four pilots were monitored before, during, and after 3- or 4-day commercial short-haul trip patterns. The trips studied averaged 10.6 hr of duty per day with 4.5 hr of flight time and 5.5 flight segments. The mean rest period lasted 12.5 hr and occurred progressively earlier across successive days. On trip nights, subjects took longer to fall asleep, slept less, woke earlier, and reported lighter, poorer sleep with more awakenings than on pretrip nights. During layovers, subjective fatigue and negative affect were higher, and positive affect and activation lower, than during pretrip, in-flight, or posttrip. Pilots consumed more caffeine, alcohol, and snacks on trip days than either pretrip or posttrip. Increases in heart rate over mid-cruise were observed during descent and landing, and were greater for the pilot flying. Heart-rate increases were greater during takeoff and descent under instrument meteorological conditions (IMC) than under visual meteorological conditions (VMC). The following would be expected to reduce fatigue in short-haul operations: regulating duty hours, as well as flight hours; scheduling rest periods to begin at the same time of day, or progressively later, across the days of a trip; and educating pilots about alternatives to alcohol as a means of relaxing before sleep.
Efficient large-scale graph data optimization for intelligent video surveillance
NASA Astrophysics Data System (ADS)
Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming
2017-08-01
Society is rapidly adopting cameras in a wide variety of locations and applications: traffic monitoring, parking-lot surveillance, automobiles, and smart spaces. These cameras produce data every day that must be analyzed effectively. Recent advances in sensor manufacturing, communications, and computing are stimulating the development of new applications that transform traditional vision systems into pervasive smart-camera networks. The analysis of visual cues in multi-camera networks enables a wide range of applications, from smart home and office automation to large-area and traffic surveillance. Dense camera networks, in which most cameras have large overlapping fields of view, are well studied; here we focus on sparse camera networks. A sparse camera network performs large-area surveillance with as few cameras as possible, and most cameras do not overlap each other's field of view. This setting is challenging because of the lack of knowledge of the network topology, the changes in target appearance and motion across different views, and the difficulty of understanding complex events in the network. In this paper we present a comprehensive survey of recent results on topology learning, object appearance modeling, and global activity understanding in sparse camera networks, and discuss some current open research issues.
Aviation spatial orientation in relationship to head position, attitude interpretation, and control.
Smith, D R; Cacioppo, A J; Hinman, G E
1997-06-01
Recently, a visually driven neck reflex was identified as causing head tilt toward the horizon during VMC flight. If this is the case, then pilots orient about a fixed rather than a moving horizon, implying that current attitude instruments inaccurately present spatial information. The purpose of this study was to determine whether the opto-kinetic cervical neck reflex has an effect dependent on passive (autopilot) or active control of the aircraft. Further, findings could help determine whether the opto-kinetic cervical reflex is characteristic of other flight crewmembers. Sixteen military pilots flew two 13-min VMC low-level routes in a large dome flight simulator. Head position in relation to aircraft bank angle was recorded by a head-tracker device. During one low-level route, the pilot had a supervisory role as the autopilot flew the aircraft (passive). The other route was flown manually by the pilot (active). Pilots consistently tilted the head to maintain alignment with the horizon. Similar head tilt angles were found in both the active and passive flight phases; however, head tilt had a faster onset rate in the passive condition. Results indicate the opto-kinetic cervical reflex affects pilots both while actively flying and while in a supervisory role as the autopilot flies. The consistent head tilt angles in both conditions should be considered in attitude indicator, HUD, and HMD designs. Further, the results seem to indicate that non-pilot flight crewmembers are also affected by the opto-kinetic cervical reflex, which should be considered in spatial disorientation and airsickness discussions.
NASA Astrophysics Data System (ADS)
Terzopoulos, Demetri; Qureshi, Faisal Z.
Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.
50 CFR 217.75 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., during, and 2 hours after launch; (2) Ensure a remote camera system will be in place and operating in a..., whenever a new class of rocket is flown from the Kodiak Launch Complex, a real-time sound pressure and... camera system designed to detect pinniped responses to rocket launches for at least the first five...
50 CFR 217.75 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., during, and 2 hours after launch; (2) Ensure a remote camera system will be in place and operating in a..., whenever a new class of rocket is flown from the Kodiak Launch Complex, a real-time sound pressure and... camera system designed to detect pinniped responses to rocket launches for at least the first five...
Big Brother Is Watching: Video Surveillance on Buses
ERIC Educational Resources Information Center
Sloggett, Joel
2009-01-01
Many school districts in North America have adopted policies to permit cameras on their properties and, when needed, on buses used to transport students. With regard to school buses, the camera is typically a tool for gathering information to monitor behavior or to help investigate a complaint about behavior. If a picture is worth a thousand…
ERIC Educational Resources Information Center
Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Lang, Russell
2012-01-01
Background: A camera-based microswitch technology was recently developed to monitor small facial responses of persons with multiple disabilities and allow those responses to control environmental stimulation. This study assessed such a technology with 2 new participants using slight variations of previous responses. Method: The technology involved…
Using Surveillance Camera Systems to Monitor Public Domains: Can Abuse Be Prevented
2006-03-01
relationship with a 16-year-old girl failed. The incident was captured by a New York City Police Department surveillance camera. Although the image...administrators stated that the images recorded were “…nothing more than images of a few bras and panties.”17 The use of CCTV surveillance systems for
NASA Technical Reports Server (NTRS)
Holleman, Elizabeth; Sharp, David; Sheller, Richard; Styron, Jason
2007-01-01
This paper describes the application of a FLIR Systems A40M infrared (IR) digital camera for thermal monitoring of a Liquid Oxygen (LOX) and Ethanol bi-propellant Reaction Control Engine (RCE) during Auxiliary Propulsion System (APS) testing at the National Aeronautics & Space Administration's (NASA) White Sands Test Facility (WSTF) near Las Cruces, New Mexico. Typically, NASA has relied mostly on the use of ThermoCouples (TC) for this type of thermal monitoring due to the variability of constraints required to accurately map rapidly changing temperatures from ambient to glowing hot chamber material. Obtaining accurate real-time temperatures in the IR spectrum is made even more elusive by the changing emissivity of the chamber material as it begins to glow. The parameters evaluated prior to APS testing included: (1) remote operation of the A40M camera using fiber optic Firewire signal sender and receiver units; (2) operation of the camera inside a Pelco explosion proof enclosure with a germanium window; (3) remote analog signal display for real-time monitoring; (4) remote digital data acquisition of the A40M's sensor information using FLIR's ThermaCAM Researcher Pro 2.8 software; and (5) overall reliability of the system. An initial characterization report was prepared after the A40M characterization tests at Marshall Space Flight Center (MSFC) to document controlled heat source comparisons to calibrated TCs. Summary IR digital data recorded from WSTF's APS testing is included within this document along with findings, lessons learned, and recommendations for further usage as a monitoring tool for the development of rocket engines.
Evaluation of a video image detection system : final report.
DOT National Transportation Integrated Search
1994-05-01
A video image detection system (VIDS) is an advanced wide-area traffic monitoring system : that processes input from a video camera. The Autoscope VIDS coupled with an information : management system was selected as the monitoring device because test...
Closed circuit TV system monitors welding operations
NASA Technical Reports Server (NTRS)
Gilman, M.
1967-01-01
TV camera system that has a special vidicon tube with a gradient density filter is used in remote monitoring of TIG welding of stainless steel. The welding operations involve complex assembly welding tools and skates in areas of limited accessibility.
Incidents Prediction in Road Junctions Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Hajji, Tarik; Alami Hassani, Aicha; Ouazzani Jamil, Mohammed
2018-05-01
The implementation of an incident detection system (IDS) is an indispensable operation in the analysis of road traffic. However, the IDS can in no case replace classical monitoring performed by the human eye. The aim of this work is to increase the probability of detecting and predicting incidents in camera-monitored areas, given that these areas are watched by multiple cameras but few supervisors. Our solution is to use Artificial Neural Networks (ANN) to analyze the trajectories of moving objects in captured images. We first propose a model of the trajectories and their characteristics, then build a learning database of valid and invalid trajectories, and finally carry out a comparative study to find the artificial neural network architecture that maximizes the recognition rate of valid and invalid trajectories.
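A minimal sketch of the kind of ANN classifier the paper describes: a one-hidden-layer network trained by gradient descent to separate valid from invalid trajectories. The two per-trajectory features (mean speed, path jerkiness) and the synthetic data are invented for illustration; the paper's actual trajectory model and network architecture are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-trajectory features: mean speed and path jerkiness.
# Valid trajectories are slow and smooth; invalid ones fast and erratic.
n = 200
valid = rng.normal([1.0, 1.0], 0.3, size=(n, 2))
invalid = rng.normal([3.0, 3.0], 0.3, size=(n, 2))
X = np.vstack([valid, invalid])
y = np.r_[np.zeros(n), np.ones(n)]          # 0 = valid, 1 = invalid

# One-hidden-layer MLP trained with full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, p.ravel()

lr = 0.1
for _ in range(500):
    h, p = forward(X)
    g = (p - y)[:, None] / len(X)           # dLoss/dlogit for cross-entropy
    gh = (g @ W2.T) * (1.0 - h**2)          # backprop through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)

_, p = forward(X)
accuracy = float(((p > 0.5) == y).mean())
```

In the paper's setting the features would be extracted from tracked image trajectories rather than sampled from Gaussians, and the comparative study would vary the hidden-layer configuration.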
Rydning, A; Berstad, A; Berstad, T; Hertzenberg, L
1985-04-01
The effect of physiological doses of guar gum (Guarem), 5 g, and fiber-enriched wheat bran (Fiberform), 10.5 g, on gastric emptying was studied by two different methods in healthy subjects: by a simple isotope localization monitor placed over the upper part of the abdomen and by gamma camera. The fiber preparations were added to a semisolid meal consisting of wheatmeal porridge and juice, using technetium-99m DTPA as a marker. The gamma camera showed no effect of fiber on gastric emptying. The isotope localization monitor, however, indicated that Fiberform prevented a postprandial accumulation of the meal within the upper part of the stomach. The simple isotope localization monitor cannot be recommended for measurements of gastric emptying.
Internal corrosion monitoring of subsea oil and gas production equipment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joosten, M.W.; Fischer, K.P.; Strommen, R.
1995-04-01
Nonintrusive techniques will dominate subsea corrosion monitoring compared with the intrusive methods because such methods do not interfere with pipeline operations. The long-term reliability of the nonintrusive techniques in general is considered to be much better than that of intrusive-type probes. The nonintrusive techniques based on radioactive tracers (TLA, NA) and FSM and UT are expected to be the main types of subsea corrosion monitoring equipment in the coming years. Available techniques that could be developed specifically for subsea applications are: electrochemical noise, corrosion potentials (using new types of reference electrodes), multiprobe system for electrochemical measurements, and video camera inspection (mini-video camera with light source). The following innovative techniques have potential but need further development: ion selective electrodes, radioactive tracers, and Raman spectroscopy.
Small Orbital Stereo Tracking Camera Technology Development
NASA Technical Reports Server (NTRS)
Bryan, Tom; Macleod, Todd; Gagliano, Larry
2015-01-01
On-orbit small debris tracking and characterization is a technical gap in current National Space Situational Awareness capabilities needed to safeguard orbital assets and crew; untracked small debris poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, knowledge of the physical threat to vehicle and crew is needed in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population must be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations imposed on a secondary payload hosted on another vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of a pair of intensified megapixel telephoto cameras flown to evaluate orbital debris (OD) monitoring in proximity to the International Space Station, demonstrating on-orbit (in situ) optical tracking of various sized objects against ground RADAR tracking and small-OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras, and using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM direction), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.
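The ranging benefit of the twin cameras follows from simple stereo triangulation, which can be sketched as below; the focal length, baseline, and disparity values are illustrative, not the flight cameras' parameters.

```python
import numpy as np

def stereo_range(focal_px, baseline_m, disparity_px):
    """Triangulated range from a twin-camera pair: z = f * B / d, with
    f the focal length in pixels, B the stereo baseline in meters, and
    d the disparity (pixel offset of the object between the two images)."""
    return focal_px * baseline_m / np.asarray(disparity_px, dtype=float)
```

Since range scales inversely with disparity, distant debris produces very small disparities, so useful ranging at long distances requires sub-pixel disparity estimation and a well-calibrated baseline.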
Development of an Ultra-Violet Digital Camera for Volcanic Sulfur Dioxide Imaging
NASA Astrophysics Data System (ADS)
Bluth, G. J.; Shannon, J. M.; Watson, I. M.; Prata, F. J.; Realmuto, V. J.
2006-12-01
In an effort to improve monitoring of passive volcano degassing, we have constructed and tested a digital camera for quantifying the sulfur dioxide (SO2) content of volcanic plumes. The camera utilizes a bandpass filter to collect photons in the ultra-violet (UV) region where SO2 selectively absorbs UV light. SO2 is quantified by imaging calibration cells of known SO2 concentrations. Images of volcanic SO2 plumes were collected at four active volcanoes with persistent passive degassing: Villarrica, located in Chile, and Santiaguito, Fuego, and Pacaya, located in Guatemala. Images were collected from distances ranging between 4 and 28 km away, with crisp detection up to approximately 16 km. Camera set-up time in the field is 5-10 minutes, and images can be recorded at intervals as short as 10 seconds. Variable in-plume concentrations can be observed, and accurate plume speeds (or rise rates) can readily be determined by tracing individual portions of the plume within sequential images. Initial fluxes computed from camera images require a correction for the effects of environmental light scattered into the field of view. At Fuego volcano, simultaneous measurements of corrected SO2 fluxes with the camera and a Correlation Spectrometer (COSPEC) agreed within 25 percent. Experiments at the other sites were equally encouraging, and demonstrated the camera's ability to detect SO2 under demanding meteorological conditions. This early work has shown great success in imaging SO2 plumes and offers promise for volcano monitoring due to its rapid deployment and data processing capabilities, relatively low cost, and improved interpretation afforded by synoptic plume coverage from a range of distances.
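The calibration-cell quantification and flux computation described above can be sketched via Beer-Lambert absorbance. The function names, the linear calibration, the mid-image transect, and the unit handling are illustrative assumptions, not the authors' processing code; the scattered-light correction they describe is omitted.

```python
import numpy as np

def calibrate(cell_absorbances, cell_columns):
    """Least-squares line mapping UV absorbance to SO2 column amount,
    anchored by calibration cells of known concentration."""
    slope, intercept = np.polyfit(cell_absorbances, cell_columns, 1)
    return slope, intercept

def so2_flux(plume_img, sky_img, slope, intercept, pixel_m, speed_ms):
    """Integrate the SO2 column along a mid-image vertical transect and
    multiply by pixel size and plume speed to obtain a flux (in the
    calibration's column-amount units per second)."""
    absorbance = -np.log(plume_img / sky_img)   # Beer-Lambert absorbance
    column = slope * absorbance + intercept     # per-pixel SO2 column
    transect = column[:, column.shape[1] // 2]  # transect across the plume
    return transect.sum() * pixel_m * speed_ms
```

The plume speed entering the flux would come from tracing plume features between sequential images, as the abstract notes.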
Small Orbital Stereo Tracking Camera Technology Development
NASA Technical Reports Server (NTRS)
Bryan, Tom; MacLeod, Todd; Gagliano, Larry
2016-01-01
On-orbit small debris tracking and characterization is a technical gap in current National Space Situational Awareness capabilities needed to safeguard orbital assets and crew; untracked small debris poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, knowledge of the physical threat to vehicle and crew is needed in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population must be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations imposed on a secondary payload hosted on another vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of a pair of intensified megapixel telephoto cameras flown to evaluate orbital debris (OD) monitoring in proximity to the International Space Station, demonstrating on-orbit (in situ) optical tracking of various sized objects against ground RADAR tracking and small-OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras, and using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM direction), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.
NASA Astrophysics Data System (ADS)
Ahn, Y.; Box, J. E.; Balog, J.; Lewinter, A.
2008-12-01
Monitoring Greenland outlet glaciers using remotely sensed data has drawn great attention in the earth science community for decades, and time-series analysis of sensor data has provided important information on glacier flow variability by detecting speed and thickness changes, tracking features, and supplying model input. Thanks to advances in commercial digital camera technology and increased solid-state storage, we operated automatic ground-based time-lapse camera stations with high spatial/temporal resolution at west Greenland outlet glaciers, collecting one-hour-interval data continuously for more than one year at some, but not all, sites. We believe that important information on ice dynamics is contained in these data and that terrestrial mono- and stereo-photogrammetry, together with digital image processing techniques, can provide the theoretical and practical foundations for processing them. Time-lapse images over these periods in west Greenland reveal various phenomena. Problematic are rain, snow, fog, shadows, freezing of water on the camera enclosure window, image over-exposure, camera motion, sensor platform drift, foxes chewing instrument cables, and ravens pecking the plastic window. Other problems include feature identification, camera orientation, image registration, feature matching in image pairs, and feature tracking. Another obstacle is that non-metric digital cameras contain large distortions that must be compensated for precise photogrammetric use. Further, a massive number of images must be processed in a sufficiently computationally efficient way. We meet these challenges by 1) identifying problems in the possible photogrammetric processes, 2) categorizing them based on feasibility, and 3) clarifying limitations and alternatives, while emphasizing displacement computation and analyzing regional/temporal variability. We experiment with mono- and stereo-photogrammetric techniques with the aid of automatic correlation matching to efficiently handle the enormous data volumes.
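The automatic correlation matching used for displacement computation can be illustrated with a brute-force normalized cross-correlation tracker; this integer-pixel sketch ignores the distortion correction, sub-pixel refinement, and registration steps a real photogrammetric pipeline needs, and the window sizes are arbitrary.

```python
import numpy as np

def track_feature(img0, img1, top, left, size, search):
    """Locate the integer-pixel displacement of a (size x size) template
    taken from img0 within a +/- search window in img1, by brute-force
    normalized cross-correlation. The template position must keep the
    whole search window inside the image."""
    tpl = img0[top:top+size, left:left+size].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-12)
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = img1[top+dy:top+dy+size, left+dx:left+dx+size].astype(float)
            if win.shape != tpl.shape:
                continue
            win = (win - win.mean()) / (win.std() + 1e-12)
            score = (tpl * win).mean()      # NCC score in [-1, 1]
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx
```

Converting the resulting pixel displacements into metric glacier velocities additionally requires camera orientation and scale, which is where the mono-/stereo-photogrammetric modeling mentioned above comes in.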
Major, J.J.; Dzurisin, D.; Schilling, S.P.; Poland, Michael P.
2009-01-01
We present an analysis of lava dome growth during the 2004–2008 eruption of Mount St. Helens using oblique terrestrial images from a network of remotely placed cameras. This underutilized monitoring tool augmented more traditional monitoring techniques, and was used to provide a robust assessment of the nature, pace, and state of the eruption and to quantify the kinematics of dome growth. Eruption monitoring using terrestrial photography began with a single camera deployed at the mouth of the volcano's crater during the first year of activity. Analysis of those images indicates that the average lineal extrusion rate decayed approximately logarithmically from about 8 m/d to about 2 m/d (± 2 m/d) from November 2004 through December 2005, and suggests that the extrusion rate fluctuated on time scales of days to weeks. From May 2006 through September 2007, imagery from multiple cameras deployed around the volcano allowed determination of 3-dimensional motion across the dome complex. Analysis of the multi-camera imagery shows spatially differential, but remarkably steady to gradually slowing, motion, from about 1–2 m/d from May through October 2006, to about 0.2–1.0 m/d from May through September 2007. In contrast to the fluctuations in lineal extrusion rate documented during the first year of eruption, dome motion from May 2006 through September 2007 was monotonic (± 0.10 m/d) to gradually slowing on time scales of weeks to months. The ability to measure spatial and temporal rates of motion of the effusing lava dome from oblique terrestrial photographs provided a significant, and sometimes the sole, means of identifying and quantifying dome growth during the eruption, and it demonstrates the utility of using frequent, long-term terrestrial photography to monitor and study volcanic eruptions.
Senthil Kumar, S; Suresh Babu, S S; Anand, P; Dheva Shantha Kumari, G
2012-06-01
The purpose of our study was to fabricate an in-house web-camera-based automatic continuous patient-movement monitoring device and to control the movement of patients during EXRT. The device consists of a computer, a digital web camera, a mounting system, a breaker circuit, a speaker, and a visual indicator. The computer controls and analyzes patient movement using indigenously developed software. The speaker and the visual indicator are placed in the console room to indicate positional displacement of the patient. Studies were conducted on a phantom and on 150 patients with different types of cancer. Our preliminary clinical results indicate that the device is highly reliable and can accurately report small movements of the patients in all directions; it was able to detect patient movements with a sensitivity of about 1 mm. When a patient moves, the receiver activates the circuit and an audible warning sound is produced in the console. Through real-time measurements, the audible alarm can alert the radiation technologist to stop the treatment if the user-defined positional threshold is violated; simultaneously, the electrical circuit to the teletherapy machine is activated and radiation is halted. Patient movement during the course of radiotherapy was studied. The beam is halted automatically when the threshold level of the system is exceeded, so the patient can be monitored continuously within fixed limits. An additional benefit is reduced tension and stress for the treatment team when treating patients who are not immobilized, and the technologists can work more efficiently because they do not have to monitor patients continuously with as much scrutiny as was previously required. © 2012 American Association of Physicists in Medicine.
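A threshold-based movement check of the kind this device performs can be sketched with simple frame differencing. The scheme, parameter names, and values below are assumptions for illustration, since the in-house software is not described at code level, and the thresholds are not the ~1 mm clinical calibration reported.

```python
import numpy as np

def movement_exceeds(prev, curr, pixel_thresh=25, frac_thresh=0.01):
    """Flag patient movement when the fraction of webcam pixels whose
    intensity changed by more than pixel_thresh exceeds frac_thresh.
    In the described device, a positive result would sound the console
    alarm and trip the circuit that halts the teletherapy beam."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return (diff > pixel_thresh).mean() > frac_thresh
```

In practice the pixel-change thresholds would be calibrated against known physical displacements to reach the reported ~1 mm sensitivity.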
NASA Astrophysics Data System (ADS)
Buongiorno, M. F.; Musacchio, M.; Silvestri, M.; Vilardo, G.; Sansivero, F.; Caputo, T.; Bellucci Sessa, E.; Pieri, D. C.
2017-12-01
Current satellite missions providing imagery in the TIR region at high spatial resolution offer the possibility of estimating surface temperature in volcanic areas, contributing to understanding of ongoing phenomena and to mitigation of volcanic risk where populations are exposed. The Campi Flegrei volcanic area (Italy) is part of the Neapolitan volcanic district and is monitored by INGV ground networks, including thermal cameras. TIRS on LANDSAT 8 and ASTER on NASA's Terra provide thermal IR channels to monitor the evolution of surface temperatures in the Campi Flegrei area. The spatial resolution of the TIR data is 100 m for LANDSAT 8 and 90 m for ASTER; temporal resolution is 16 days for both satellites. The TIRNet network has been developed by INGV for long-term volcanic surveillance of Campi Flegrei through the acquisition of thermal infrared images. The system currently comprises 5 permanent stations equipped with FLIR A645SC thermal cameras with a 640x480-pixel IR sensor. To improve the systematic use of satellite data in the monitoring procedures of volcano observatories, a suitable integration and validation strategy is needed, also considering that current satellite missions do not provide TIR data with optimal characteristics for observing the small thermal anomalies that may indicate changes in volcanic activity. The presented procedure has been applied to the analysis of the Solfatara Crater and is based on 2 different steps: 1) parallel processing chains to produce ground temperature data both from satellite and from ground cameras; 2) data integration and comparison. The ground camera images generally correspond to views of portions of the crater slopes characterized by significant thermal anomalies due to fumarole fields. In order to compare the satellite and ground cameras, it has been necessary to take into account the observation geometries.
All thermal images of the TIRNet have been georeferenced to the UTM WGS84 system, and a regular grid of 30x30 meters has been created to select polygonal areas corresponding only to the cells containing the georeferenced TIR images acquired by the different TIRNet stations. Preliminary results of this integration approach have been analyzed in order to produce systematic reports to the Italian Civil Protection for the Neapolitan volcanoes.
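The 30 m grid-cell aggregation used to compare the georeferenced TIRNet pixels with satellite TIR pixels can be sketched as follows; the function, grid origin, and inputs are illustrative assumptions, not the observatory's actual procedure.

```python
import numpy as np

def grid_cell_means(easting, northing, temps, origin_e, origin_n, cell=30.0):
    """Average georeferenced ground-camera pixel temperatures onto a
    regular UTM grid (30 m cells, comparable to the satellite TIR
    sampling), so each occupied cell can be compared with the
    co-located satellite pixel."""
    ci = ((np.asarray(easting) - origin_e) // cell).astype(int)
    cj = ((np.asarray(northing) - origin_n) // cell).astype(int)
    cells = {}
    for i, j, t in zip(ci, cj, np.asarray(temps, dtype=float)):
        cells.setdefault((int(i), int(j)), []).append(t)
    return {k: float(np.mean(v)) for k, v in cells.items()}
```

Only cells actually covered by a camera's footprint appear in the result, mirroring the selection of polygonal areas described above.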
Hames, T K; Condon, B R; Fleming, J S; Phillips, G; Holdstock, G; Smith, C L; Howlett, P J; Ackery, D
1984-07-01
We have compared the 7-day retention of the radioisotope bile salt analogue SeHCAT (75Se-23-selena-25-homotaurocholate), by whole body counting and by uncollimated gamma camera measurement, in phantoms and in 25 patients with inflammatory bowel disease. The results correlate with a linear correlation coefficient of 0.96. An uncollimated gamma camera can be used to assess bile acid malabsorption when a whole body radioactivity monitor is not available.
NGEE Arctic Zero Power Warming PhenoCamera Images, Barrow, Alaska, 2016
Shawn Serbin; Andrew McMahon; Keith Lewin; Kim Ely; Alistair Rogers
2016-11-14
StarDot NetCam SC pheno camera images collected from the top of the Barrow BEO Sled Shed. The camera was installed to monitor the BNL TEST group's prototype ZPW (Zero Power Warming) chambers during the growing season of 2016 (including early spring and late fall). Images were uploaded to the BNL FTP server every 10 minutes and renamed with the date and time of the image. See associated data "Zero Power Warming (ZPW) Chamber Prototype Measurements, Barrow, Alaska, 2016" http://dx.doi.org/10.5440/1343066.
Rugged Video System For Inspecting Animal Burrows
NASA Technical Reports Server (NTRS)
Triandafils, Dick; Maples, Art; Breininger, Dave
1992-01-01
Video system designed for examining interiors of burrows of gopher tortoises, 5 in. (13 cm) in diameter or greater, to depth of 18 ft. (about 5.5 m), includes video camera, video cassette recorder (VCR), television monitor, control unit, and power supply, all carried in backpack. Polyvinyl chloride (PVC) poles used to maneuver camera into (and out of) burrows, stiff enough to push camera into burrow, but flexible enough to bend around curves. Adult tortoises and other burrow inhabitants observable, young tortoises and such small animals as mice obscured by sand or debris.
Microprocessor-controlled wide-range streak camera
NASA Astrophysics Data System (ADS)
Lewis, Amy E.; Hollabaugh, Craig
2006-08-01
Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.
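XML-based configuration management of the kind described might be exercised as below. The element and attribute names are invented for illustration, since the abstract does not publish the camera's actual schema; only the use of XML for parameter storage is taken from the source.

```python
import xml.etree.ElementTree as ET

# Hypothetical configuration document with an invented schema.
CONFIG = """
<camera model="gen5-streak">
  <sweep full_time_ns="15"/>
  <trigger lockout="true" level_mv="250"/>
</camera>
"""

root = ET.fromstring(CONFIG)
sweep_ns = float(root.find("sweep").get("full_time_ns"))
lockout = root.find("trigger").get("lockout") == "true"
```

An automation client would fetch and push such a document over the camera's HTTP interface rather than embedding it as a literal.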
VizieR Online Data Catalog: XMM-Newton and Chandra monitoring of Sgr A* (Ponti+, 2015)
NASA Astrophysics Data System (ADS)
Ponti, G.; de, Marco B.; Morris, M. R.; Merloni, A.; Munoz-Darias, T.; Clavel, M.; Haggard, D.; Zhang, S.; Nandra, K.; Gillessen, S.; Mori, K.; Neilsen, J.; Rea, N.; Degenaar, N.; Terrier, R.; Goldwurm, A.
2018-01-01
As of 2014 November 11 the XMM-Newton archive contains 37 public observations that can be used for our analysis of Sgr A*. In addition, we consider four new observations aimed at monitoring the interaction between the G2 object and Sgr A*, performed in fall 2014 (see Table A4). A total of 41 XMM-Newton data sets are considered in this work. All 46 Chandra observations accumulated between 1999 and 2011 and analysed here were obtained with the ACIS-I camera without any gratings (see Table A1). From 2012 onwards, data from the ACIS-S camera were also employed. The 2012 Chandra "X-ray Visionary Project" (XVP) is composed of 38 High-Energy Transmission Grating (HETG) observations with the ACIS-S camera at the focus (Nowak et al. 2012ApJ...759...95N; Neilsen et al. 2013ApJ...774...42N; 2015ApJ...799..199N; Wang et al. 2013Sci...341..981W; see Table A2). The first two observations of the 2013 monitoring campaign were performed with the ACIS-I instrument, while the ACIS-S camera was employed in all the remaining observations, after the outburst of SGR J1745-2900 on 2013 April 25. Three observations between 2013 May and July were performed with the HETG on, while all the remaining ones do not employ any gratings (see Table A2). (4 data files).
A 3D photographic capsule endoscope system with full field of view
NASA Astrophysics Data System (ADS)
Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng
2013-09-01
Current capsule endoscopes use one camera to capture images of the intestinal surface. A single camera can locate an abnormal point but cannot provide complete information about it. Using two cameras can generate 3D images, but the visual plane changes as the capsule endoscope rotates, so the two cameras cannot capture the image information completely. To solve this problem, this research presents a new kind of capsule endoscope for capturing 3D images: 'A 3D photographic capsule endoscope system'. The system uses three cameras to capture images in real time. The advantage is a viewing range up to 2.99 times that of the two-camera system. Combined with a 3D monitor, the system provides exact information about symptomatic points, helping doctors diagnose disease.
The Juno Radiation Monitoring (RM) Investigation
NASA Astrophysics Data System (ADS)
Becker, H. N.; Alexander, J. W.; Adriani, A.; Mura, A.; Cicchetti, A.; Noschese, R.; Jørgensen, J. L.; Denver, T.; Sushkova, J.; Jørgensen, A.; Benn, M.; Connerney, J. E. P.; Bolton, S. J.; Allison, J.; Watts, S.; Adumitroaie, V.; Manor-Chapman, E. A.; Daubar, I. J.; Lee, C.; Kang, S.; McAlpine, W. J.; Di Iorio, T.; Pasqui, C.; Barbis, A.; Lawton, P.; Spalsbury, L.; Loftin, S.; Sun, J.
2017-11-01
The Radiation Monitoring Investigation of the Juno Mission will actively retrieve and analyze the noise signatures from penetrating radiation in the images of Juno's star cameras and science instruments at Jupiter. The investigation's objective is to profile Jupiter's >10-MeV electron environment in regions of the Jovian magnetosphere which today are still largely unexplored. This paper discusses the primary instruments on Juno which contribute to the investigation's data suite, the measurements of camera noise from penetrating particles, spectral sensitivities and measurement ranges of the instruments, calibrations performed prior to Juno's first science orbit, and how the measurements may be used to infer the external relativistic electron environment.
Light-pollution measurement with the Wide-field all-sky image analyzing monitoring system
NASA Astrophysics Data System (ADS)
Vítek, S.
2017-07-01
The purpose of this experiment was to measure light pollution in Prague, the capital of the Czech Republic. A calibrated consumer-level digital single-lens reflex camera with an IR cut filter was used as the measuring instrument; the paper therefore reports results of measuring and monitoring light pollution in the wavelength range of 390-700 nm, which most affects visual-range astronomy. Combining frames of different exposure times made with the digital camera coupled with a fish-eye lens allows the creation of high-dynamic-range images containing meaningful values, so such a system can provide absolute values of the sky brightness.
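The HDR step described above, combining frames of different exposure times into one radiance estimate, can be sketched in NumPy. This assumes a linearised sensor response and uses a Debevec-style hat weighting; the actual calibrated pipeline is not published in the abstract:

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Estimate scene radiance from differently exposed frames.

    frames: list of float arrays in [0, 1] (linearised sensor values);
    exposure_times: matching exposure times in seconds.  Each frame votes
    for radiance I/t, weighted by a hat function that discounts pixels
    near the clipping limits (a Debevec-style weighting).
    """
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for img, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight, 0 at clip points
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-12)

# Synthetic check: a radiance ramp photographed at two exposures.
radiance = np.linspace(0.2, 0.8, 50)
frames = [np.clip(radiance * t, 0.0, 1.0) for t in (0.5, 1.0)]
recovered = merge_exposures(frames, [0.5, 1.0])
```

With no clipped pixels, every exposure votes for the same radiance and the weighted average recovers it exactly; the weighting matters only where one of the frames saturates or bottoms out.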
Tsiourlis, Georgios; Andreadakis, Stamatis; Konstantinidis, Pavlos
2009-01-01
The SITHON system, a fully wireless optical imaging system integrating a network of in-situ optical cameras linked to a multi-layer GIS database operated by Control Operating Centres, has been developed in response to the need for early detection, notification and monitoring of forest fires. This article presents in detail the architecture and components of SITHON, and demonstrates the first encouraging results of an experimental test with small controlled fires over the Sithonia Peninsula in Northern Greece. The system has already been scheduled for installation in several fire-prone areas of Greece. PMID:22408536
Proposed patient motion monitoring system using feature point tracking with a web camera.
Miura, Hideharu; Ozawa, Shuichi; Matsuura, Takaaki; Yamada, Kiyoshi; Nagata, Yasushi
2017-12-01
Patient motion monitoring systems play an important role in providing accurate treatment dose delivery. We propose a system that utilizes a web camera (frame rate up to 30 fps, maximum resolution of 640 × 480 pixels) and in-house image processing software (developed using Microsoft Visual C++ and OpenCV). This system is simple to use and convenient to set up. The pyramidal Lucas-Kanade method was applied to calculate the motion of each feature point by analysing two consecutive frames. The image processing software employs a color scheme in which the defined feature points are blue under stable (no movement) conditions and turn red, along with a warning message and an audio signal (beeping alarm), for large patient movements. The initial position of the marker was used by the program to determine the marker positions in all the frames. The software generates a text file containing the calculated motion for each frame and saves the video as a compressed audio video interleave (AVI) file. The proposed patient motion monitoring system, which uses a web camera and is simple and convenient to set up, increases the safety of treatment delivery.
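The pyramidal Lucas-Kanade tracking used here is available in OpenCV as `cv2.calcOpticalFlowPyrLK`; the core least-squares step for a single feature window can be sketched in plain NumPy as follows (single level only, no pyramid, so valid for small displacements):

```python
import numpy as np

def lucas_kanade_point(I1, I2, x, y, win=7):
    """Estimate (u, v) motion of the point (x, y) between frames I1 -> I2.

    Single-level Lucas-Kanade on one window: brightness constancy gives
    Ix*u + Iy*v + It = 0 per pixel, solved in the least-squares sense
    over the window.
    """
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    Ix = np.gradient(I1, axis=1)[sl].ravel()
    Iy = np.gradient(I1, axis=0)[sl].ravel()
    It = (I2 - I1)[sl].ravel()
    A = np.stack([Ix, Iy], axis=1)
    # Solve A [u v]^T = -It in the least-squares sense.
    (u, v), *_ = np.linalg.lstsq(A, -It, rcond=None)
    return u, v

# Synthetic check: a Gaussian blob shifted 0.3 px to the right.
yy, xx = np.mgrid[0:64, 0:64].astype(float)

def blob(cx):
    return np.exp(-((xx - cx) ** 2 + (yy - 32.0) ** 2) / (2 * 5.0 ** 2))

u, v = lucas_kanade_point(blob(30.0), blob(30.3), x=30, y=32)
```

The recovered (u, v) is close to (0.3, 0). The pyramidal variant in OpenCV repeats this step coarse-to-fine so that larger patient movements remain within the linearisation range.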
Grams, Paul E.; Tusso, Robert B.; Buscombe, Daniel
2018-02-27
Automated camera systems deployed at 43 remote locations along the Colorado River corridor in Grand Canyon National Park, Arizona, are used to document sandbar erosion and deposition associated with the operations of Glen Canyon Dam. The camera systems, which can operate independently for a year or more, consist of a digital camera triggered by a separate data controller, both of which are powered by an external battery and solar panel. Analysis of images for categorical changes in sandbar size shows deposition at 50 percent or more of monitoring sites during controlled flood releases conducted in 2012, 2013, 2014, and 2016. The images also depict erosion of sandbars and show that erosion rates were highest in the first 3 months following each controlled flood. Erosion rates were highest in 2015, the year of highest annual dam release volume. The categorical estimates of sandbar change agree with the change (erosion or deposition) measured by topographic surveys in 76 percent of the cases evaluated. A semiautomated method is presented for quantifying changes in sandbar area from the remote-camera images by rectifying the oblique images and segmenting the sandbar from the rest of the image. Sandbar area calculated by this method agrees with area determined by topographic survey to within approximately 8 percent and allows quantification of sandbar area monthly (or more frequently).
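The rectify-and-segment workflow implies mapping the segmented sandbar outline through a homography and measuring its area on the ground plane. A sketch under the assumption that the homography has already been solved from surveyed control points (the report does not publish one; the 2x scaling matrix below is purely illustrative):

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography to an array of (x, y) image points."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # de-homogenise

def polygon_area(pts):
    """Shoelace area of a polygon given as ordered (x, y) vertices."""
    x, y = np.asarray(pts, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Illustrative homography: a 2x scale doubles lengths, quadrupling area.
H = np.diag([2.0, 2.0, 1.0])
outline = [(0, 0), (10, 0), (10, 5), (0, 5)]  # sandbar outline in pixels
area = polygon_area(warp_points(H, outline))
```

In practice the homography would be fit from image coordinates of surveyed targets visible in each camera view, and the area would come out in ground units rather than pixels.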
Integrated multi sensors and camera video sequence application for performance monitoring in archery
NASA Astrophysics Data System (ADS)
Taha, Zahari; Arif Mat-Jizat, Jessnor; Amirul Abdullah, Muhammad; Muazu Musa, Rabiu; Razali Abdullah, Mohamad; Fauzi Ibrahim, Mohamad; Hanafiah Shaharudin, Mohd Ali
2018-03-01
This paper describes the development of comprehensive archery performance monitoring software consisting of three camera views and five body sensors. The five body sensors evaluate biomechanics-related variables: flexor and extensor muscle activity, heart rate, postural sway and bow movement during archery performance. The three camera views and the five body sensors are integrated into a single computer application, which enables the user to view all the data in a single user interface. The body sensors' data are displayed in numerical and graphical form in real time. The information transmitted by the body sensors is processed by an embedded algorithm that automatically computes a summary of the athlete's biomechanical performance and displays it in the application interface. This performance is then compared with the psycho-fitness performance pre-computed from data previously entered into the application. All the data (camera views, body sensors and performance computations) are recorded for further analysis by a sports scientist. The developed application serves as a powerful tool to assist coaches and athletes in observing and identifying any incorrect technique employed during training, giving room for correction and re-evaluation to improve overall performance in the sport of archery.
Using Wide-Field Meteor Cameras to Actively Engage Students in Science
NASA Astrophysics Data System (ADS)
Kuehn, D. M.; Scales, J. N.
2012-08-01
Astronomy has always afforded teachers an excellent topic to develop students' interest in science. New technology allows the opportunity to inexpensively outfit local school districts with sensitive, wide-field video cameras that can detect and track brighter meteors and other objects. While the data-collection and analysis process can be mostly automated by software, there is substantial human involvement that is necessary in the rejection of spurious detections, in performing dynamics and orbital calculations, and the rare recovery and analysis of fallen meteorites. The continuous monitoring allowed by dedicated wide-field surveillance cameras can provide students with a better understanding of the behavior of the night sky including meteors and meteor showers, stellar motion, the motion of the Sun, Moon, and planets, phases of the Moon, meteorological phenomena, etc. Additionally, some students intrigued by the possibility of UFOs and "alien visitors" may find that actual monitoring data can help them develop methods for identifying "unknown" objects. We currently have two ultra-low light-level surveillance cameras coupled to fish-eye lenses that are actively obtaining data. We have developed curricula suitable for middle or high school students in astronomy and earth science courses and are in the process of testing and revising our materials.
Falling-incident detection and throughput enhancement in a multi-camera video-surveillance system.
Shieh, Wann-Yun; Huang, Ju-Chin
2012-09-01
For most elderly people, unpredictable falling incidents may occur at the corner of stairs or in a long corridor due to body frailty. If rescue of a fallen elder who may have fainted is delayed, more serious injury may follow. Traditional security or video surveillance systems need caregivers to monitor a centralized screen continuously, or need an elder to wear sensors to detect falling incidents, which wastes considerable human effort or causes inconvenience for elders. In this paper, we propose an automatic falling-detection algorithm and implement it in a multi-camera video surveillance system. The algorithm uses each camera to fetch images from the regions to be monitored, then applies a falling-pattern recognition algorithm to determine whether a falling incident has occurred; if so, the system sends short messages to the designated contacts. The algorithm has been implemented on a DSP-based hardware acceleration board as a proof of functionality. Simulation results show that the accuracy of falling detection reaches at least 90% and that the throughput of a four-camera surveillance system can be improved by about 2.1 times. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
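The falling-pattern recognition algorithm itself is not specified in the abstract. A common proxy in the literature is the aspect ratio of the tracked person's bounding box, which flips from tall to wide during a fall; a hedged sketch of that idea (thresholds are illustrative, not the paper's):

```python
def detect_fall(boxes, ratio_thresh=1.2, frames_confirm=3):
    """Flag a fall from a sequence of person bounding boxes (w, h).

    A standing person's box is taller than wide, so a width/height ratio
    that stays above `ratio_thresh` for `frames_confirm` consecutive
    frames is treated as a fall.  Returns the confirming frame index,
    or None if no fall is detected.
    """
    streak = 0
    for i, (w, h) in enumerate(boxes):
        streak = streak + 1 if w / h > ratio_thresh else 0
        if streak >= frames_confirm:
            return i
    return None

# Simulated box sequences: standing throughout vs falling at frame 5.
standing = [(40, 100)] * 10
falling = [(40, 100)] * 5 + [(90, 45)] * 5
```

Requiring several consecutive wide-box frames suppresses single-frame segmentation glitches, at the cost of a short confirmation delay before the alert message is sent.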
Situational Awareness from a Low-Cost Camera System
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.; Ward, David; Lesage, John
2010-01-01
A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. By using many small, low-cost cameras with overlapping fields of view, this approach offers greater flexibility than conventional systems without compromising performance. It significantly increases viewing coverage and avoids the unmonitored areas that occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
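Correlating detections across cameras with overlapping fields of view to report Cartesian event coordinates reduces, in the simplest two-camera ground-plane case, to intersecting two bearing rays. A sketch of that geometry (the deployed data-fusion method is not described in detail):

```python
import math

def triangulate(cam1, bearing1, cam2, bearing2):
    """Intersect two bearing rays on the ground plane.

    cam*: (x, y) camera positions; bearing*: ray angles in radians
    (measured from the +x axis).  Each camera reports only a direction
    to the detected event; the crossing point gives its Cartesian
    position.
    """
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # Solve cam1 + t1*d1 = cam2 + t2*d2 for t1 (Cramer's rule on 2x2).
    denom = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    bx, by = cam2[0] - cam1[0], cam2[1] - cam1[1]
    t1 = (bx * (-d2[1]) - (-d2[0]) * by) / denom
    return cam1[0] + t1 * d1[0], cam1[1] + t1 * d1[1]

# Event at (4, 3) seen from cameras at the origin and at (10, 0).
b1 = math.atan2(3, 4)
b2 = math.atan2(3, 4 - 10)
x, y = triangulate((0, 0), b1, (10, 0), b2)
```

With 100+ cameras the same idea generalises to a least-squares intersection of many rays, which also averages out per-camera bearing noise.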
Development of on-line laser power monitoring system
NASA Astrophysics Data System (ADS)
Ding, Chien-Fang; Lee, Meng-Shiou; Li, Kuan-Ming
2016-03-01
Since its invention, the laser has been applied in many fields, such as material processing, communication, measurement, biomedical engineering and defense. Laser power is an important parameter in laser material processing, e.g. laser cutting and laser drilling. However, because laser power is easily affected by the ambient temperature, it must be monitored to ensure effective material processing. In addition, the response time of current laser power meters is too long for them to measure laser power accurately over short intervals. Knowing the laser power status in real time helps achieve effective material processing. To monitor laser power, this study uses a CMOS (complementary metal-oxide-semiconductor) camera to develop an on-line laser power monitoring system. The CMOS camera captures images of the incident laser beam after it is split and attenuated by a beam splitter and a neutral density filter. By comparing the average brightness of the beam spots with measurement results from a laser power meter, laser power can be estimated. In continuous measuring mode, the average measuring error is about 3%, and the response time is at least 3.6 seconds shorter than that of thermopile power meters; in trigger measuring mode, which synchronizes the CMOS camera with intermittent laser output, the average measuring error is less than 3%, and the shortest response time is 20 milliseconds.
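The brightness-to-power comparison described above amounts to a calibration curve against the reference power meter. Assuming the relation is approximately linear over the attenuated range (the abstract does not state its model), a least-squares sketch:

```python
import numpy as np

def fit_brightness_to_power(brightness, power_meter):
    """Fit a linear map from mean beam-spot brightness to laser power.

    brightness: average brightness of the attenuated beam-spot images;
    power_meter: simultaneous readings from a reference power meter.
    Returns a callable that converts brightness to estimated power.
    """
    slope, intercept = np.polyfit(brightness, power_meter, 1)
    return lambda b: slope * np.asarray(b) + intercept

# Synthetic calibration run: power (W) linear in brightness.
bright = np.array([10.0, 20.0, 30.0, 40.0])
power = 0.5 * bright + 2.0
estimate = fit_brightness_to_power(bright, power)
```

Once calibrated, each camera frame yields a power estimate at the frame rate, which is what allows the short trigger-mode response times quoted above.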
Baptiste Dafflon; Rusen Oktem; John Peterson; Craig Ulrich; Anh Phuong Tran; Vladimir Romanovsky; Susan Hubbard
2017-05-10
The dataset contains measurements obtained through electrical resistivity tomography (ERT) to monitor soil properties, pole-mounted optical cameras to monitor vegetation dynamics, point probes to measure soil temperature, and periodic manual measurements of thaw layer thickness, snow thickness and soil dielectric permittivity.
Frederick C. Hall
2000-01-01
Ground-based photo monitoring is repeat photography using ground-based cameras to document change in vegetation or soil. Assume those installing the photo location will not be the ones re-photographing it. This requires a protocol that includes: (1) a map to locate the monitoring area, (2) another map diagramming the photographic layout, (3) type and make of film such...
ERIC Educational Resources Information Center
Hayes, John; Pulliam, Robert
A video performance monitoring system was developed by the URS/Matrix Company, under contract to the USAF Human Resources Laboratory and was evaluated experimentally in three technical training settings. Using input from 1 to 8 video cameras, the system provided a flexible combination of signal processing, direct monitor, recording and replay…
Recent progress in the theoretical modelling of Cepheids and RR Lyrae stars
NASA Astrophysics Data System (ADS)
Marconi, Marcella
2017-09-01
Cepheids and RR Lyrae are among the most important primary distance indicators to calibrate the extragalactic distance ladder and excellent stellar population tracers, for Population I and Population II, respectively. In this paper I first mention some recent theoretical studies of Cepheids and RR Lyrae obtained with different theoretical tools. Then I focus the attention on new results based on nonlinear convective pulsation models in the context of some international projects, including VMC@VISTA and the Gaia collaboration. The open problems for both Cepheids and RR Lyrae are briefly discussed together with some challenging future application.
Security warning system monitors up to fifteen remote areas simultaneously
NASA Technical Reports Server (NTRS)
Fusco, R. C.
1966-01-01
Security warning system consisting of 15 television cameras is capable of monitoring several remote or unoccupied areas simultaneously. The system uses a commutator and decommutator, allowing time-multiplexed video transmission. This security system could be used in industrial and retail establishments.
Monitoring tigers with confidence.
Linkie, Matthew; Guillera-Arroita, Gurutzeta; Smith, Joseph; Rayan, D Mark
2010-12-01
With only 5% of the world's wild tigers (Panthera tigris Linnaeus, 1758) remaining since the last century, conservationists urgently need to know whether or not the management strategies currently being employed are effectively protecting these tigers. This knowledge is contingent on the ability to reliably monitor tiger populations, or subsets, over space and time. In this paper, we focus on the 2 seminal methodologies (camera trap and occupancy surveys) that have enabled the monitoring of tiger populations with greater confidence. Specifically, we: (i) describe their statistical theory and application in the field; (ii) discuss issues associated with their survey designs and state variable modeling; and (iii) discuss their future directions. These methods have had an unprecedented influence on increasing statistical rigor within tiger surveys and, also, surveys of other carnivore species. Nevertheless, only 2 published camera trap studies have gone beyond single baseline assessments and actually monitored population trends. For low-density tiger populations (e.g. <1 adult tiger/100 km(2)), obtaining sufficient precision for state variable estimates from camera trapping remains a challenge because of insufficient detection probabilities and/or sample sizes. Occupancy surveys have overcome this problem by redefining the sampling unit (e.g. grid cells and not individual tigers). Current research is focusing on developing spatially explicit capture-mark-recapture models and estimating abundance indices from landscape-scale occupancy surveys, as well as the use of genetic information for identifying and monitoring tigers. The widespread application of these monitoring methods in the field now enables complementary studies on the impact of the different threats to tiger populations and their response to varying management intervention. © 2010 ISZS, Blackwell Publishing and IOZ/CAS.
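The statistical core of camera-trap monitoring is mark-recapture estimation, with individual tigers "marked" by their stripe patterns. The spatially explicit models mentioned above are far richer, but the underlying logic can be illustrated with the closed-population Chapman estimator:

```python
def lincoln_petersen(marked, caught, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate.

    marked: individuals identified in the first camera-trap session;
    caught: individuals identified in the second session;
    recaptured: individuals photographed in both sessions.
    Assumes a closed population with equal catchability -- assumptions
    that real tiger surveys must relax.
    """
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

# Example: 20 tigers in session one, 15 in session two, 6 seen in both.
n_hat = lincoln_petersen(marked=20, caught=15, recaptured=6)
```

The low recapture rates typical of sparse tiger populations inflate the variance of this estimate, which is precisely the precision problem the abstract describes and the motivation for occupancy-based designs.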
Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera
NASA Astrophysics Data System (ADS)
Dziri, Aziz; Duranton, Marc; Chapuis, Roland
2016-07-01
Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.
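The occlusion-aware pipeline itself is not reproduced in the abstract, but the frame-to-frame data-association step at the heart of such trackers is typically an IoU-based matching of new detections to existing tracks, which can be sketched as:

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def update_tracks(tracks, detections, min_iou=0.3):
    """Greedy IoU association of detections with existing tracks.

    tracks: dict id -> last known box.  The published pipeline is more
    elaborate (it handles occlusions between objects); this sketch shows
    only the common frame-to-frame association step.
    """
    next_id = max(tracks, default=-1) + 1
    unmatched = list(detections)
    for tid, box in list(tracks.items()):
        if not unmatched:
            break
        best = max(unmatched, key=lambda d: iou(box, d))
        if iou(box, best) >= min_iou:
            tracks[tid] = best
            unmatched.remove(best)
    for det in unmatched:  # start a new track per unmatched detection
        tracks[next_id] = det
        next_id += 1
    return tracks

# Two frames of detections: both objects keep their identities.
tracks = update_tracks({}, [(0, 0, 10, 10), (50, 50, 60, 60)])
tracks = update_tracks(tracks, [(52, 51, 62, 61), (1, 0, 11, 10)])
```

Greedy matching on integer boxes is cheap enough for a Raspberry-Pi-class processor, which is why variants of it appear in embedded trackers; Hungarian assignment is the usual upgrade when tracks cross.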
Time-lapse cameras as an aid in studying grizzly bears in northwest Wyoming
Ball, Ronald E.
1980-01-01
Time-lapse cameras were effective for gathering limited distribution and population data on grizzly bears (Ursus arctos) and black bears (Ursus americanus) in northwest Wyoming. Thirty-six stations, each consisting of a camera and a lure, were monitored for 551 camera-days; 83 rolls of film were exposed. Five different lures were tested. Thirty-one bears (5 grizzly, 25 black, 1 unknown bear) were identified at 15 stations. Young:adult and young:female ratios observed (0.4 and 1.5 for black bears and 0.7 and 2.0 for grizzlies) corresponded well with those of other researchers in the region. One sighting recorded on film extended the known range of the grizzly bear in the Shoshone National Forest.
Design of the high-resolution soft X-ray imaging system on the Joint Texas Experimental Tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jianchao; Ding, Yonghua, E-mail: yhding@mail.hust.edu.cn; Zhang, Xiaoqing
2014-11-15
A new soft X-ray diagnostic system has been designed on the Joint Texas Experimental Tokamak (J-TEXT) to observe and survey magnetohydrodynamic (MHD) activities. The system consists of five cameras located at the same toroidal position. Each camera has 16 photodiode elements. Three imaging cameras view the internal plasma region (r/a < 0.7) with a spatial resolution of about 2 cm. By tomographic methods, heat transport outward from the 1/1 mode X-point during the sawtooth collapse is found. The other two cameras, with a higher spatial resolution of 1 cm, are designed for monitoring local MHD activities in the plasma core and boundary, respectively.
An autonomous sensor module based on a legacy CCTV camera
NASA Astrophysics Data System (ADS)
Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.
2016-10-01
A UK MoD funded programme into autonomous sensor arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. The paper reports upon the development of a SAPIENT-compliant sensor module using a legacy Closed-Circuit Television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with zoom level automatically optimized for human detection at the appropriate range. Open source algorithms (using OpenCV) are used to automatically detect pedestrians; their real-world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation a "follow" mode is implemented where the camera maintains the detected person within the camera field-of-view without requiring an end-user to directly control the camera with a joystick.
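Estimating a detected pedestrian's real-world position from a fixed camera view typically uses a flat-ground pinhole projection from the image row of the person's feet. A sketch with assumed calibration values (the module's actual calibration procedure is not described):

```python
import math

def ground_range(pixel_row, principal_row, focal_px, cam_height_m, tilt_rad):
    """Range to a pedestrian's feet from their image row.

    Flat-ground pinhole model: the camera sits cam_height_m above the
    ground, tilted down by tilt_rad.  Rows below the principal point add
    atan((row - principal_row) / focal_px) to the depression angle; the
    foot range is then height / tan(total angle).
    """
    depression = tilt_rad + math.atan2(pixel_row - principal_row, focal_px)
    return cam_height_m / math.tan(depression)

# Round-trip check with assumed calibration: camera 4 m up, 10 deg tilt,
# 800 px focal length; place feet 20 m away and recover the same range.
h, f, tilt = 4.0, 800.0, math.radians(10.0)
true_range = 20.0
row_offset = f * math.tan(math.atan2(h, true_range) - tilt)
r = ground_range(240 + row_offset, 240, f, h, tilt)
```

Combined with the camera's pan angle, the recovered range gives the Cartesian position reported to the central fusion system; the main error sources are the flat-ground assumption and the precision of the foot-row detection.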
Lessons from UNSCOM and IAEA regarding remote monitoring and air sampling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dupree, S.A.
1996-01-01
In 1991, at the direction of the United Nations Security Council, UNSCOM and IAEA developed plans for On-going Monitoring and Verification (OMV) in Iraq. The plans were accepted by the Security Council, and remote monitoring and atmospheric sampling equipment has been installed at selected sites in Iraq. The remote monitoring equipment consists of video cameras and sensors positioned to observe equipment or activities at sites that could be used to support the development or manufacture of weapons of mass destruction or long-range missiles. The atmospheric sampling equipment provides unattended collection of chemical samples from sites that could be used to support the development or manufacture of chemical weapon agents. To support OMV in Iraq, UNSCOM has established the Baghdad Monitoring and Verification Centre. Imagery from the remote monitoring cameras can be accessed in near-real time from the Centre through RIF communication links with the monitored sites. The OMV program in Iraq has implications for international cooperative monitoring in both global and regional contexts. However, monitoring systems such as those used in Iraq are not sufficient, in and of themselves, to guarantee the absence of prohibited activities, and they cannot replace on-site inspections by competent, trained inspectors. Nevertheless, monitoring similar to that used in Iraq can contribute to openness and confidence building, to the development of mutual trust, and to the improvement of regional stability.
Automatic fog detection for public safety by using camera images
NASA Astrophysics Data System (ADS)
Pagani, Giuliano Andrea; Roth, Martin; Wauben, Wiel
2017-04-01
Fog and reduced visibility have considerable impact on the performance of road, maritime, and aeronautical transportation networks. The impact ranges from minor delays to more serious congestion or unavailability of the infrastructure and can even lead to damage or loss of life. Visibility is traditionally measured manually by meteorological observers using landmarks at known distances in the vicinity of the observation site. Nowadays, distributed cameras facilitate inspection of more locations from one remote monitoring center; the visibility or presence of fog is, however, still derived by an operator judging the scenery and the visibility of landmarks. Visibility sensors are also used, but they are rather costly and require regular maintenance. Moreover, observers, and in particular sensors, give only visibility information that is representative of a limited area. Hence the current density of visibility observations is insufficient to give detailed information on the presence of fog. Cameras are increasingly deployed for surveillance and security reasons in cities and for monitoring traffic along main transportation ways. In addition to this primary use, we consider cameras as potential sensors to automatically identify low-visibility conditions. The approach that we follow is to use machine learning techniques to determine the presence of fog and/or to estimate the visibility. For that purpose a set of features is extracted from the camera images, such as the number of edges, brightness, transmission of the image dark channel, and fractal dimension. In addition to these image features, we also consider meteorological variables such as wind speed, temperature, relative humidity, and dew point as additional features for the machine learning model.
Decision-tree classification of dense fog conditions (i.e., visibility below 250 meters), trained and evaluated on 10-minute sampled images from two KNMI locations over a period of 1.5 years, shows promising results in terms of accuracy and type I and II errors. We are currently extending the approach to images obtained with traffic-monitoring cameras along highways. This is a first step toward a solution that is closer to an operational artificial intelligence application for automatic fog alarm signaling for public safety.
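Two of the image features listed above (edge content and brightness) and a decision-stump stand-in for the decision-tree classifier can be sketched as follows; the gradient and decision thresholds are illustrative, not the operational KNMI values:

```python
import numpy as np

def fog_features(img):
    """Edge density and brightness features from a grayscale image in [0, 1].

    Fog suppresses local contrast, so the fraction of strong-gradient
    pixels drops, while the overall scene often brightens.  (The system
    described above adds dark-channel transmission, fractal dimension
    and meteorological variables.)
    """
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)
    edge_density = float(np.mean(grad > 0.05))
    return {"edge_density": edge_density, "brightness": float(img.mean())}

def is_dense_fog(feats, edge_thresh=0.05):
    """Decision-stump stand-in for the trained decision-tree classifier."""
    return feats["edge_density"] < edge_thresh

# Synthetic scenes: a high-contrast striped scene vs a uniform grey "fog".
clear = np.tile(np.array([0.0, 0.0, 1.0, 1.0]), 8)
clear = np.tile(clear, (32, 1))
foggy = np.full((32, 32), 0.8)
```

A real deployment would feed many such features per 10-minute image into a trained tree rather than thresholding one feature, but the edge statistic alone already separates these two extremes.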
Monitoring and Modeling the Impact of Grazers Using Visual, Remote and Traditional Field Techniques
NASA Astrophysics Data System (ADS)
Roadknight, C. M.; Marshall, I. W.; Rose, R. J.
2009-04-01
The relationship between wild and domestic animals and the landscape they graze upon is important to soil erosion studies because grazers strongly influence vegetation cover (a key control on the rate of overland flow runoff), and also because they contribute directly to sediment transport via carriage and indirectly by exposing fresh soil through trampling and burrowing/excavating. Quantifying the impacts of these effects on soil erosion, and their dependence on grazing intensity, in complex semi-natural habitats has proved difficult, due to the lack of manpower to collect sufficient data and weak standardization of data collection between observers. The advent of cheaper and more sophisticated digital camera technology and GPS tracking devices has led to an increase in the amount of habitat monitoring information being collected. We report on the use of automated trail cameras to continuously capture images of grazer (sheep, rabbit, deer) activity in a variety of habitats at the Moor House nature reserve in northern England. As well as grazer activity, these cameras also give valuable information on key climatic soil erosion factors such as snow, rain and wind, and on plant growth, and thus allow the importance of a range of grazer activities and the grazing intensity to be estimated. GPS collars and more well-established survey methods (erosion monitoring, dung counting and vegetation surveys) are being used to generate a detailed representation of land usage and to plan camera siting. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce data processing time and increase focus on important subsets of the collected data.
We also present a land usage model that estimates grazing intensity, grazer behaviours and their impact on soil coverage at sites where cameras have not been deployed, based on generalising from camera sites to other sites with similar morphology and ecology, where the GPS tracks indicate similar levels of grazer activity. This is ongoing research with results continually feeding back to the data collection regimes in terms of camera placement. This all makes a valuable contribution to the debate about the dynamics of grazing behaviour and its impact on soil erosion.
Method and apparatus for coherent imaging of infrared energy
Hutchinson, D.P.
1998-05-12
A coherent camera system performs ranging, spectroscopy, and thermal imaging. Local oscillator radiation is combined with target scene radiation to enable heterodyne detection by the coherent camera's two-dimensional photodetector array. Versatility enables deployment of the system in either a passive mode (where no laser energy is actively transmitted toward the target scene) or an active mode (where a transmitting laser is used to actively illuminate the target scene). The two-dimensional photodetector array eliminates the need to mechanically scan the detector. Each element of the photodetector array produces an intermediate frequency signal that is amplified, filtered, and rectified by the coherent camera's integrated circuitry. By spectroscopic examination of the frequency components of each pixel of the detector array, a high-resolution, three-dimensional or holographic image of the target scene is produced for applications such as air pollution studies, atmospheric disturbance monitoring, and military weapons targeting. 8 figs.
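The heterodyne principle the patent relies on can be stated in one line: each detector element measures the beat between the scene radiation and the local oscillator. A minimal numeric sketch follows, with illustrative frequencies (the actual wavelengths are not specified in this abstract):

```python
# Heterodyne detection in one line of arithmetic: mixing scene radiation with
# a local oscillator yields a beat at the intermediate frequency (IF) equal
# to the difference of the two optical frequencies. Numbers are illustrative.

f_lo = 28.3061e12          # local oscillator frequency, Hz (assumed value)
f_sig = 28.3061e12 + 45e6  # Doppler/spectrally shifted scene radiation

f_if = abs(f_sig - f_lo)   # what each photodetector element actually measures
print(f"{f_if / 1e6:.1f} MHz")
```

The IF lands in a radio-frequency band that ordinary integrated circuitry can amplify, filter, and rectify, which is why no optical-frequency electronics are needed at each pixel.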
Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.
Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki
2016-06-24
Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
NASA Astrophysics Data System (ADS)
Wang, Sheng; Bandini, Filippo; Jakobsen, Jakob; Zarco-Tejada, Pablo J.; Köppl, Christian Josef; Haugård Olesen, Daniel; Ibrom, Andreas; Bauer-Gottwein, Peter; Garcia, Monica
2017-04-01
Unmanned Aerial Systems (UAS) can collect optical and thermal hyperspatial (<1 m) imagery at low cost and with flexible revisit times, even under cloudy conditions. The reflectance and radiometric temperature signatures of the land surface, closely linked with vegetation structure and functioning, are already part of models to predict Evapotranspiration (ET) and Gross Primary Productivity (GPP) from satellites. However, challenges remain for operational monitoring with UAS compared to satellites: the payload capacity of most commercial UAS is less than 2 kg, miniaturized sensors have low signal-to-noise ratios, and their small field of view requires mosaicking hundreds of images and accurate orthorectification. In addition, wind gusts and lower platform stability require appropriate geometric and radiometric corrections. Finally, modeling fluxes on days without images is still an issue for both satellite and UAS applications. This study focuses on designing an operational UAS-based monitoring system, including payload design and sensor calibration, based on routine collection of optical and thermal images in a Danish willow field to perform joint monitoring of ET and GPP dynamics over continuous time at daily time steps. The payload (<2 kg) consists of a multispectral camera (Tetra Mini-MCA6), a thermal infrared camera (FLIR Tau 2), a digital camera (Sony RX-100) used to retrieve accurate digital elevation models (DEMs) for multispectral and thermal image orthorectification, and a standard GNSS single-frequency receiver (UBlox) or a real-time kinematic double-frequency system (Novatel Inc. flexpack6+OEM628). Geometric calibration of the digital and multispectral cameras was conducted to recover intrinsic camera parameters. After geometric calibration, accurate DEMs with vertical errors of about 10 cm could be retrieved.
Radiometric calibration of the multispectral camera was conducted with an integrating sphere (Labsphere CSTM-USS-2000C); the laboratory calibration showed that the camera-measured radiance had a bias within ±4.8%. The thermal camera was calibrated using a black body at varying target and ambient temperatures, yielding a laboratory accuracy with an RMSE of 0.95 K. A joint model of ET and GPP was applied using two parsimonious, physiologically based models: a modified version of the Priestley-Taylor Jet Propulsion Laboratory model (Fisher et al., 2008; Garcia et al., 2013) and a Light Use Efficiency approach (Potter et al., 1993). Both models estimate ET and GPP under optimum potential conditions, down-regulated by the same biophysical constraints, which depend on remote sensing and atmospheric data to reflect multiple stresses. Vegetation indices were calculated from the multispectral data to assess vegetation conditions, while the thermal infrared imagery was used to compute a thermal inertia index to infer soil moisture constraints. To interpolate radiometric temperature between flights, a prognostic Surface Energy Balance model (Margulis et al., 2001) based on the force-restore method was applied in a data assimilation scheme to obtain continuous ET and GPP fluxes. With this operational system, regular flight campaigns with a hexacopter (DJI S900) were conducted at a Danish willow flux site (Risø) over the 2016 growing season. The observed energy, water and carbon fluxes from the Risø eddy covariance flux tower were used to validate the model simulation. This UAS monitoring system is suitable for agricultural management and land-atmosphere interaction studies.
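Two of the quantities mentioned above are easy to make concrete. The sketch below computes a standard NDVI-style vegetation index and the RMSE metric of the kind used to report the 0.95 K thermal accuracy; the band values and temperatures are invented, and NDVI stands in for whichever indices the authors actually used.

```python
import math

# Illustrative versions of two quantities from the abstract. All numeric
# inputs are invented; NDVI is shown as a representative vegetation index.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def rmse(measured, reference):
    """Root-mean-square error, the metric behind the 0.95 K thermal figure."""
    return math.sqrt(
        sum((m - r) ** 2 for m, r in zip(measured, reference)) / len(measured)
    )

print(round(ndvi(0.45, 0.08), 3))  # healthy vegetation -> high NDVI, 0.698
print(round(rmse([300.8, 301.2, 299.5], [300.0, 300.0, 300.0]), 3))  # 0.881
```

In the study's pipeline, indices like this feed the biophysical constraints of the ET/GPP models, while the RMSE summarizes black-body calibration residuals.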
NASA Astrophysics Data System (ADS)
Robert, K.; Matabos, M.; Sarrazin, J.; Sarradin, P.; Lee, R. W.; Juniper, K.
2010-12-01
Hydrothermal vent environments are among the most dynamic benthic habitats in the ocean. The relative roles of physical and biological factors in shaping vent community structure remain unclear. Undersea cabled observatories offer the power and bandwidth required for high-resolution, time-series study of the dynamics of vent communities and the physico-chemical forces that influence them. The NEPTUNE Canada cabled instrument array at the Endeavour hydrothermal vents provides a unique laboratory for researchers to conduct long-term, integrated studies of hydrothermal vent ecosystem dynamics in relation to environmental variability. Beginning in September-October 2010, NEPTUNE Canada (NC) will be deploying a multi-disciplinary suite of instruments on the Endeavour Segment of the Juan de Fuca Ridge. Two camera and sensor systems will be used to study ecosystem dynamics in relation to hydrothermal discharge. These studies will make use of new experimental protocols for time-series observations that we have been developing since 2008 at other observatory sites connected to the VENUS and NC networks. These protocols include sampling design, camera calibration (i.e. structure, position, light, settings) and image analysis methodologies (see communication by Aron et al.). The camera systems to be deployed in the Main Endeavour vent field include a Sidus high definition video camera (2010) and the TEMPO-mini system (2011), designed by IFREMER (France). Real-time data from three sensors (O2, dissolved Fe, temperature) integrated with the TEMPO-mini system will enhance interpretation of imagery. For the first year of observations, a suite of internally recording temperature probes will be strategically placed in the field of view of the Sidus camera. 
These installations aim to monitor variations in vent community structure and dynamics (species composition and abundances, interactions within and among species) in response to changes in environmental conditions at different temporal scales. High-resolution time-series studies also provide a means of studying population dynamics, biological rhythms, organism growth and faunal succession. In addition to programmed time-series monitoring, the NC infrastructure will also permit manual and automated modification of observational protocols in response to natural events. This will enhance our ability to document potentially critical but short-lived environmental forces affecting vent communities.
Novel health monitoring method using an RGB camera.
Hassan, M A; Malik, A S; Fofi, D; Saad, N; Meriaudeau, F
2017-11-01
In this paper, we present a novel health monitoring method that estimates heart rate and respiratory rate using an RGB camera. The heart rate and the respiratory rate are estimated from the photoplethysmography (PPG) signal and the respiratory motion. The method mainly operates on the green spectrum of the RGB camera to generate a multivariate PPG signal and performs multivariate de-noising on the video signal to extract the resultant PPG signal. A periodicity-based voting scheme (PVS) was used to measure the heart rate and respiratory rate from the estimated PPG signal. We evaluated the proposed method against a state-of-the-art heart rate measuring method in two scenarios, using the MAHNOB-HCI database and a self-collected naturalistic-environment database. The methods were furthermore evaluated in various naturalistic-environment scenarios, such as a motion-variance session and a skin-tone-variance session. Our proposed method operated robustly during the experiments and outperformed the state-of-the-art heart rate measuring methods by compensating for the effects of the naturalistic environment.
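A heavily simplified sketch of the core signal path: the mean green-channel intensity over time forms a PPG-like signal whose dominant frequency gives the heart rate. The 30 fps sampling rate and the synthetic 1.2 Hz (72 bpm) pulse are invented, and the paper's multivariate de-noising and PVS voting are omitted.

```python
import math

# Toy camera-PPG heart-rate estimate: find the dominant non-DC frequency of
# a synthetic green-channel signal via a brute-force DFT. All parameters
# (frame rate, pulse frequency) are assumptions for illustration.

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the largest non-DC DFT component."""
    n = len(signal)
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fs / n

fs = 30.0                          # webcam frame rate (frames/s), assumed
t = [i / fs for i in range(300)]   # 10 s of green-channel means
ppg = [0.5 * math.sin(2 * math.pi * 1.2 * ti) for ti in t]  # 72 bpm pulse

hr_hz = dominant_frequency(ppg, fs)
print(round(hr_hz * 60))           # heart rate in beats per minute -> 72
```

A real recording would first need the de-noising step, since motion and lighting changes swamp the sub-percent pulsatile intensity variation.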
REVIEW OF DEVELOPMENTS IN SPACE REMOTE SENSING FOR MONITORING RESOURCES.
Watkins, Allen H.; Lauer, D.T.; Bailey, G.B.; Moore, D.G.; Rohde, W.G.
1984-01-01
Space remote sensing systems are compared for suitability in assessing and monitoring the Earth's renewable resources. Systems reviewed include the Landsat Thematic Mapper (TM), the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR), the French Systeme Probatoire d'Observation de la Terre (SPOT), the German Shuttle Pallet Satellite (SPAS) Modular Optoelectronic Multispectral Scanner (MOMS), the European Space Agency (ESA) Spacelab Metric Camera, the National Aeronautics and Space Administration (NASA) Large Format Camera (LFC) and Shuttle Imaging Radar (SIR-A and -B), the Russian Meteor satellite BIK-E and fragment experiments and MKF-6M and KATE-140 camera systems, the ESA Earth Resources Satellite (ERS-1), the Japanese Marine Observation Satellite (MOS-1) and Earth Resources Satellite (JERS-1), the Canadian Radarsat, the Indian Resources Satellite (IRS), and systems proposed or planned by China, Brazil, Indonesia, and others. Also reviewed are the concepts for a 6-channel Shuttle Imaging Spectroradiometer, a 128-channel Shuttle Imaging Spectrometer Experiment (SISEX), and the U. S. Mapsat.
CCD Camera Lens Interface for Real-Time Theodolite Alignment
NASA Technical Reports Server (NTRS)
Wake, Shane; Scott, V. Stanley, III
2012-01-01
Theodolites are a common instrument in the testing, alignment, and building of various systems ranging from a single optical component to an entire instrument. They provide a precise way to measure horizontal and vertical angles. They can be used to align multiple objects in a desired way at specific angles. They can also be used to reference a specific location or orientation of an object that has moved. Some systems may require a small margin of error in position of components. A theodolite can assist with accurately measuring and/or minimizing that error. The technology is an adapter for a CCD camera with lens to attach to a Leica Wild T3000 Theodolite eyepiece that enables viewing on a connected monitor, and thus can be utilized with multiple theodolites simultaneously. This technology removes a substantial part of human error by relying on the CCD camera and monitors. It also allows image recording of the alignment, and therefore provides a quantitative means to measure such error.
Automatic portion estimation and visual refinement in mobile dietary assessment
Woo, Insoo; Otsmo, Karl; Kim, SungYe; Ebert, David S.; Delp, Edward J.; Boushey, Carol J.
2011-01-01
As concern for obesity grows, the need for automated and accurate methods to monitor nutrient intake becomes essential as dietary intake provides a valuable basis for managing dietary imbalance. Moreover, as mobile devices with built-in cameras have become ubiquitous, one potential means of monitoring dietary intake is photographing meals using mobile devices and having an automatic estimate of the nutrient contents returned. One of the challenging problems of the image-based dietary assessment is the accurate estimation of food portion size from a photograph taken with a mobile digital camera. In this work, we describe a method to automatically calculate portion size of a variety of foods through volume estimation using an image. These “portion volumes” utilize camera parameter estimation and model reconstruction to determine the volume of food items, from which nutritional content is then extrapolated. In this paper, we describe our initial results of accuracy evaluation using real and simulated meal images and demonstrate the potential of our approach. PMID:22242198
Beach Observations using Quadcopter Imagery
NASA Astrophysics Data System (ADS)
Yang, Yi-Chung; Wang, Hsing-Yu; Fang, Hui-Ming; Hsiao, Sung-Shan; Tsai, Cheng-Han
2017-04-01
Beaches are the places where the interaction of the land and sea takes place, and they are under the influence of many environmental factors, including meteorological and oceanic ones. Understanding the evolution or changes of beaches may require constant monitoring. One way to monitor beach changes is to use optical cameras. With careful placement of ground control points, land-based optical cameras, which are inexpensive compared to other remote sensing apparatuses, can be used to survey a relatively large area in a short time. For example, we have used terrestrial optical cameras incorporating ground control points to monitor beaches. The images from the cameras were calibrated by applying the direct linear transformation, projective transformation, and Sobel edge detector to locate the shoreline. The terrestrial optical cameras can record the beach images continuously, and the shorelines can be satisfactorily identified. However, the terrestrial cameras have some limitations. First, the camera system must be set at a sufficiently high level so that it can cover the whole area of interest; such a location may not be available. The second limitation is that objects in the image have different resolutions, depending on their distance from the cameras. To overcome these limitations, the present study tested a quadcopter equipped with a down-looking camera to record video and still images of a beach. The quadcopter can be controlled to hover at one location. However, the hovering of the quadcopter can be affected by the wind, since it is not positively anchored to a structure. Although the quadcopter has a gimbal mechanism to damp out small shakings of the copter, it will not completely counter movements due to the wind. In our preliminary tests, we flew the quadcopter up to 500 m high to record 10-minute videos. We then took a 10-minute average of the video data.
The averaged image of the coast was blurred because of the duration of the video and the small movements caused by the quadcopter returning to its original position against the wind. To solve this problem, the Speeded Up Robust Features (SURF) feature detection method was applied to the video frames, and the resulting image was much sharper than the original. Next, we extracted the maximum and minimum RGB values of each pixel over the 10-minute videos. The beach breaker zone showed up in the maximum-RGB image as white areas. Moreover, we were also able to remove the breakers from the images and see the bottom features of the breaker zone using the minimum RGB values. From this test, we also identified the location of the coastline. The correlation coefficient between the coastline identified from the copter imagery and that from the ground survey was as high as 0.98. By repeating this flight at different times, we could measure the evolution of the coastline.
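The per-pixel extremes described above can be sketched in a few lines: across the frames of a video, the maximum at each pixel highlights transient white breakers, while the minimum suppresses them and reveals the bottom. The tiny single-channel "frames" below are invented for illustration.

```python
# Per-pixel temporal extremes over video frames, as used to separate the
# breaker zone (maximum) from the seabed (minimum). Frames are tiny
# single-channel arrays given as lists of rows; real footage would be RGB.

def pixelwise(frames, op):
    """Apply op (max or min) at each pixel position across all frames."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [
        [op(f[r][c] for f in frames) for c in range(cols)]
        for r in range(rows)
    ]

# Three frames: a bright breaker (value 255) sweeps across a dark seabed (40).
frames = [
    [[255, 40, 40]],
    [[40, 255, 40]],
    [[40, 40, 255]],
]

print(pixelwise(frames, max))  # breaker zone: [[255, 255, 255]]
print(pixelwise(frames, min))  # seabed view:  [[40, 40, 40]]
```

The same reduction applied after SURF-based frame registration keeps the extremes spatially consistent despite the quadcopter's drift.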
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Lin; Kien Ng, Sook; Zhang, Ying
Purpose: Ultrasound is ideal for real-time monitoring in radiotherapy, offering high soft-tissue contrast, non-ionizing operation, portability, and cost effectiveness. Few studies have investigated the clinical application of real-time ultrasound monitoring for abdominal stereotactic body radiation therapy (SBRT). This study aims to demonstrate the feasibility of real-time monitoring of 3D target motion using 4D ultrasound. Methods: An ultrasound probe holding system was designed to allow the clinician to freely move and lock the ultrasound probe. For the phantom study, an abdominal ultrasound phantom was secured on a 2D programmable respiratory motion stage. One side of the stage was elevated relative to the other to generate 3D motion. The motion stage made periodic breath-hold movements. Phantom movement tracked by an infrared camera was taken as ground truth. For the volunteer study, three healthy subjects underwent the same setup used for abdominal SBRT with active breathing control (ABC). 4D ultrasound B-mode images were acquired for both phantom and volunteers for real-time monitoring. Ten breath-hold cycles were monitored in each experiment. For the phantom, the target motion tracked by ultrasound was compared with the motion tracked by the infrared camera. For the healthy volunteers, the reproducibility of the ABC breath-hold was evaluated. Results: The volunteer study showed that the ultrasound system fitted well into the clinical SBRT setup. The reproducibility over 10 breath-holds was less than 2 mm in all three directions for all three volunteers. For the phantom study, the motion between inspiration and expiration captured by the camera (ground truth) was 2.35±0.02 mm, 1.28±0.04 mm, and 8.85±0.03 mm in the LR, AP, and SI directions, respectively. The motion monitored by ultrasound was 2.21±0.07 mm, 1.32±0.12 mm, and 9.10±0.08 mm, respectively. The motion monitoring error in any direction was less than 0.5 mm. Conclusion: The volunteer study demonstrated the clinical feasibility of real-time ultrasound monitoring for abdominal SBRT.
The phantom and volunteer ABC studies demonstrated sub-millimeter accuracy of 3D motion monitoring.
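The phantom comparison reduces to a per-direction difference between the two tracking systems. The sketch below reuses the mean values quoted in the abstract; the comparison itself is illustrative, not the authors' analysis code.

```python
# Comparing ultrasound-tracked motion against the infrared-camera ground
# truth, per direction. The means are taken from the abstract; the
# comparison is an illustration of the sub-millimeter accuracy claim.

camera_mm = {"LR": 2.35, "AP": 1.28, "SI": 8.85}      # ground truth
ultrasound_mm = {"LR": 2.21, "AP": 1.32, "SI": 9.10}  # monitored

errors = {d: abs(ultrasound_mm[d] - camera_mm[d]) for d in camera_mm}
print(errors)
print(all(e < 0.5 for e in errors.values()))  # sub-millimeter claim holds
```

The per-direction errors (0.14, 0.04, and 0.25 mm) are all well under the 0.5 mm bound reported in the abstract.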
Infrared Thermography As Quality Control For Foamed In-Place Insulation
NASA Astrophysics Data System (ADS)
Schwartz, Joel A.
1989-03-01
Since November of 1985, FOAM-TECH, INC. has been using an I.S.I. Model 91 Videotherm camera for quality control of the installation of foamed-in-place polyurethane and polyisocyanurate insulation. Monitoring the injection of foam into the walls and roofs of new construction, and during the retrofitting of older buildings, has become an integral and routine step in daily operations. The Videotherm is also used to monitor the injection of foam into hot water tanks, trailer bodies for refrigeration trucks, and pontoons and buoys for flotation. The camera is also used for the detection of heat loss and air infiltration in conventionally insulated buildings. Appendix A contains thermograms of foamed-in-place insulation.
Voice control of the space shuttle video system
NASA Technical Reports Server (NTRS)
Bejczy, A. K.; Dotson, R. S.; Brown, J. W.; Lewis, J. L.
1981-01-01
A pilot voice control system developed at the Jet Propulsion Laboratory (JPL) to test and evaluate the feasibility of controlling the shuttle TV cameras and monitors by voice commands utilizes a commercially available discrete word speech recognizer which can be trained to the individual utterances of each operator. Successful ground tests were conducted using a simulated full-scale space shuttle manipulator. The test configuration involved the berthing, maneuvering and deploying a simulated science payload in the shuttle bay. The handling task typically required 15 to 20 minutes and 60 to 80 commands to 4 TV cameras and 2 TV monitors. The best test runs show 96 to 100 percent voice recognition accuracy.
NASA Technical Reports Server (NTRS)
Philpott, D. E.; Harrison, G.; Turnbill, C.; Bailey, P. F.
1979-01-01
Research on retinal circulation during space flight required the development of a simple technique for self-monitoring of blood vessel changes in the fundus without the use of mydriatics. A Kowa RC-2 fundus camera was modified for self-photography by the use of a bite plate for positioning and cross hairs for focusing the subject's retina relative to the film plane. Dilation of the pupils without the use of mydriatics was accomplished by dark-adaptation of the subject. Pictures were obtained without pupil constriction by the use of a high-speed strobe light. This method also has applications in clinical medicine.
The VMC Survey. XIX. Classical Cepheids in the Small Magellanic Cloud
NASA Astrophysics Data System (ADS)
Ripepi, V.; Marconi, M.; Moretti, M. I.; Clementini, G.; Cioni, M.-R. L.; de Grijs, R.; Emerson, J. P.; Groenewegen, M. A. T.; Ivanov, V. D.; Piatti, A. E.
2016-06-01
The “VISTA near-infrared YJKs survey of the Magellanic Clouds System” (VMC) is collecting deep Ks-band time-series photometry of pulsating variable stars hosted by the two Magellanic Clouds and their connecting Bridge. In this paper, we present Y, J, Ks light curves for a sample of 4172 Small Magellanic Cloud (SMC) Classical Cepheids (CCs). These data, complemented with literature V values, allowed us to construct a variety of period-luminosity (PL), period-luminosity-color (PLC), and period-Wesenheit (PW) relationships, valid for Fundamental (F), First Overtone (FO), and Second Overtone (SO) pulsators. The relations involving the V, J, and Ks bands are in agreement with their counterparts in the literature. As for the Y band, to our knowledge, we present the first CC PL, PW, and PLC relations ever derived using this filter. We also present the first near-infrared PL, PW, and PLC relations for SO pulsators to date. We used PW(V, Ks) to estimate the relative SMC-LMC distance and, in turn, the absolute distance to the SMC. For the former quantity, we find a value of Δμ = 0.55 ± 0.04 mag, in rather good agreement with other evaluations based on CCs, but significantly larger than the results obtained from older population II distance indicators. This discrepancy might be due to the different geometric distributions of young and old tracers in both Clouds. As for the absolute distance to the SMC, our best estimates are μ_SMC = 19.01 ± 0.05 mag and μ_SMC = 19.04 ± 0.06 mag, based on two distance measurements to the LMC which rely on accurate CC and eclipsing Cepheid binary data, respectively.
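The absolute moduli quoted above follow by adding the relative modulus Δμ to an adopted LMC distance modulus. For example, with the eclipsing-binary LMC value μ_LMC = 18.49 mag (an assumed reference value, not stated in this abstract), the arithmetic reproduces one of the quoted SMC estimates:

```latex
% Illustrative arithmetic, not taken from the paper's text:
\mu_{\mathrm{SMC}} = \mu_{\mathrm{LMC}} + \Delta\mu
                   = 18.49 + 0.55
                   = 19.04\ \mathrm{mag}
```

The two quoted values differ only through the two independent LMC distance anchors (Cepheid-based versus eclipsing-binary-based) that enter the sum.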
Design Considerations for Attitude State Awareness and Prevention of Entry into Unusual Attitudes
NASA Technical Reports Server (NTRS)
Ellis, Kyle K. E.; Prinzel, Lawrence J., III; Arthur, Jarvis J.; Nicholas, Stephanie N.; Kiggins, Daniel; Verstynen, Harry; Hubbs, Clay; Wilkerson, James
2017-01-01
Loss of control - inflight (LOC-I) has historically represented the largest category of commercial aviation fatal accidents. A review of worldwide transport airplane accidents (2001-2010) showed that loss of attitude or energy state awareness was responsible for a large majority of the LOC-I events. A Commercial Aviation Safety Team (CAST) study of 18 worldwide loss-of-control accidents and incidents determined that flight crew loss of attitude awareness or energy state awareness due to lack of external visual reference cues was a significant causal factor in 17 of the 18 reviewed flights. CAST recommended that "Virtual Day-Visual Meteorological Condition" (Virtual Day-VMC) displays be developed to provide the visual cues necessary to prevent loss of control resulting from flight crew spatial disorientation and loss of energy state awareness. Synthetic vision or equivalent systems (SVS) were identified for a design "safety enhancement" (SE-200). Part of this SE involves the conduct of research for developing minimum aviation system performance standards (MASPS) for these flight deck display technologies to aid flight crew attitude and energy state awareness, similar to that of a virtual day-VMC-like environment. This paper describes a novel experimental approach to evaluating a flight crew's ability to maintain attitude awareness and to prevent entry into unusual attitudes across several SVS optical flow design considerations. Flight crews were subjected to compound-event scenarios designed to elicit channelized attention and startle/surprise within the crew. These high-fidelity scenarios, designed from real-world events, enable evaluation of the efficacy of SVS at improving flight crew attitude awareness to reduce the occurrence of LOC-I incidents in commercial flight operations.
Fongaro, Gislaine; García-González, María C.; Hernández, Marta; Kunz, Airton; Barardi, Célia R. M.; Rodríguez-Lázaro, David
2017-01-01
Enteric pathogens from biofertilizer can accumulate in the soil, subsequently contaminating water and crops. We evaluated the survival, percolation and leaching of model enteric pathogens in clay and sandy soils after biofertilization with swine digestate: PhiX-174, mengovirus (vMC0), Salmonella enterica Typhimurium and Escherichia coli O157:H7 were used as biomarkers. The survival of vMC0 and PhiX-174 in clay soil was significantly lower than in sandy soil (iT90 values of 10.520 ± 0.600 vs. 21.270 ± 1.100 and 12.040 ± 0.010 vs. 43.470 ± 1.300, respectively) and PhiX-174 showed faster percolation and leaching in sandy soil than clay soil (iT90 values of 0.46 and 2.43, respectively). S. enterica Typhimurium was percolated and inactivated more slowly than E. coli O157:H7 (iT90 values of 9.340 ± 0.200 vs. 6.620 ± 0.500 and 11.900 ± 0.900 vs. 10.750 ± 0.900 in clay and sandy soils, respectively), such that E. coli O157:H7 was transferred more quickly to the deeper layers of both soils evaluated (percolation). Our findings suggest that E. coli O157:H7 may serve as a useful microbial biomarker of depth contamination and leaching in clay and sandy soil and that bacteriophage could be used as an indicator of enteric pathogen persistence. Our study contributes to development of predictive models for enteric pathogen behavior in soils, and for potential water and food contamination associated with biofertilization, useful for risk management and mitigation in swine digestate recycling. PMID:28197137
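A hedged sketch of how a decimal-reduction time such as T90 can be estimated: fit log10 counts against time and take -1/slope, i.e. the days needed for a one-log10 decline. The abstract's "iT90" may be defined slightly differently, and the synthetic counts below are invented.

```python
import math

# Decimal-reduction-time (T90) estimate from a least-squares fit of
# log10(counts) vs. time. This is a generic illustration; the abstract's
# iT90 metric may use a different convention, and the data are synthetic.

def t90(days, counts):
    """Least-squares slope of log10(counts) vs. days; T90 = -1/slope."""
    logs = [math.log10(c) for c in counts]
    n = len(days)
    mx, my = sum(days) / n, sum(logs) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(days, logs)) / \
        sum((x - mx) ** 2 for x in days)
    return -1.0 / slope

# Synthetic decay: one log10 drop every 10 days.
days = [0, 10, 20, 30]
counts = [1e6, 1e5, 1e4, 1e3]
print(round(t90(days, counts), 1))  # -> 10.0
```

On this convention, a larger value (as reported for sandy versus clay soil) means slower inactivation and longer pathogen persistence.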
The effect of amblyopia on fine motor skills in children.
Webber, Ann L; Wood, Joanne M; Gole, Glen A; Brown, Brian
2008-02-01
In an investigation of the functional impact of amblyopia in children, the fine motor skills of amblyopes and age-matched control subjects were compared. The influence of visual factors that might predict any decrement in fine motor skills was also explored. Vision and fine motor skills were tested in a group of children (n = 82; mean age, 8.2 +/- 1.7 [SD] years) with amblyopia of different causes (infantile esotropia, n = 17; acquired strabismus, n = 28; anisometropia, n = 15; mixed, n = 13; and deprivation, n = 9), and age-matched control children (n = 37; age 8.3 +/- 1.3 years). Visual motor control (VMC) and upper limb speed and dexterity (ULSD) items of the Bruininks-Oseretsky Test of Motor Proficiency were assessed, and logMAR visual acuity (VA) and Randot stereopsis were measured. Multiple regression models were used to identify the visual determinants of fine motor skills performance. Amblyopes performed significantly worse than control subjects on 9 of 16 fine motor skills subitems and on the overall age-standardized scores for both VMC and ULSD items (P < 0.05). The effects were most evident on timed tasks. The etiology of amblyopia and the level of binocular function significantly affected fine motor skill performance on both items; however, when examined in a multiple regression model that took into account the intercorrelation between visual characteristics, poorer fine motor skills performance was associated with strabismus (F(1,75) = 5.428; P = 0.022), but not with the level of binocular function, refractive error, or visual acuity in either eye. Fine motor skills were reduced in children with amblyopia, particularly those with strabismus, compared with control subjects. The deficits in motor performance were greatest on manual dexterity tasks requiring speed and accuracy.
Wan, Fangfang; Yan, Kepeng; Xu, Dan; Qian, Qian; Liu, Hui; Li, Min; Xu, Wei
2017-01-01
Viral myocarditis (VMC) is an inflammation of the myocardium closely associated with Coxsackievirus B3 (CVB3) infection. Vγ1+ γδT cells, one of the early cardiac-infiltrating innate populations, have been reported to protect against CVB3 myocarditis, although the precise mechanism has not been fully addressed. To explore the cytokine profiles and kinetics of Vγ1+ γδT cells and the mechanism of protection against VMC, flow cytometry was conducted on cardiac Vγ1 cells in C57BL/6 mice following CVB3 infection. The level of cardiac inflammation, transthoracic echocardiography and viral replication were evaluated after monoclonal antibody depletion of Vγ1 γδT cells. We found that Vγ1+ γδT cell infiltration peaked in the heart at day 3 post CVB3 infection and constituted a minor source of IFN-γ but a major source of early IL-4. Vγ1 γδT cells were activated earlier and had a higher IL-4-producing efficiency than CD4+ Th cells in the heart. Depletion of Vγ1+ γδT cells resulted in significantly exacerbated cardiac infiltration, increased T cell, macrophage and neutrophil populations in heart homogenates and worse cardiomyopathy, accompanied by a significant expansion of peripheral IFNγ+ CD4+ and CD8+ T cells. Neutralization of IL-4 in mice resulted in exacerbated acute myocarditis, confirming the IL-4-mediated protective mechanism of Vγ1 cells. Our findings identify a unique property of Vγ1+ γδT cells as a dominant early producer of IL-4 upon acute CVB3 infection, which is a key mediator protecting mice against acute myocarditis by modulating the IFNγ-secreting T cell response. Copyright © 2016 Elsevier Ltd. All rights reserved.
The VMC survey - XXV. The 3D structure of the Small Magellanic Cloud from Classical Cepheids
NASA Astrophysics Data System (ADS)
Ripepi, Vincenzo; Cioni, Maria-Rosa L.; Moretti, Maria Ida; Marconi, Marcella; Bekki, Kenji; Clementini, Gisella; de Grijs, Richard; Emerson, Jim; Groenewegen, Martin A. T.; Ivanov, Valentin D.; Molinaro, Roberto; Muraveva, Tatiana; Oliveira, Joana M.; Piatti, Andrés E.; Subramanian, Smitha; van Loon, Jacco Th.
2017-11-01
The VISTA near-infrared YJKs survey of the Magellanic System (VMC) is collecting deep Ks-band time-series photometry of pulsating stars hosted by the two Magellanic Clouds and their connecting bridge. Here, we present Y, J, Ks light curves for a sample of 717 Small Magellanic Cloud (SMC) Classical Cepheids (CCs). These data, complemented with our previous results and V magnitudes from the literature, allowed us to construct a variety of period-luminosity and period-Wesenheit relationships, valid for Fundamental, First and Second Overtone pulsators. These relations provide accurate individual distances to CCs in the SMC over an area of more than 40 deg2. Adopting literature relations, we estimated ages and metallicities for the majority of the investigated pulsators, finding that (i) the age distribution is bimodal, with two peaks at 120 ± 10 and 220 ± 10 Myr; and (ii) the more metal-rich CCs appear to be located closer to the centre of the galaxy. Our results show that the three-dimensional distribution of the CCs in the SMC is not planar but heavily elongated, extending for more than 25-30 kpc approximately along the east/north-east to south-west direction. The young and old CCs in the SMC show a different geometric distribution. Our data support the current theoretical scenario predicting a close encounter or a direct collision between the Clouds some 200 Myr ago and confirm the presence of a Counter-Bridge predicted by some models. The high-precision three-dimensional distribution of young stars presented in this paper provides a new test bed for future models exploring the formation and evolution of the Magellanic System.
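As an illustration of the period-Wesenheit machinery described above, the sketch below builds a reddening-free Wesenheit index from J and Ks magnitudes and converts it to a distance. The colour coefficient, slope and zero point are placeholder values for illustration, not the relations fitted in the paper:

```python
import math

def wesenheit_jk(ks_mag, j_mag, color_coeff=0.69):
    """Reddening-free Wesenheit index W = Ks - c*(J - Ks); the colour
    coefficient here is an illustrative value, not the paper's fit."""
    return ks_mag - color_coeff * (j_mag - ks_mag)

def cepheid_distance_kpc(period_days, w_apparent, slope=-3.3, zero_point=-2.7):
    """Distance from an assumed period-Wesenheit relation
    M_W = slope*log10(P) + zero_point (illustrative coefficients)."""
    m_abs = slope * math.log10(period_days) + zero_point
    mu = w_apparent - m_abs              # distance modulus
    return 10 ** (mu / 5 + 1) / 1000.0   # parsecs -> kpc

# A 10-day Cepheid with apparent W ≈ 12.9 lands near the SMC (~60 kpc)
w = wesenheit_jk(ks_mag=13.59, j_mag=14.59)
print(round(cepheid_distance_kpc(10.0, w), 1))
```

Because W is constructed to cancel interstellar extinction, the individual distances are less sensitive to reddening than plain period-luminosity distances.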
NASA Astrophysics Data System (ADS)
Hueso, Ricardo; Garate-Lopez, I.; Peralta, J.; Bandos, T.; Sánchez-Lavega, A.
2013-10-01
After more than 6 years orbiting Venus, the Venus Express mission has provided the largest database of observations of the Venus atmosphere at different cloud layers through the combination of the VMC and VIRTIS instruments. We present measurements of cloud motions in the southern hemisphere of Venus, analyzing images from the VIRTIS-M visible channel at different wavelengths sensitive to the upper cloud haze at 65-70 km height (dayside ultraviolet images) and the middle cloud deck (dayside visible and near-infrared images around 1 μm), about 5-8 km deeper in the atmosphere. We combine VIRTIS images at nearby wavelengths to increase the contrast of atmospheric details, and measurements were obtained with a semi-automatic cloud correlation algorithm. Both cloud layers are studied simultaneously to infer similarities and differences between these vertical levels in terms of cloud morphologies and winds. For both levels we present global mean zonal and meridional winds, the latitudinal distribution of winds with local time and the wind shear between the two altitudes. The upper branch of the Hadley cell circulation is well resolved in UV images, with an acceleration of the meridional circulation at mid-latitudes with increasing local time, peaking at 14-16 h. This organized meridional circulation is almost absent in NIR images. Long-term variability of zonal winds is also found in UV images, with winds increasing over the course of the VEX mission. This is in agreement with current analysis of VMC images (Khatuntsev et al. 2013). The possible long-term acceleration of zonal winds is also examined for NIR images. References: Khatuntsev et al. Icarus 226, 140-158 (2013)
Feasibility of an endotracheal tube-mounted camera for percutaneous dilatational tracheostomy.
Grensemann, J; Eichler, L; Hopf, S; Jarczak, D; Simon, M; Kluge, S
2017-07-01
Percutaneous dilatational tracheostomy (PDT) in critically ill patients is often led by optical guidance with a bronchoscope. This is not without its disadvantages. Therefore, we aimed to study the feasibility of a recently introduced endotracheal tube-mounted camera (VivaSight™-SL, ET View, Misgav, Israel) in the guidance of PDT. We studied 10 critically ill patients who received PDT with a VivaSight-SL tube that was inserted prior to tracheostomy for optical guidance. Visualization of the tracheal structures (i.e., identification and monitoring of the thyroid, cricoid, and tracheal cartilage and the posterior wall) and the quality of ventilation (before puncture and during the tracheostomy) were rated on four-point Likert scales. Respiratory variables were recorded, and blood gases were sampled before the interventions, before the puncture and before the insertion of the tracheal cannula. Visualization of the tracheal landmarks was rated as 'very good' or 'good' in all but one case. Monitoring during the puncture and dilatation was also rated as 'very good' or 'good' in all but one. In the cases that were rated 'difficult', the visualization and monitoring of the posterior wall of the trachea were the main concerns. No changes in the respiratory variables or blood gases occurred between the puncture and the insertion of the tracheal cannula. Percutaneous dilatational tracheostomy with optical guidance from a tube-mounted camera is feasible. Further studies comparing the camera tube with bronchoscopy as the standard approach should be performed. © 2017 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
McKinley, John B.; Pierson, Roger; Ertem, M. C.; Krone, Norris J., Jr.; Cramer, James A.
2008-04-01
Flight tests were conducted at Greenbrier Valley Airport (KLWB) and Easton Municipal Airport / Newnam Field (KESN) in a Cessna 402B aircraft using a head-up display (HUD) and a Norris Electro Optical Systems Corporation (NEOC) developmental ultraviolet (UV) sensor. These flights were sponsored by NEOC under a Federal Aviation Administration program, and the ultraviolet concepts, technology, system mechanization, and hardware for landing during low visibility conditions have been patented by NEOC. Imagery from the UV sensor, HUD guidance cues, and out-the-window videos were separately recorded at the engineering workstation for each approach. Inertial flight path data were also recorded. Various configurations of portable UV emitters were positioned along the runway edge and threshold. The UV imagery of the runway outline was displayed on the HUD along with guidance generated from the mission computer. Enhanced Flight Vision System (EFVS) approaches with the UV sensor were conducted from the initial approach fix to the ILS decision height in both VMC and IMC. Although the availability of low visibility conditions during the flight test period was limited, results from previous fog range testing concluded that UV EFVS has the performance capability to penetrate CAT II runway visual range obscuration. Furthermore, independent analysis has shown that existing runway lights emit sufficient UV radiation without the need for augmentation other than lens replacement with UV-transmissive quartz lenses. Consequently, UV sensors should qualify as conforming to FAA requirements for EFVS approaches. Combined with a Synthetic Vision System (SVS), UV EFVS would function both as a precision landing aid and as an integrity monitor for the GPS and SVS database.
Visual Sensing for Urban Flood Monitoring
Lo, Shi-Wei; Wu, Jyh-Horng; Lin, Fang-Pang; Hsu, Ching-Han
2015-01-01
With increasing climatic extremes, the frequency and severity of urban flood events have intensified worldwide. In this study, image-based automated monitoring of flood formation and analyses of water level fluctuation were proposed as value-added intelligent sensing applications to turn a passive monitoring camera into a visual sensor. Combined with the proposed visual sensing method, traditional hydrological monitoring cameras gain the ability to sense and analyze the local situation of flood events. This solves the current problem that image-based flood monitoring relies heavily on continuous manned monitoring. Conventional sensing networks can only offer one-dimensional physical parameters measured by gauge sensors, whereas visual sensors can acquire dynamic image information of monitored sites and provide disaster prevention agencies with actual field information for decision-making to relieve flood hazards. The visual sensing method established in this study provides spatiotemporal information that can be used for automated remote analysis for monitoring urban floods. This paper focuses on the determination of flood formation based on image-processing techniques. The experimental results suggest that the visual sensing approach may be a reliable way to determine water-level fluctuation and to measure its elevation and flood intrusion with respect to real-world coordinates. The performance of the proposed method has been confirmed; it has the capability to monitor and analyze the flood status and can therefore serve as an active flood warning system. PMID:26287201
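The water-level determination step can be caricatured with a toy image-processing routine: on a synthetic grayscale frame, the water line is taken as the row with the sharpest vertical brightness change. This is only a simplistic stand-in for the paper's actual pipeline:

```python
import numpy as np

def waterline_row(gray):
    """Locate the water line in a grayscale frame as the row index with the
    largest mean row-to-row brightness change (a simplistic stand-in for the
    study's image-processing techniques)."""
    profile = gray.mean(axis=1)                       # mean brightness per row
    return int(np.argmax(np.abs(np.diff(profile)))) + 1

# Synthetic frame: bright bank (200) above darker water (60) from row 30 down
frame = np.full((60, 80), 200.0)
frame[30:] = 60.0
print(waterline_row(frame))  # → 30
```

Mapping the detected row to a real-world elevation then requires the camera calibration against reference marks, as the abstract's "real-world coordinates" implies.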
Constrained optimization for position calibration of an NMR field camera.
Chang, Paul; Nassirpour, Sahar; Eschelbach, Martin; Scheffler, Klaus; Henning, Anke
2018-07-01
Knowledge of the positions of field probes in an NMR field camera is necessary for monitoring the B0 field. The typical method of estimating these positions is by switching the gradients with known strengths and calculating the positions using the phases of the FIDs. We investigated improving the accuracy of estimating the probe positions and analyzed the effect of inaccurate estimations on field monitoring. The field probe positions were estimated by 1) assuming ideal gradient fields, 2) using measured gradient fields (including nonlinearities), and 3) using measured gradient fields with relative position constraints. The fields measured with the NMR field camera were compared to fields acquired using a dual-echo gradient recalled echo B0 mapping sequence. Comparisons were done for shim fields from second- to fourth-order shim terms. The position estimation was most accurate when relative position constraints were used in conjunction with measured (nonlinear) gradient fields. The effect of more accurate position estimates was seen when compared to fields measured using a B0 mapping sequence (up to 10%-15% more accurate for some shim fields). The models acquired from the field camera are sensitive to noise due to the low number of spatial sample points. Position estimation of field probes in an NMR camera can be improved using relative position constraints and nonlinear gradient fields. Magn Reson Med 80:380-390, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
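The first of the three estimation approaches (ideal, linear gradients) reduces to a simple phase-to-position conversion, phi = gamma * G * x * tau. A minimal sketch with synthetic probe phases (illustrative values, not the study's data):

```python
import numpy as np

GAMMA_H = 267.522e6  # 1H gyromagnetic ratio, rad s^-1 T^-1

def probe_position(phases, grad_t_per_m, tau_s):
    """Estimate probe coordinates along one axis from the extra FID phase
    accrued while a gradient of known amplitude is switched on for tau
    seconds, assuming an ideal (perfectly linear) gradient field:
    phi = gamma * G * x * tau."""
    return np.asarray(phases) / (GAMMA_H * grad_t_per_m * tau_s)

# Synthetic check: probes at x = -0.05, 0.00, 0.12 m, G = 1 mT/m, tau = 1 ms
x_true = np.array([-0.05, 0.0, 0.12])
phi = GAMMA_H * 1e-3 * x_true * 1e-3
print(probe_position(phi, 1e-3, 1e-3))  # recovers x_true
```

The paper's point is that real gradients are not perfectly linear, so replacing the denominator with a measured field map, plus constraining the probes' known relative geometry, yields better positions.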
QWIP technology for both military and civilian applications
NASA Astrophysics Data System (ADS)
Gunapala, Sarath D.; Kukkonen, Carl A.; Sirangelo, Mark N.; McQuiston, Barbara K.; Chehayeb, Riad; Kaufmann, M.
2001-10-01
Advanced thermal imaging infrared cameras have been a cost-effective and reliable method to obtain the temperature of objects. Quantum Well Infrared Photodetector (QWIP) based thermal imaging systems have advanced the state of the art and are the most sensitive commercially available thermal systems. QWIP Technologies LLC, under exclusive agreement with Caltech, is currently manufacturing the QWIP-Chip™, a 320 × 256 element, bound-to-quasibound QWIP FPA. The camera operates within the long-wave IR band, spectrally peaked at 8.5 μm. The camera is equipped with a 32-bit floating-point digital signal processor combined with multi-tasking software, delivering a digital acquisition resolution of 12 bits at a nominal power consumption of less than 50 W. With a variety of video interface options, remote control capability via an RS-232 connection, and an integrated control driver circuit to support motorized zoom and focus-compatible lenses, this camera design has excellent application in both the military and commercial sectors. In the area of remote sensing, high-performance QWIP systems can be used for high-resolution target recognition as part of a new system of airborne platforms (including UAVs). Such systems also have direct application in law enforcement, surveillance, industrial monitoring and road hazard detection systems. This presentation will cover the current performance of the commercial QWIP cameras, conceptual platform systems and advanced image processing for use in both military remote sensing and civilian applications currently being developed in road hazard monitoring.
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity to focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected in the image plane reference system is translated into coordinates referred to the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' track and to perform face detection and tracking. The novelty and strength of this work reside in the cooperative multi-sensor approach, in the high-resolution long-distance tracking and in the automatic collection of biometric data, such as a person's face clip, for recognition purposes.
A traffic situation analysis system
NASA Astrophysics Data System (ADS)
Sidla, Oliver; Rosner, Marcin
2011-01-01
The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. For example, embedded vision systems built into vehicles can be used as early warning systems, or stationary camera systems can modify the switching frequency of signals at intersections. Today the automated analysis of traffic situations is still in its infancy: the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully understood by a vision system. We present steps towards such a traffic monitoring system, which is designed to detect potentially dangerous traffic situations, especially incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system is field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in an outdoor-capable housing. Two cameras run vehicle detection software including license plate detection and recognition; one camera runs a complex pedestrian detection and tracking module based on the HOG detection principle. As a supplement, all 3 cameras use additional optical flow computation in a low-resolution video stream in order to estimate the motion path and speed of objects. This work describes the foundation for all 3 different object detection modalities (pedestrians, vehicles, license plates), and explains the system setup and its design.
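The low-resolution optical-flow supplement boils down to converting per-frame pixel displacements into a ground speed, given the frame rate and a calibrated pixel scale. A minimal sketch; the function name and calibration value are assumptions for illustration, not part of the system described:

```python
def object_speed_kmh(track_px, fps, metres_per_px):
    """Convert an optical-flow track (one pixel coordinate per frame) into a
    ground speed: mean per-frame displacement * pixel scale * frame rate.
    The metres-per-pixel scale is assumed to come from camera calibration."""
    steps = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
             for (x1, y1), (x2, y2) in zip(track_px, track_px[1:])]
    metres_per_s = (sum(steps) / len(steps)) * metres_per_px * fps
    return metres_per_s * 3.6

# A vehicle moving 10 px/frame at 25 fps with a 0.05 m/px scale: 12.5 m/s
track = [(0, 0), (10, 0), (20, 0), (30, 0)]
print(object_speed_kmh(track, 25, 0.05))  # → 45.0
```

In the real system the pixel scale varies across the image, so the flow vectors would be mapped through the camera calibration before speeds are estimated.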
Adams, Noah S.; Smith, Collin; Plumb, John M.; Hansen, Gabriel S.; Beeman, John W.
2015-07-06
This report describes the initial year of a 2-year study to determine the feasibility of using acoustic cameras to monitor fish movements to help inform decisions about fish passage at Cougar Dam near Springfield, Oregon. Specifically, we used acoustic cameras to measure fish presence, travel speed, and direction adjacent to the water temperature control tower in the forebay of Cougar Dam during the spring (May, June, and July) and fall (September, October, and November) of 2013. Cougar Dam is a high-head flood-control dam, and the water temperature control tower enables depth-specific water withdrawals to facilitate adjustment of water temperatures released downstream of the dam. The acoustic cameras were positioned at the upstream entrance of the tower to monitor free-ranging subyearling and yearling-size juvenile Chinook salmon (Oncorhynchus tshawytscha). Because of the large size discrepancy, we could distinguish juvenile Chinook salmon from their predators, which enabled us to measure predators and prey in areas adjacent to the entrance of the tower. We used linear models to quantify and assess operational and environmental factors—such as time of day, discharge, and water temperature—that may influence juvenile Chinook salmon movements within the beam of the acoustic cameras. Although extensive milling behavior of fish near the structure may have masked directed movement of fish and added unpredictability to fish movement models, the acoustic-camera technology enabled us to ascertain the general behavior of discrete size classes of fish. Fish travel speed, direction of travel, and counts of fish moving toward the water temperature control tower primarily were influenced by the amount of water being discharged through the dam.
Rovero, Francesco; Martin, Emanuel; Rosa, Melissa; Ahumada, Jorge A.; Spitale, Daniel
2014-01-01
Medium-to-large mammals within tropical forests represent a rich and functionally diversified component of this biome; however, they continue to be threatened by hunting and habitat loss. Assessing these communities implies studying species’ richness and composition, and determining a state variable of species abundance in order to infer changes in species distribution and habitat associations. The Tropical Ecology, Assessment and Monitoring (TEAM) network fills a chronic gap in standardized data collection by implementing a systematic monitoring framework of biodiversity, including mammal communities, across several sites. In this study, we used TEAM camera trap data collected in the Udzungwa Mountains of Tanzania, an area of exceptional importance for mammal diversity, to propose an example of a baseline assessment of species’ occupancy. We used 60 camera trap locations and cumulated 1,818 camera days in 2009. Sampling yielded 10,647 images of 26 species of mammals. We estimated that a minimum of 32 species are in fact present, matching available knowledge from other sources. Estimated species richness at camera sites did not vary with a suite of habitat covariates derived from remote sensing, however the detection probability varied with functional guilds, with herbivores being more detectable than other guilds. Species-specific occupancy modelling revealed novel ecological knowledge for the 11 most detected species, highlighting patterns such as ‘montane forest dwellers’, e.g. the endemic Sanje mangabey (Cercocebus sanjei), and ‘lowland forest dwellers’, e.g. suni antelope (Neotragus moschatus). Our results show that the analysis of camera trap data with account for imperfect detection can provide a solid ecological assessment of mammal communities that can be systematically replicated across sites. PMID:25054806
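Occupancy modelling with account for imperfect detection, as used above, can be sketched with the basic single-season model (a MacKenzie-style likelihood), here maximized by a crude grid search over hypothetical detection histories:

```python
import math
from itertools import product

def occupancy_mle(histories):
    """Crude grid-search MLE for the single-season occupancy model:
    psi = probability a site is occupied, p = per-survey detection
    probability given occupancy. `histories` holds one 0/1 detection
    history per camera site; all-zero histories may be occupied-but-missed."""
    def loglik(psi, p):
        ll = 0.0
        for h in histories:
            d, k = sum(h), len(h)
            site_l = psi * p ** d * (1 - p) ** (k - d)
            if d == 0:
                site_l += 1 - psi      # or the site is genuinely unoccupied
            ll += math.log(site_l)
        return ll
    grid = [i / 100 for i in range(1, 100)]
    return max(product(grid, grid), key=lambda t: loglik(*t))

# Hypothetical 3-survey detection histories for six camera-trap sites
sites = [(1, 0, 1), (0, 0, 0), (1, 1, 1), (0, 1, 0), (0, 0, 0), (1, 0, 0)]
psi_hat, p_hat = occupancy_mle(sites)
print(round(psi_hat, 2), round(p_hat, 2))
```

Note that psi_hat exceeds the naive fraction of sites with detections (4/6), which is exactly the correction for imperfect detection the abstract refers to. Real analyses add site covariates and use proper optimizers.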
Volcano monitoring with an infrared camera: first insights from Villarrica Volcano
NASA Astrophysics Data System (ADS)
Rosas Sotomayor, Florencia; Amigo Ramos, Alvaro; Velasquez Vargas, Gabriela; Medina, Roxana; Thomas, Helen; Prata, Fred; Geoffroy, Carolina
2015-04-01
This contribution focuses on the first trials of almost 24/7 monitoring of Villarrica volcano with an infrared camera. Results must be compared with other SO2 remote sensing instruments, such as DOAS and UV-camera, for the daytime measurements. Infrared remote sensing of volcanic emissions is a fast and safe method to obtain gas abundances in volcanic plumes, in particular when access to the vent is difficult, during volcanic crises and at night. In recent years, a ground-based infrared camera (Nicair) has been developed by Nicarnica Aviation, which quantifies SO2 and ash in volcanic plumes based on the infrared radiance at specific wavelengths through the application of filters. Three Nicair1 (first model) cameras have been acquired by the Geological Survey of Chile in order to study degassing of active volcanoes. Several trials with the instruments have been performed on northern Chilean volcanoes and have proven that the intervals of retrieved SO2 concentrations and fluxes are as expected. Measurements were also performed at Villarrica volcano, and a location to install a "fixed" camera was identified 8 km from the crater: a coffee house with electrical power, a wifi network, polite and committed owners, and a full view of the volcano summit. The first measurements are being made and processed in order to obtain full days and weeks of SO2 emissions, analyze data transfer and storage, improve remote control of the instrument and notebook in case of breakdown, and add web-cam/GoPro support. The goal of the project is to implement a fixed station to monitor and study Villarrica volcano with a Nicair1, integrating and comparing these results with other remote sensing instruments.
This work also seeks to strengthen bonds with the community by developing teaching material and giving talks that communicate volcanic hazards and other geoscience topics to the people who live "just around the corner" from one of the most active volcanoes in Chile.
NASA Astrophysics Data System (ADS)
Kimm, H.; Guan, K.; Luo, Y.; Peng, J.; Mascaro, J.; Peng, B.
2017-12-01
Monitoring crop growth conditions is of primary interest for crop yield forecasting, food production assessment, and risk management for individual farmers and agribusiness. Despite its importance, there is limited access to field-level crop growth/condition information in the public domain. This scarcity of ground truth data also hampers the use of satellite remote sensing for crop monitoring due to the lack of validation. Here, we introduce a new camera network (CropInsight) to monitor crop phenology, growth, and conditions, designed for the US Corn Belt landscape. Specifically, this network currently includes 40 sites (20 corn and 20 soybean fields) across the southern half of Champaign County, IL (about 800 km2). Its wide distribution and automatic operation enable the network to capture spatiotemporal variations of crop growth condition continuously at the regional scale. At each site, low-maintenance, high-resolution RGB digital cameras are set up with a downward view from 4.5 m height to take continuous images. In this study, we will use these images and novel satellite data to construct a daily LAI map of Champaign County at 30 m spatial resolution. First, we will estimate LAI from the camera images and evaluate it using LAI data collected with an LAI-2200 (LI-COR, Lincoln, NE). Second, we will develop relationships between the camera-based LAI estimates and vegetation indices derived from a newly developed MODIS-Landsat fusion product (daily, 30 m resolution, RGB + NIR + SWIR bands) and Planet Labs' high-resolution satellite data (daily, 5 m, RGB). Finally, we will scale up the above relationships to generate a high spatiotemporal resolution crop LAI map for the whole of Champaign County. The proposed work has the potential to expand to other agro-ecosystems and to the broader US Corn Belt.
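One common way to turn downward-looking canopy images into LAI, which the camera-based estimation step might resemble, is inverting the Beer-Lambert extinction law from the measured canopy gap fraction; the extinction coefficient below is an assumed textbook value, not the project's calibration:

```python
import math

def lai_from_gap_fraction(gap_fraction, k=0.5):
    """Invert the Beer-Lambert extinction law, LAI = -ln(P0) / k, where P0
    is the canopy gap fraction seen in a downward-looking image and k is an
    assumed extinction coefficient (0.5 ~ a spherical leaf-angle canopy)."""
    return -math.log(gap_fraction) / k

# A canopy letting 20% of the soil show through maps to LAI ≈ 3.22
print(round(lai_from_gap_fraction(0.2), 2))  # → 3.22
```

The gap fraction itself would come from classifying image pixels as vegetation versus soil, after which estimates like this can be regressed against LAI-2200 readings for validation.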
Airborne Optical and Thermal Remote Sensing for Wildfire Detection and Monitoring.
Allison, Robert S; Johnston, Joshua M; Craig, Gregory; Jennings, Sion
2016-08-18
For decades detection and monitoring of forest and other wildland fires has relied heavily on aircraft (and satellites). Technical advances and improved affordability of both sensors and sensor platforms promise to revolutionize the way aircraft detect, monitor and help suppress wildfires. Sensor systems like hyperspectral cameras, image intensifiers and thermal cameras that have previously been limited in use due to cost or technology considerations are now becoming widely available and affordable. Similarly, new airborne sensor platforms, particularly small, unmanned aircraft or drones, are enabling new applications for airborne fire sensing. In this review we outline the state of the art in direct, semi-automated and automated fire detection from both manned and unmanned aerial platforms. We discuss the operational constraints and opportunities provided by these sensor systems including a discussion of the objective evaluation of these systems in a realistic context.
Development of a CCTV system for welder training and monitoring of Space Shuttle Main Engine welds
NASA Technical Reports Server (NTRS)
Gordon, S. S.; Flanigan, L. A.; Dyer, G. E.
1987-01-01
A Weld Operator's Remote Monitoring System (WORMS) for remote viewing of manual and automatic GTA welds has been developed for use in Space Shuttle Main Engine (SSME) manufacturing. This system utilizes fiberoptics to transmit images from a receiving lens to a small closed-circuit television (CCTV) camera. The camera converts the image to an electronic signal, which is sent to a videotape recorder (VTR) and a monitor. The overall intent of this system is to provide a clearer, more detailed view of welds than is available by direct observation. This system has six primary areas of application: (1) welder training; (2) viewing of joint penetration; (3) viewing visually inaccessible welds; (4) quality control and quality assurance; (5) remote joint tracking and adjustment of variables in machine welds; and (6) welding research and development. This paper describes WORMS and how it applies to each application listed.
HIGH-ENERGY X-RAY PINHOLE CAMERA FOR HIGH-RESOLUTION ELECTRON BEAM SIZE MEASUREMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, B.; Morgan, J.; Lee, S.H.
The Advanced Photon Source (APS) is developing a multi-bend achromat (MBA) lattice based storage ring as the next major upgrade, featuring a 20-fold reduction in emittance. Combined with the reduction of beta functions, the electron beam sizes at bend magnet sources may be reduced to 5-10 µm for 10% vertical coupling. The x-ray pinhole camera currently used for beam size monitoring will not be adequate for the new task. By increasing the operating photon energy to 120-200 keV, the pinhole camera's resolution is expected to reach below 4 µm. The peak height of the pinhole image will be used to monitor relative changes of the beam sizes and enable feedback control of the emittance. We present the simulation and the design of a beam size monitor for the APS storage ring.
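The energy dependence of the pinhole camera's resolution can be illustrated with a toy model: the point-spread width as the quadrature sum of the geometric aperture blur and a diffraction term that shrinks as the photon wavelength shortens. All geometry and aperture values below are placeholders, not APS parameters:

```python
import math

HC_EV_NM = 1239.84  # Planck constant times c, in eV nm

def pinhole_resolution_um(photon_kev, aperture_um, d_source_m, d_image_m):
    """Toy resolution model for an x-ray pinhole camera, referred back to
    the source plane: quadrature sum of the geometric aperture blur and a
    far-field diffraction term ~ lambda * d1 / a. Illustrative only."""
    lam_m = HC_EV_NM / (photon_kev * 1e3) * 1e-9      # photon wavelength, m
    a = aperture_um * 1e-6
    geometric = a * (d_source_m + d_image_m) / d_image_m
    diffraction = lam_m * d_source_m / a
    return math.hypot(geometric, diffraction) * 1e6

# Same geometry, higher photon energy: the diffraction blur drops sharply
low = pinhole_resolution_um(20, 5, 10, 10)    # softer x-rays
high = pinhole_resolution_um(150, 5, 10, 10)  # upgrade energy regime
print(low > high)  # → True
```

This is the qualitative argument behind moving to 120-200 keV: at fixed aperture the diffraction contribution scales with wavelength, so harder photons sharpen the image until the geometric term dominates.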
Williams, Gary E.; Wood, P.B.
2002-01-01
We used miniature infrared video cameras to monitor Wood Thrush (Hylocichla mustelina) nests during 1998–2000. We documented nest predators and examined whether evidence at nests can be used to predict predator identities and nest fates. Fifty-six nests were monitored; 26 failed, with 3 abandoned and 23 depredated. We predicted predator class (avian, mammalian, snake) prior to review of video footage and were incorrect 57% of the time. Birds and mammals were underrepresented whereas snakes were over-represented in our predictions. We documented ≥9 nest-predator species, with the southern flying squirrel (Glaucomys volans) taking the most nests (n = 8). During 2000, we predicted fate (fledge or fail) of 27 nests; 23 were classified correctly. Traditional methods of monitoring nests appear to be effective for classifying success or failure of nests, but ineffective at classifying nest predators.
Airborne Optical and Thermal Remote Sensing for Wildfire Detection and Monitoring
Allison, Robert S.; Johnston, Joshua M.; Craig, Gregory; Jennings, Sion
2016-01-01
For decades, detection and monitoring of forest and other wildland fires have relied heavily on aircraft (and satellites). Technical advances and improved affordability of both sensors and sensor platforms promise to revolutionize the way aircraft detect, monitor and help suppress wildfires. Sensor systems like hyperspectral cameras, image intensifiers and thermal cameras that have previously been limited in use due to cost or technology considerations are now becoming widely available and affordable. Similarly, new airborne sensor platforms, particularly small, unmanned aircraft or drones, are enabling new applications for airborne fire sensing. In this review we outline the state of the art in direct, semi-automated and automated fire detection from both manned and unmanned aerial platforms. We discuss the operational constraints and opportunities provided by these sensor systems, including a discussion of the objective evaluation of these systems in a realistic context. PMID:27548174
Shah, Rachit D; Cao, Alex; Golenberg, Lavie; Ellis, R Darin; Auner, Gregory W; Pandya, Abhilash K; Klein, Michael D
2009-04-01
Technical advances in the application of laparoscopic and robotic surgical systems have improved platform usability. The authors hypothesized that using two monitors instead of one would lead to faster performance with fewer errors. All tasks were performed using a surgical robot in a training box. One of the monitors was a standard camera with two preset zoom levels (zoomed in and zoomed out, single-monitor condition). The second monitor provided a static panoramic view of the whole surgical field. The standard camera was static at the zoomed-in level for the dual-monitor condition of the study. The study had two groups of participants: 4 surgeons proficient in both robotic and advanced laparoscopic skills and 10 lay persons (nonsurgeons) who were given adequate time to train and familiarize themselves with the equipment. Running a 50-cm rope was the basic task. Advanced tasks included running a suture through predetermined points and intracorporeal knot tying with 3-0 silk. Trial completion times and errors, categorized into three groups (orientation, precision, and task), were recorded. The trial completion times for all the tasks, basic and advanced, in the two groups were not significantly different. Fewer orientation errors occurred in the nonsurgeon group during knot tying (p=0.03) and in both groups during suturing (p=0.0002) in the dual-monitor arm of the study. Differences in precision and task error were not significant. Using two camera views helps both surgeons and lay persons perform complex tasks with fewer errors. These results may be due to better awareness of the surgical field with regard to the location of the instruments, leading to better field orientation. This display setup has potential for use in complex minimally invasive surgeries such as esophagectomy and gastric bypass. This technique also would be applicable to open microsurgery.
Solid state television camera (CCD-buried channel)
NASA Technical Reports Server (NTRS)
1976-01-01
The development of an all solid state television camera, which uses a buried channel charge coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array is utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control (i.e., ALC and AGC) techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.
Solid state television camera (CCD-buried channel), revision 1
NASA Technical Reports Server (NTRS)
1977-01-01
An all solid state television camera was designed which uses a buried channel charge coupled device (CCD) as the image sensor. A 380 x 488 element CCD array is utilized to ensure compatibility with 525-line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (1) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (2) techniques for the elimination or suppression of CCD blemish effects, and (3) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.
Solid state, CCD-buried channel, television camera study and design
NASA Technical Reports Server (NTRS)
Hoagland, K. A.; Balopole, H.
1976-01-01
An investigation of an all solid state television camera design, which uses a buried channel charge-coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array was utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a design which addresses the program requirements for a deliverable solid state TV camera.
Real-time PM10 concentration monitoring on Penang Bridge by using traffic monitoring CCTV
NASA Astrophysics Data System (ADS)
Low, K. L.; Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Wong, C. J.
2007-04-01
For this study, an algorithm was developed to determine the concentration of particulate matter smaller than 10 µm (PM10) from still images captured by a CCTV camera on the Penang Bridge. The objective of this study was to remotely monitor PM10 concentrations on the Penang Bridge through the internet. The algorithm was developed based on the relationship between atmospheric reflectance and the corresponding air quality. The still images were separated into three bands, namely red, green, and blue, and their digital number values were determined. A special transformation was then applied to the data. Ground PM10 measurements were taken using a DustTrak meter. The algorithm was calibrated using regression analysis. The proposed algorithm produced a high correlation coefficient (R) and a low root-mean-square error (RMSE) between the measured and predicted PM10. A program was then written in Microsoft Visual Basic 6.0 to download still images from the camera over the internet and apply the newly developed algorithm. The program runs in real time, so the public can check the air pollution index at any moment. This indicates that the technique using CCTV camera images can provide a useful tool for air quality studies.
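The calibration step, regressing ground-truth PM10 against the per-band digital numbers of the CCTV frames, can be sketched as an ordinary least-squares fit. The numbers below are invented for illustration, and the paper's actual "special transformation" of the bands is not reproduced here.

```python
import numpy as np

# Hypothetical training set: mean digital numbers (R, G, B) of each still
# image, paired with co-located DustTrak PM10 readings (ug/m^3).
dn = np.array([
    [120, 110, 100],
    [140, 125, 118],
    [160, 150, 140],
    [100,  95,  90],
    [180, 170, 155],
    [130, 140, 120],
], dtype=float)
pm10 = np.array([83.0, 93.9, 107.0, 70.5, 120.0, 93.0])

# Linear model PM10 = a*R + b*G + c*B + d, fitted by least squares.
X = np.column_stack([dn, np.ones(len(dn))])
coef, *_ = np.linalg.lstsq(X, pm10, rcond=None)

pred = X @ coef
r = np.corrcoef(pred, pm10)[0, 1]            # correlation coefficient
rmse = np.sqrt(np.mean((pred - pm10) ** 2))  # root-mean-square error
```

A real calibration would hold out a validation set rather than evaluate R and RMSE on the training images themselves.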
Evaluation of Eye Metrics as a Detector of Fatigue
2010-03-01
…eyeglass frames. The cameras are angled upward toward the eyes and extract real-time pupil diameter, eyelid movement, and eyeball movement. … Because the cameras were mounted on eyeglass-like frames, the system was able to continuously monitor the eye throughout all sessions. Overall, the … of “fitness for duty” testing and “real-time monitoring” of operator performance has been slow (Institute of Medicine, 2004). Oculometric-based …
Heumann, Frederick K.; Wilkinson, Jay C.; Wooding, David R.
1997-01-01
A remote appliance for supporting a tool for performing work at a worksite on a substantially circular bore of a workpiece and for providing video signals of the worksite to a remote monitor comprising: a baseplate having an inner face and an outer face; a plurality of rollers, wherein each roller is rotatably and adjustably attached to the inner face of the baseplate and positioned to roll against the bore of the workpiece when the baseplate is positioned against the mouth of the bore such that the appliance may be rotated about the bore in a plane substantially parallel to the baseplate; a tool holding means for supporting the tool, the tool holding means being adjustably attached to the outer face of the baseplate such that the working end of the tool is positioned on the inner face side of the baseplate; a camera for providing video signals of the worksite to the remote monitor; and a camera holding means for supporting the camera on the inner face side of the baseplate, the camera holding means being adjustably attached to the outer face of the baseplate. In a preferred embodiment, roller guards are provided to protect the rollers from debris and a bore guard is provided to protect the bore from wear by the rollers and damage from debris.
Astronauts Garriott and Merbold monitoring experiments in Spacelab
1983-11-28
STS009-123-340 (28 Nov 1983) --- Astronaut Owen K. Garriott, STS-9 mission specialist, left, and Ulf Merbold, payload specialist, take a break from monitoring experimentation aboard Spacelab to be photographed. Dr. Garriott holds in his left hand a data/log book for the solar spectrum experiment. Dr. Merbold holds a map in his left hand for the monitoring of ground objectives of the metric camera.
Sharp-Tailed Grouse Nest Survival and Nest Predator Habitat Use in North Dakota's Bakken Oil Field.
Burr, Paul C; Robinson, Aaron C; Larsen, Randy T; Newman, Robert A; Ellis-Felege, Susan N
2017-01-01
Recent advancements in extraction technologies have resulted in rapid increases of gas and oil development across the United States and specifically in western North Dakota. This expansion of energy development has unknown influences on local wildlife populations and the ecological interactions within and among species. Our objectives for this study were to evaluate nest success and nest predator dynamics of sharp-tailed grouse (Tympanuchus phasianellus) in two study sites that represented areas of high and low energy development intensities in North Dakota. During the summers of 2012 and 2013, we monitored 163 grouse nests using radio telemetry. Of these, 90 nests were also monitored using miniature cameras to accurately determine nest fates and identify nest predators. We simultaneously conducted predator surveys using camera scent stations and occupancy modeling to estimate nest predator occurrence at each site. American badgers (Taxidea taxus) and striped skunks (Mephitis mephitis) were the primary nest predators, accounting for 56.7% of all video-recorded nest depredations. Nests in our high-intensity gas and oil area were 1.95 times more likely to succeed compared to our minimal-intensity area. Camera-monitored nests were 2.03 times more likely to succeed than non-camera-monitored nests. Occupancy of mammalian nest predators was 6.9 times more likely in our study area of minimal gas and oil intensity compared to the high-intensity area. Although our study is only correlative, the results suggest energy development may alter the predator community, thereby increasing nest success for sharp-tailed grouse in areas of intense development, while adjacent areas may have increased predator occurrence and reduced nest success. Our study illustrates the potential influences of energy development on the nest predator-prey dynamics of sharp-tailed grouse in western North Dakota and the complexity of evaluating such impacts on wildlife.
NASA Astrophysics Data System (ADS)
Sampat, Nitin; Grim, John F.; O'Hara, James E.
1998-04-01
The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor, not the printer, as the target output device. When users print images from a camera, they need to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and an ink-jet printer combination. Using Adobe PhotoShop, we generated optimum red, green, and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors, yielding a visually more pleasing image than one captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
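Applying per-channel transfer curves of the kind described above amounts to an 8-bit lookup-table operation. The gamma-style red-channel lift below is a made-up stand-in for the curves the authors actually derived in PhotoShop.

```python
import numpy as np

def apply_transfer_curves(img, curves):
    """Map each channel of an HxWx3 uint8 image through its own
    256-entry lookup table (one transfer curve per channel)."""
    out = np.empty_like(img)
    for c, name in enumerate(("r", "g", "b")):
        out[..., c] = curves[name][img[..., c]]
    return out

x = np.arange(256)
identity = x.astype(np.uint8)
# Illustrative shadow-lifting curve (gamma 0.8): raises dark values,
# leaves 0 and 255 fixed -- not the actual AGFA-derived curve.
lift = (255.0 * (x / 255.0) ** 0.8).astype(np.uint8)

curves = {"r": lift, "g": identity, "b": identity}
img = np.random.default_rng(0).integers(0, 256, (4, 4, 3), dtype=np.uint8)
corrected = apply_transfer_curves(img, curves)
```

Because the mapping is a pure lookup, the whole correction costs one indexing pass per channel regardless of how the curves were designed.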
Baum, S.; Sillem, M.; Ney, J. T.; Baum, A.; Friedrich, M.; Radosa, J.; Kramer, K. M.; Gronwald, B.; Gottschling, S.; Solomayer, E. F.; Rody, A.; Joukhadar, R.
2017-01-01
Introduction Minimally invasive operative techniques are being used increasingly in gynaecological surgery. The expansion of the laparoscopic operation spectrum is in part the result of improved imaging. This study investigates the practical advantages of using 3D cameras in routine surgical practice. Materials and Methods Two different 3-dimensional camera systems were compared with a 2-dimensional HD system; the operating surgeon's experiences were documented immediately postoperatively using a questionnaire. Results Significant advantages were reported for suturing and cutting of anatomical structures when using the 3D compared to 2D camera systems. There was only a slight advantage for coagulating. The use of 3D cameras significantly improved the general operative visibility and in particular the representation of spatial depth compared to 2-dimensional images. There was no significant advantage for image width. Depiction of adhesions and retroperitoneal neural structures was significantly improved by the stereoscopic cameras, though this did not apply to blood vessels, ureter, uterus or ovaries. Conclusion 3-dimensional cameras were particularly advantageous for the depiction of fine anatomical structures due to improved spatial depth representation compared to 2D systems. 3D cameras provide the operating surgeon with a monitor image that more closely resembles actual anatomy, thus simplifying laparoscopic procedures. PMID:28190888
Handheld hyperspectral imager system for chemical/biological and environmental applications
NASA Astrophysics Data System (ADS)
Hinnrichs, Michele; Piatek, Bob
2004-08-01
A small, handheld, battery-operated imaging infrared spectrometer, Sherlock, has been developed by Pacific Advanced Technology and was field tested in early 2003. The Sherlock spectral imaging camera was designed for remote gas leak detection; however, the architecture of the camera is versatile enough that it can be applied to numerous other applications such as homeland security, chemical/biological agent detection, and medical and pharmaceutical applications, as well as standard research and development. This paper describes the Sherlock camera and its theory of operation, shows current applications, and touches on potential future applications. The Sherlock has an embedded PowerPC and performs real-time image-processing functions in an embedded FPGA. The camera has a built-in LCD display as well as output to a standard monitor or NTSC display. It has several I/O ports (Ethernet, FireWire, RS232) and thus can be easily controlled from a remote location. In addition, software upgrades can be performed over Ethernet, eliminating the need to send the camera back to the factory for a retrofit. Using the USB port, a mouse and keyboard can be connected and the camera can be used in a laboratory environment as a stand-alone imaging spectrometer.
Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency, but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
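Separating moving targets from static background imagery, the first step the report describes, is commonly done with a running-average background model and per-pixel differencing. This is a generic sketch of that idea, not Sandia's actual algorithm; the learning rate and threshold are arbitrary.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average of past frames (float arrays)."""
    return (1.0 - alpha) * bg + alpha * frame

def detect_motion(bg, frame, thresh=25.0):
    """Boolean mask of pixels deviating from the background model."""
    return np.abs(frame.astype(float) - bg) > thresh

# A static mid-gray scene, then a frame with a bright 3x3 intruder.
bg = np.full((10, 10), 50.0)
frame = bg.copy()
frame[2:5, 2:5] = 200.0
mask = detect_motion(bg, frame)    # True only on the 3x3 patch
bg = update_background(bg, frame)  # background slowly absorbs the change
```

The slow update is what keeps lighting drift out of the alarm stream while genuinely new motion still triggers the mask.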
Low-cost printing of computerised tomography (CT) images where there is no dedicated CT camera.
Tabari, Abdulkadir M
2007-01-01
Many developing countries still rely on conventional hard copy images to transfer information among physicians. We have developed a low-cost alternative method of printing computerised tomography (CT) scan images where there is no dedicated camera. A digital camera is used to photograph images from the CT scan screen monitor. The images are then transferred to a PC via a USB port, before being printed on glossy paper using an inkjet printer. The method can be applied to other imaging modalities like ultrasound and MRI and appears worthy of emulation elsewhere in the developing world where resources and technical expertise are scarce.
Sarnadskiĭ, V N
2007-01-01
The problem of repeatability of the results of examination of a plastic human body model is considered. The model was examined in 7 positions using an optical topograph for kyphosis diagnosis. The examination was performed under television camera monitoring. It was shown that variation of the model position in the camera view affected the repeatability of the results of topographic examination, especially if the model-to-camera distance was changed. A study of the repeatability of the results of optical topographic examination can help to increase the reliability of the topographic method, which is widely used for medical screening of children and adolescents.
Image system for three dimensional, 360°, time sequence surface mapping of moving objects
Lu, Shin-Yee
1998-01-01
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector, and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.
Image system for three dimensional, 360°, time sequence surface mapping of moving objects
Lu, S.Y.
1998-12-22
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another. 20 figs.
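The geometric core of the structured-light reconstruction described in these two records reduces, once the system is calibrated, to intersecting the ray through a camera pixel with the known plane of light that produced a given projected stripe. The geometry below is a toy example, not the patented calibration procedure.

```python
import numpy as np

def intersect_ray_plane(ray_origin, ray_dir, plane_point, plane_normal):
    """3D point where a camera pixel ray meets a projected light plane.
    Each vertical Ronchi stripe defines one such plane in space."""
    ray_origin = np.asarray(ray_origin, float)
    ray_dir = np.asarray(ray_dir, float)
    plane_point = np.asarray(plane_point, float)
    plane_normal = np.asarray(plane_normal, float)
    t = np.dot(plane_normal, plane_point - ray_origin) / np.dot(plane_normal, ray_dir)
    return ray_origin + t * ray_dir

# Toy calibration: stripe plane x = 2 (normal along x), camera at the
# origin, pixel ray pointing into the scene along direction (1, 0, 1).
surface_point = intersect_ray_plane([0, 0, 0], [1, 0, 1], [2, 0, 0], [1, 0, 0])
```

Repeating this for every stripe/epipolar-line intersection in every frame yields the time-sequence surface map the patent describes.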
Coordinated Parallel Runway Approaches
NASA Technical Reports Server (NTRS)
Koczo, Steve
1996-01-01
The current air traffic environment in airport terminal areas experiences substantial delays when weather conditions deteriorate to Instrument Meteorological Conditions (IMC). Expected future increases in air traffic will put additional pressures on the National Airspace System (NAS) and will further compound the high costs associated with airport delays. To address this problem, NASA has embarked on a program to address Terminal Area Productivity (TAP). The goals of the TAP program are to provide increased efficiencies in air traffic during the approach, landing, and surface operations in low-visibility conditions. The ultimate goal is to achieve efficiencies of terminal area flight operations commensurate with Visual Meteorological Conditions (VMC) at current or improved levels of safety.
Full-parallax 3D display from stereo-hybrid 3D camera system
NASA Astrophysics Data System (ADS)
Hong, Seokmin; Ansari, Amir; Saavedra, Genaro; Martinez-Corral, Manuel
2018-04-01
In this paper, we propose an innovative approach for the production of microimages ready for display on an integral-imaging monitor. Our main contribution is the use of a stereo-hybrid 3D camera system to pick up a 3D data pair and compose a denser point cloud. An intrinsic difficulty is that hybrid sensors have dissimilarities and therefore must be equalized. The processed data facilitate the generation of an integral image by projecting the information computationally through a virtual pinhole array. We illustrate this procedure with imaging experiments that provide microimages with enhanced quality. After projection of such microimages onto the integral-imaging monitor, 3D images are produced with large parallax and viewing angle.
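Projecting a point cloud through a virtual pinhole array to synthesize microimages can be sketched as below. The array pitch, pinhole-to-sensor gap, and resolutions are invented values, not the parameters of the authors' monitor.

```python
import numpy as np

def render_microimages(points, n=5, pitch=1.0, gap=2.0, res=15):
    """Project 3D points (x, y, z > 0) through an n x n virtual pinhole
    array (spacing `pitch`, image plane `gap` behind the pinholes) into
    an (n, n, res, res) stack of binary microimages."""
    stack = np.zeros((n, n, res, res))
    centers = (np.arange(n) - (n - 1) / 2) * pitch
    for x, y, z in points:
        for i, cy in enumerate(centers):
            for j, cx in enumerate(centers):
                t = (z + gap) / z          # extend the ray past the pinhole
                u = x + t * (cx - x) - cx  # local x on this microimage
                v = y + t * (cy - y) - cy  # local y
                if abs(u) < pitch / 2 and abs(v) < pitch / 2:
                    px = int((u / pitch + 0.5) * res)
                    py = int((v / pitch + 0.5) * res)
                    stack[i, j, min(py, res - 1), min(px, res - 1)] = 1.0
    return stack

# A single point far in front of the array lands near the centre of every
# microimage, shifted slightly from lens to lens -- the recorded parallax.
stack = render_microimages([(0.0, 0.0, 100.0)])
```

A production pipeline would rasterize colored, depth-sorted points and match the pinhole pitch to the monitor's microlens array; this sketch only shows the projection geometry.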
A control system of a mini survey facility for photometric monitoring
NASA Astrophysics Data System (ADS)
Tsutsui, Hironori; Yanagisawa, Kenshi; Izumiura, Hideyuki; Shimizu, Yasuhiro; Hanaue, Takumi; Ita, Yoshifusa; Ichikawa, Takashi; Komiyama, Takahiro
2016-08-01
We have built a control system for a mini survey facility dedicated to photometric monitoring of nearby bright (K<5) stars in the near-infrared region. The facility comprises a 4-m-diameter rotating dome and a small (30-mm aperture) wide-field (5 × 5 sq. deg. field of view) infrared (1.0-2.5 microns) camera on an equatorial fork mount, as well as power sources and other associated equipment. All the components other than the camera are controlled by microcomputer-based I/O boards that were developed in-house and are used in many of the open-use instruments at our observatory. We present the specifications and configuration of the facility hardware, as well as the structure of its control software.
Possibilities in optical monitoring of laser welding process
NASA Astrophysics Data System (ADS)
Horník, Petr; Mrňa, Libor; Pavelka, Jan
2016-11-01
Laser welding is a modern, widely used but still relatively uncommon welding method. With increasing demands on weld quality, it is usual to apply automated machine welding with on-line monitoring of the welding process. The resulting quality of the weld is largely affected by the behavior of the keyhole. However, its direct observation during the welding process is practically impossible, and it is necessary to use indirect methods. At ISI we have developed optical methods of monitoring the process. The most advanced is an analysis of the radiation of the laser-induced plasma plume forming in the keyhole, where changes in the frequency of the plasma bursts are monitored and evaluated using Fourier and autocorrelation analysis. Another solution, robust and suitable for industry, is based on observation of the keyhole inlet opening through a coaxial camera mounted in the welding head and subsequent image processing by computer vision methods. A high-speed camera is used to understand the dynamics of the plasma plume. Through optical spectroscopy of the plume, we can study the excitation of elements in a material. It is also beneficial to monitor the flow of shielding gas using the schlieren method.
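The frequency-domain monitoring of plasma bursts can be sketched with a synthetic photodiode trace: the dominant burst frequency is read off the FFT magnitude spectrum (an autocorrelation peak would give the same period). The 800 Hz burst rate, sampling rate, and noise level are invented for illustration, not values from the ISI system.

```python
import numpy as np

fs = 20000.0                     # sampling rate of the photodiode signal (Hz)
t = np.arange(0, 0.1, 1.0 / fs)  # 100 ms record
rng = np.random.default_rng(0)

# Synthetic plume radiation: DC level, 800 Hz burst modulation, noise.
signal = (1.0 + 0.5 * np.sin(2 * np.pi * 800.0 * t)
          + 0.05 * rng.standard_normal(t.size))

# Remove the mean so the DC bin does not dominate, then locate the peak.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
burst_freq = freqs[spectrum.argmax()]  # dominant plasma burst frequency
```

Tracking this peak over successive short windows is one way such a monitor could flag a change in keyhole behavior during the weld.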
MS Musgrave conducts CFES experiment on middeck
1983-04-09
STS006-03-381 (4-9 April 1983) --- Astronaut F. Story Musgrave, STS-6 mission specialist, monitors the activity of a sample in the continuous flow electrophoresis system (CFES) aboard the Earth-orbiting space shuttle Challenger. Dr. Musgrave is in the middeck area of the spacecraft. He has mounted a 35mm camera to record the activity through the window of the experiment. This frame was also photographed with a 35mm camera. Photo credit: NASA
NASA Technical Reports Server (NTRS)
1999-01-01
A survey is presented of NASA-developed technologies and systems that were reaching commercial application in the course of 1999. Attention is given to the contributions of each major NASA Research Center. Representative 'spinoff' technologies include the predictive AI engine monitoring system EMPAS, the GPS-based Wide Area Augmentation System for aircraft navigation, a CMOS-Active Pixel Sensor camera-on-a-chip, a marine spectroradiometer, portable fuel cells, hyperspectral camera technology, and a rapid-prototyping process for ceramic components.
Geometric Calibration and Radiometric Correction of the Maia Multispectral Camera
NASA Astrophysics Data System (ADS)
Nocerino, E.; Dubbini, M.; Menna, F.; Remondino, F.; Gattelli, M.; Covi, D.
2017-10-01
Multispectral imaging is a widely used remote sensing technique, whose applications range from agriculture to environmental monitoring, and from food quality checks to cultural heritage diagnostics. A variety of multispectral imaging sensors are available on the market, many of them designed to be mounted on different platforms, especially small drones. This work focuses on the geometric and radiometric characterization of a brand-new, lightweight, low-cost multispectral camera, called MAIA. The MAIA camera is equipped with nine sensors, allowing for the acquisition of images in the visible and near-infrared parts of the electromagnetic spectrum. Two versions are available, characterised by different sets of band-pass filters, inspired by the sensors mounted on the WorldView-2 and Sentinel-2 satellites, respectively. The camera details and the procedures developed for the geometric calibration and radiometric correction are presented in the paper.
Martian Terrain Near Curiosity Precipice Target
2016-12-06
This view from the Navigation Camera (Navcam) on the mast of NASA's Curiosity Mars rover shows rocky ground within view while the rover was working at an intended drilling site called "Precipice" on lower Mount Sharp. The right-eye camera of the stereo Navcam took this image on Dec. 2, 2016, during the 1,537th Martian day, or sol, of Curiosity's work on Mars. On the previous sol, an attempt to collect a rock-powder sample with the rover's drill ended before drilling began. This led to several days of diagnostic work while the rover remained in place, during which it continued to use cameras and a spectrometer on its mast, plus environmental monitoring instruments. In this view, hardware visible at lower right includes the sundial-theme calibration target for Curiosity's Mast Camera. http://photojournal.jpl.nasa.gov/catalog/PIA21140
Current status of Polish Fireball Network
NASA Astrophysics Data System (ADS)
Wiśniewski, M.; Żołądek, P.; Olech, A.; Tyminski, Z.; Maciejewski, M.; Fietkiewicz, K.; Rudawska, R.; Gozdalski, M.; Gawroński, M. P.; Suchodolski, T.; Myszkiewicz, M.; Stolarz, M.; Polakowski, K.
2017-09-01
The Polish Fireball Network (PFN) is a project to regularly monitor the sky over Poland in order to detect bright fireballs. In 2016 the PFN consisted of 36 continuously active stations with 57 sensitive analogue video cameras and 7 high-resolution digital cameras. In our observations we also use spectroscopic and radio techniques. The PyFN software package for trajectory and orbit determination was developed. The PFN project is an example of successful participation of amateur astronomers who can provide valuable scientific data. The network is coordinated by astronomers from the Copernicus Astronomical Centre in Warsaw, Poland. In 2011-2015 the PFN cameras recorded 214,936 meteor events. Using the PFN data and the UFOOrbit software, 34,609 trajectories and orbits were calculated. In the coming years we plan an intensive modernization of the PFN network, including the installation of dozens of new digital cameras.
Techniques for identifying predators of goose nests
Anthony, R. Michael; Grand, J.B.; Fondell, T.F.; Miller, David A.
2006-01-01
We used cameras and artificial eggs to identify nest predators of dusky Canada goose Branta canadensis occidentalis nests during 1997-2000. Cameras were set up at 195 occupied goose nests and 60 artificial nests. We placed wooden eggs and domestic goose eggs that were emptied and then filled with wax or foam in an additional 263 natural goose nests to identify predators from marks in the artificial eggs. All techniques had limitations, but each correctly identified predators and estimated their relative importance. Nests with cameras had higher rates of abandonment than natural nests, especially during laying. Abandonment rates were reduced by deploying artificial eggs late in laying and reducing time at nests. Predation rates for nests with cameras were slightly lower than for nests without cameras. Wax-filled artificial eggs caused mortality of embryos in natural nests, but were better for identifying predator marks at artificial nests. Use of foam-filled artificial eggs in natural nests was the most cost-effective means of monitoring nest predation. © Wildlife Biology (2006).
Preface: The Chang'e-3 lander and rover mission to the Moon
NASA Astrophysics Data System (ADS)
Ip, Wing-Huen; Yan, Jun; Li, Chun-Lai; Ouyang, Zi-Yuan
2014-12-01
The Chang'e-3 (CE-3) lander and rover mission to the Moon was an intermediate step in China's lunar exploration program, which will be followed by a sample return mission. The lander was equipped with a number of remote-sensing instruments including a pair of cameras (Landing Camera and Terrain Camera) for recording the landing process and surveying terrain, an extreme ultraviolet camera for monitoring activities in the Earth's plasmasphere, and a first-ever Moon-based ultraviolet telescope for astronomical observations. The Yutu rover successfully carried out close-up observations with the Panoramic Camera, mineralogical investigations with the VIS-NIR Imaging Spectrometer, study of elemental abundances with the Active Particle-induced X-ray Spectrometer, and pioneering measurements of the lunar subsurface with Lunar Penetrating Radar. This special issue provides a collection of key information on the instrumental designs, calibration methods and data processing procedures used by these experiments with a perspective of facilitating further analyses of scientific data from CE-3 in preparation for future missions.
UAV-based NDVI calculation over grassland: An alternative approach
NASA Astrophysics Data System (ADS)
Mejia-Aguilar, Abraham; Tomelleri, Enrico; Asam, Sarah; Zebisch, Marc
2016-04-01
The Normalised Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring and assessing vegetation in remote sensing. The index relies on the reflectance difference between near-infrared (NIR) and red light and is thus able to track variations of structural, phenological, and biophysical parameters for seasonal and long-term monitoring. Conventionally, NDVI is inferred from space-borne spectroradiometers such as MODIS, with moderate ground resolutions of 250 m at best. In recent years, a new generation of miniaturized radiometers and integrated hyperspectral sensors with high resolution has become available. Such small and light instruments are particularly well suited to being mounted on airborne unmanned aerial vehicles (UAVs) used for monitoring services, reaching ground sampling resolutions in the order of centimetres. Nevertheless, such miniaturized radiometers and hyperspectral sensors are still very expensive and require high upfront capital costs. Therefore, we propose an alternative, considerably cheaper method to calculate NDVI using a camera constellation consisting of two conventional consumer-grade cameras: (i) a modified Ricoh GR camera that acquires the NIR spectrum, its internal infrared filter having been removed; a mounted optical filter additionally blocks all wavelengths below 700 nm; (ii) a Ricoh GR in RGB configuration using two optical filters to block wavelengths below 600 nm as well as NIR and ultraviolet (UV) light. To assess the merit of the proposed method, we carry out two comparisons: first, reflectance maps generated by the consumer-grade camera constellation are compared to reflectance maps produced with a hyperspectral camera (Rikola). All imaging data and reflectance maps are processed using the PIX4D software.
In the second test, the NDVI at specific points of interest (POI) generated by the consumer-grade camera constellation is compared to NDVI values obtained from ground spectral measurements with a portable spectroradiometer (Spectravista SVC HR-1024i). All data were collected on a dry alpine mountain grassland site in the Matsch valley, Italy, during the vegetation period of 2015. Data acquisition for the first comparison followed a pre-programmed flight plan in which the hyperspectral camera and the alternative dual-camera constellation were mounted separately on an octocopter UAV during two consecutive flight campaigns. Ground spectral measurements were collected at the same site and on the same dates (three in total) as the flight campaigns. The proposed technique achieves promising results and thereby constitutes a cheap and simple way of collecting spatially explicit information on vegetated areas, even in challenging terrain.
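The core computation of the dual-camera approach is the standard NDVI formula, (NIR − red) / (NIR + red), applied pixel-wise to a co-registered image pair. A minimal sketch, using synthetic reflectance arrays rather than flight data:

```python
import numpy as np

# NDVI per pixel from a co-registered pair: the NIR band from the modified
# camera and the red band from the RGB camera. Arrays are synthetic.
def ndvi(nir, red, eps=1e-9):
    nir = nir.astype(float)
    red = red.astype(float)
    # eps guards against division by zero over dark pixels
    return (nir - red) / (nir + red + eps)

nir = np.array([[0.6, 0.5], [0.4, 0.8]])
red = np.array([[0.1, 0.2], [0.3, 0.1]])
v = ndvi(nir, red)   # values near +1 indicate dense green vegetation
```

The practical difficulty the paper addresses lies upstream of this formula: converting the two cameras' raw images into aligned reflectance maps.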
77 FR 58813 - Western Pacific Fisheries; Approval of a Marine Conservation Plan for American Samoa
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-24
.... Determining genetic connectivity of coral reef ecosystems in the Samoa archipelago; 21. Surveying fish... quality of deep reef habitat through use of drop cameras; 32. Coral recruitment survey and monitoring; 33... scientific awareness of junior biologist; and 46. Monitoring of coral reefs in Independent Samoa. Objective 7...
Design and implementation of a remote UAV-based mobile health monitoring system
NASA Astrophysics Data System (ADS)
Li, Songwei; Wan, Yan; Fu, Shengli; Liu, Mushuang; Wu, H. Felix
2017-04-01
Unmanned aerial vehicles (UAVs) play increasing roles in structural health monitoring. With growing mobility in modern Internet-of-Things (IoT) applications, the health monitoring of mobile structures becomes an emerging application. In this paper, we develop a UAV-carried vision-based monitoring system that allows a UAV to continuously track and monitor a mobile infrastructure and transmit the monitoring information back in real time from a remote location. The monitoring system uses a simple UAV-mounted camera and requires only a single feature located on the mobile infrastructure for target detection and tracking. The computation-effective vision-based tracking solution based on a single feature is an improvement over existing vision-based leader-follower tracking systems, which either have poor tracking performance due to the use of a single feature, or achieve improved tracking performance at the cost of using multiple features. In addition, a UAV-carried aerial networking infrastructure using directional antennas is used to enable robust real-time transmission of monitoring video streams over a long distance. Automatic heading control self-aligns the directional antennas to maintain robust communication while the platforms are in motion. Compared to existing omnidirectional communication systems, the directional communication solution significantly increases the operating range of remote monitoring systems. In this paper, we develop the integrated modeling framework of camera and mobile platforms, design the tracking algorithm, develop a testbed of UAVs and mobile platforms, and evaluate system performance through both simulation studies and field tests.
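The flavour of single-feature tracking with heading control can be sketched as follows. This is not the paper's algorithm: the brightness threshold, the centroid detector, and the proportional gain are all illustrative assumptions.

```python
import numpy as np

# Locate a single bright feature in a grayscale frame by intensity
# thresholding and centroid averaging, then compute a proportional
# heading correction that centers the feature horizontally.
def feature_centroid(frame, threshold=200):
    ys, xs = np.nonzero(frame >= threshold)
    if xs.size == 0:
        return None          # feature lost
    return xs.mean(), ys.mean()

def heading_correction(centroid_x, frame_width, k_p=0.01):
    # Positive output -> yaw right; zero when the feature is centered.
    return k_p * (centroid_x - frame_width / 2.0)

frame = np.zeros((240, 320), dtype=np.uint8)
frame[100:110, 200:210] = 255            # synthetic bright feature
cx, cy = feature_centroid(frame)
yaw = heading_correction(cx, frame.shape[1])
```

The same proportional idea could drive the antenna-alignment loop, with the error signal taken from received signal strength rather than pixel position.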
Identifying predators and fates of grassland passerine nests using miniature video cameras
Pietz, Pamela J.; Granfors, Diane A.
2000-01-01
Nest fates, causes of nest failure, and identities of nest predators are difficult to determine for grassland passerines. We developed a miniature video-camera system for use in grasslands and deployed it at 69 nests of 10 passerine species in North Dakota during 1996-97. Abandonment rates were higher at nests with cameras. Cameras recorded for at least 1 day or night (22-116 hr) at 6 nests, 5 of which were depredated by ground squirrels or mice. For nests without cameras, estimated predation rates were lower for ground nests than aboveground nests (P = 0.055), but did not differ between open and covered nests (P = 0.74). Open and covered nests differed, however, when predation risk (estimated by initial-predation rate) was examined separately for day and night using camera-monitored nests; the frequency of initial predations that occurred during the day was higher for open nests than covered nests (P = 0.015). Thus, vulnerability of some nest types may depend on the relative importance of nocturnal and diurnal predators. Predation risk increased with nestling age from 0 to 8 days (P = 0.07). Up to 15% of fates assigned to camera-monitored nests were wrong when based solely on evidence that would have been available from periodic nest visits. There was no evidence of disturbance at nearly half the depredated nests, including all 5 depredated by large mammals. Overlap in types of sign left by different predator species, and variability of sign within species, suggests that evidence at nests is unreliable for identifying predators of grassland passerines.
Polarization sensitive camera for the in vitro diagnostic and monitoring of dental erosion
NASA Astrophysics Data System (ADS)
Bossen, Anke; Rakhmatullina, Ekaterina; Lussi, Adrian; Meier, Christoph
Due to the frequent consumption of acidic food and beverages, the prevalence of dental erosion is increasing worldwide. In the initial erosion stage, the hard dental tissue is softened by acidic demineralization. As erosion progresses, gradual tissue wear occurs, resulting in thinning of the enamel. Complete loss of the enamel tissue can be observed in severe clinical cases. It is therefore essential to provide a diagnostic tool for accurate detection and monitoring of dental erosion already at early stages. In this manuscript, we present the development of a polarization-sensitive imaging camera for the visualization and quantification of dental erosion. The system consists of two CMOS cameras mounted on two sides of a polarizing beamsplitter. A horizontally linearly polarized light source is positioned orthogonal to the camera to ensure illumination incidence and detection angles of 45°. The specular reflection from the enamel surface is collected with an objective lens mounted on the beamsplitter and divided into horizontal (H) and vertical (V) components on the respective cameras. Images of non-eroded and eroded enamel surfaces at different erosion degrees were recorded and assessed with diagnostic software. The software was designed to generate and display two types of images: the distribution of the reflection intensity (V) and the polarization ratio (H-V)/(H+V) over the analyzed tissue area. The measurement and visualization of these two optical parameters, i.e., specular reflection intensity and polarization ratio, allowed detection and quantification of enamel erosion at early stages in vitro.
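The polarization-ratio image defined above, (H−V)/(H+V), is computed pixel-wise from the two CMOS frames behind the beamsplitter. A minimal sketch on synthetic intensity values:

```python
import numpy as np

# Polarization ratio from the horizontal (H) and vertical (V) component
# images captured by the two cameras. Values below are synthetic counts.
def polarization_ratio(h, v, eps=1e-9):
    h = h.astype(float)
    v = v.astype(float)
    # eps avoids division by zero for unilluminated pixels
    return (h - v) / (h + v + eps)

h = np.array([[200.0, 120.0]])
v = np.array([[100.0, 40.0]])
ratio = polarization_ratio(h, v)   # erosion shifts this ratio locally
```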
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Evan; Goodale, Wing; Burns, Steve
There is a critical need to develop monitoring tools to track aerofauna (birds and bats) in three dimensions around wind turbines. New monitoring systems will reduce permitting uncertainty by increasing the understanding of how birds and bats are interacting with wind turbines, which will improve the accuracy of impact predictions. Biodiversity Research Institute (BRI), The University of Maine Orono School of Computing and Information Science (UMaine SCIS), HiDef Aerial Surveying Limited (HiDef), and SunEdison, Inc. (formerly First Wind) responded to this need by using stereo-optic cameras with near-infrared (nIR) technology to investigate new methods for documenting aerofauna behavior around wind turbines. The stereo-optic camera system used two synchronized high-definition video cameras with fisheye lenses and processing software that detected moving objects, which could be identified in post-processing. The stereo-optic imaging system offered the ability to extract 3-D position information from pairs of images captured from different viewpoints. Fisheye lenses allowed for a greater field of view, but required more complex image rectification to contend with fisheye distortion. The ability to obtain 3-D positions provided crucial data on the trajectory (speed and direction) of a target, which, when the technology is fully developed, will provide data on how animals are responding to and interacting with wind turbines. This project was focused on testing the performance of the camera system, improving video review processing time, advancing the 3-D tracking technology, and moving the system from Technology Readiness Level 4 to 5. To achieve these objectives, we determined the size and distance at which aerofauna (particularly eagles) could be detected and identified, created efficient data management systems, improved the video post-processing viewer, and attempted refinement of 3-D modeling with respect to fisheye lenses.
The 29-megapixel camera system successfully captured 16,173 five-minute video segments in the field. During nighttime field trials using nIR, we found that bat-sized objects could not be detected more than 60 m from the camera system. This led to a decision to focus research efforts exclusively on daytime monitoring and to redirect resources towards improving the video post-processing viewer. We redesigned the bird event post-processing viewer, which substantially decreased the review time necessary to detect and identify flying objects. During daytime field trials, we determined that eagles could be detected up to 500 m away using the fisheye wide-angle lenses, and eagle-sized targets could be identified to species within 350 m of the camera system. We used distance sampling survey methods to describe the probability of detecting and identifying eagles and other aerofauna as a function of distance from the system. The previously developed 3-D algorithm for object isolation and tracking was tested, but the image rectification (flattening) required to obtain accurate distance measurements with fisheye lenses was determined to be insufficient for distant eagles. We used MATLAB and OpenCV to improve fisheye lens rectification towards the center of the image, but accurate measurements towards the image corners could not be achieved. We believe that changing the fisheye lens to a rectilinear lens would greatly improve position estimation, but doing so would result in a decrease in viewing angle and depth of field. Finally, we generated simplified shape profiles of birds to look for similarities between unknown animals and known species. With further development, this method could provide a mechanism for filtering large numbers of shapes to reduce data storage and processing. These advancements further refined the camera system and brought this new technology closer to market.
Once commercialized, the stereo-optic camera system technology could be used to: a) research how different species interact with wind turbines in order to refine collision risk models and inform mitigation solutions; and b) monitor aerofauna interactions with terrestrial and offshore wind farms, replacing costly human observers and allowing for long-term monitoring in the offshore environment. The camera system will provide developers and regulators with data on the risk that wind turbines present to aerofauna, which will reduce uncertainty in the environmental permitting process.
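The distance-sampling analysis mentioned above is commonly built around a half-normal detection function, g(d) = exp(−d²/2σ²), giving the probability of detecting a target at distance d. The scale parameter below is an illustrative assumption, not a value fitted to the eagle trials.

```python
import numpy as np

# Half-normal detection function, a standard choice in distance sampling.
# sigma (the effective detection scale, in metres) is hypothetical.
def detection_prob(distance_m, sigma=180.0):
    return np.exp(-distance_m**2 / (2.0 * sigma**2))

distances = np.array([50.0, 350.0, 500.0])
p = detection_prob(distances)   # detection probability declines with range
```

In a real analysis, sigma would be estimated by maximum likelihood from the observed distribution of detection distances.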
Final Technical Report: Development of Post-Installation Monitoring Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polagye, Brian
2014-03-31
The development of approaches to harness marine and hydrokinetic energy at large-scale is predicated on the compatibility of these generation technologies with the marine environment. At present, aspects of this compatibility are uncertain. Demonstration projects provide an opportunity to address these uncertainties in a way that moves the entire industry forward. However, the monitoring capabilities to realize these advances are often under-developed in comparison to the marine and hydrokinetic energy technologies being studied. Public Utility District No. 1 of Snohomish County has proposed to deploy two 6-meter diameter tidal turbines manufactured by OpenHydro in northern Admiralty Inlet, Puget Sound, Washington. The goal of this deployment is to provide information about the environmental, technical, and economic performance of such turbines that can advance the development of larger-scale tidal energy projects, both in the United States and internationally. The objective of this particular project was to develop environmental monitoring plans in collaboration with resource agencies, while simultaneously advancing the capabilities of monitoring technologies to the point that they could be realistically implemented as part of these plans. In this, the District was joined by researchers at the Northwest National Marine Renewable Energy Center at the University of Washington, Sea Mammal Research Unit, LLC, H.T. Harvey & Associates, and Pacific Northwest National Laboratory. Over a two year period, the project team successfully developed four environmental monitoring and mitigation plans that were adopted as a condition of the operating license for the demonstration project that was issued by the Federal Energy Regulatory Commission in March 2014.
These plans address near-turbine interactions with marine animals, the sound produced by the turbines, marine mammal behavioral changes associated with the turbines, and changes to benthic habitat associated with colonization of the subsea base support structure. In support of these plans, the project team developed and field tested a strobe-illuminated stereo-optical camera system suitable for studying near-turbine interactions with marine animals. The camera system underwent short-term field testing at the proposed turbine deployment site and a multi-month endurance test in shallower water to evaluate the effectiveness of biofouling mitigation measures for the optical ports on camera and strobe pressure housings. These tests demonstrated that the camera system is likely to meet the objectives of the near-turbine monitoring plan and operate, without maintenance, for periods of at least three months. The project team also advanced monitoring capabilities related to passive acoustic monitoring of marine mammals and monitoring of tidal currents. These capabilities will be integrated in a recoverable monitoring package that has a single interface point with the OpenHydro turbines, connects to shore power and data via a wet-mate connector, and can be recovered to the surface for maintenance and reconfiguration independent of the turbine. A logical next step would be to integrate these instruments within the package, such that one instrument can trigger the operation of another.
Emteborg, Håkan; Zeleny, Reinhard; Charoud-Got, Jean; Martos, Gustavo; Lüddeke, Jörg; Schellin, Holger; Teipel, Katharina
2014-01-01
Coupling an infrared (IR) camera to a freeze dryer for on-line monitoring of freeze-drying cycles is described for the first time. Normally, product temperature is measured using a few invasive Pt-100 probes, resulting in poor spatial resolution. To overcome this, an IR camera was placed on a process-scale freeze dryer. Imaging took place every 120 s through a Germanium window comprising 30,000 measurement points obtained contact-free from −40°C to 25°C. Results are presented for an empty system, bulk drying of cheese slurry, and drying of 1 mL human serum in 150 vials. During freezing of the empty system, differences of more than 5°C were measured on the shelf. Adding a tray to the empty system, a difference of more than 8°C was observed. These temperature differences probably cause different ice structures affecting the drying speed during sublimation. A temperature difference of maximum 13°C was observed in bulk mode during sublimation. When drying in vials, differences of more than 10°C were observed. Gradually, the large temperature differences disappeared during secondary drying and products were transformed into uniformly dry cakes. The experimental data show that the IR camera is a highly versatile on-line monitoring tool for different kinds of freeze-drying processes. © 2014 European Union. J Pharm Sci 103:2088–2097 (2014). PMID: 24902839
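The shelf-uniformity checks reported above reduce, per IR frame, to the spread between the warmest and coldest pixels. A minimal sketch on a synthetic thermal frame (the 5°C threshold echoes the differences reported in the abstract; the frame values are invented):

```python
import numpy as np

# Temperature spread across one IR frame: max minus min pixel temperature.
# The 2x2 frame below is a synthetic stand-in for a 30,000-point image.
def temperature_spread(frame_degC):
    return float(frame_degC.max() - frame_degC.min())

frame = np.array([[-38.0, -40.0], [-34.5, -39.0]])
spread = temperature_spread(frame)   # 5.5 degC in this toy frame
uniform = spread <= 5.0              # False -> potentially uneven freezing
```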
Use of a digital camera to monitor the growth and nitrogen status of cotton.
Jia, Biao; He, Haibing; Ma, Fuyu; Diao, Ming; Jiang, Guiying; Zheng, Zhong; Cui, Jin; Fan, Hua
2014-01-01
The main objective of this study was to develop a nondestructive method for monitoring cotton growth and N status using a digital camera. Digital images were taken of the cotton canopies between emergence and full bloom. The green and red values were extracted from the digital images and then used to calculate canopy cover. The values of canopy cover were closely correlated with the normalized difference vegetation index and the ratio vegetation index measured using a GreenSeeker handheld sensor. Models were calibrated to describe the relationship between canopy cover and three growth properties of the cotton crop (i.e., aboveground total N content, LAI, and aboveground biomass). There were close, exponential relationships between canopy cover and the three growth properties. The relationship for estimating cotton aboveground total N content was the most precise, with a coefficient of determination (R²) of 0.978 and a root mean square error (RMSE) of 1.479 g m⁻². Moreover, the models were validated in three fields of high-yield cotton. The results indicated that the best relationship, between canopy cover and aboveground total N content, had an R² of 0.926 and an RMSE of 1.631 g m⁻². In conclusion, as a near-ground remote assessment tool, digital cameras have good potential for monitoring cotton growth and N status.
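The workflow above can be sketched in two steps: classify canopy pixels from the green and red channels, then feed the resulting canopy cover into an exponential model of aboveground N. The pixel classifier and the coefficients a and b below are illustrative assumptions, not the study's calibrated model.

```python
import numpy as np

# Step 1: canopy cover as the fraction of pixels where green exceeds red
# (a simple greenness test; the study's actual classification may differ).
def canopy_cover(green, red):
    canopy = green.astype(float) > red.astype(float)
    return canopy.mean()

# Step 2: exponential canopy-cover-to-N model, N = a * exp(b * CC).
# Coefficients are hypothetical placeholders, units g m^-2.
def n_content(cc, a=0.8, b=3.0):
    return a * np.exp(b * cc)

green = np.array([[120, 90], [140, 60]])   # synthetic channel values
red = np.array([[100, 95], [110, 80]])
cc = canopy_cover(green, red)              # 0.5 in this toy frame
n = n_content(cc)
```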
Design criteria for a high energy Compton Camera and possible application to targeted cancer therapy
NASA Astrophysics Data System (ADS)
Conka Nurdan, T.; Nurdan, K.; Brill, A. B.; Walenta, A. H.
2015-07-01
The proposed research focuses on the design criteria for a Compton camera with high spatial resolution and sensitivity, operating at high gamma energies, and its possible application to molecular imaging. This application mainly concerns the detection and visualization of the pharmacokinetics of tumor-targeting substances specific to particular cancer sites. The expected high resolution (<0.5 mm) permits monitoring the pharmacokinetics of labeled gene constructs in vivo in small animals with a human tumor xenograft, which is one of the first steps in evaluating the potential utility of a candidate gene. The additional benefit of high-sensitivity detection will be improved cancer treatment strategies in patients based on the use of specific molecules binding to cancer sites for early detection of tumors, identification of metastases, monitoring of drug delivery, and radionuclide therapy for optimum cell killing at the tumor site. This new technology can provide high-resolution, high-sensitivity imaging over a wide range of gamma energies and will significantly extend the range of radiotracers that can be investigated and used clinically. The small and compact construction of the proposed camera system allows flexible application, which will be particularly useful for monitoring residual tumor around the resection site during surgery. It is also envisaged as a tool for testing the performance of new drug- and gene-based therapies in vitro and in vivo for tumor-targeting efficacy using automated large-scale screening methods.
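Compton camera imaging rests on the Compton scattering formula: from the energy deposited in the scatterer and the total photon energy, the scattering angle follows from cos θ = 1 − mₑc²(1/E_scat − 1/E_tot), defining a cone of possible source directions per event. A minimal sketch of that kinematic step (the 662 keV line and 200 keV deposit are just an example event):

```python
import math

M_E_C2 = 511.0  # electron rest energy, keV

# Compton cone half-angle from the two energy deposits of one event.
def compton_angle_deg(e_deposit_keV, e_total_keV):
    e_scat = e_total_keV - e_deposit_keV   # scattered photon energy
    cos_t = 1.0 - M_E_C2 * (1.0 / e_scat - 1.0 / e_total_keV)
    return math.degrees(math.acos(cos_t))

angle = compton_angle_deg(200.0, 662.0)    # e.g. a Cs-137 photon
```

Image reconstruction then intersects many such cones; the camera's angular resolution is driven by the energy resolution entering this formula.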
Beam line shielding calculations for an Electron Accelerator Mo-99 production facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mocko, Michal
2016-05-03
The purpose of this study is to evaluate the photon and neutron fields in and around the latest beam line design for the Mo-99 production facility. The radiation doses to the beam line components (quadrupoles, dipoles, beam stops and the linear accelerator) are calculated in the present report. The beam line design assumes placement of two cameras, infrared (IR) and optical transition radiation (OTR), for continuous monitoring of the beam spot on target during irradiation. The cameras will be placed off the beam axis, offset in the vertical direction. We explored typical shielding arrangements for the cameras and report the resulting neutron and photon dose fields.
A reaction-diffusion-based coding rate control mechanism for camera sensor networks.
Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki
2010-01-01
A wireless camera sensor network is useful for surveillance and monitoring thanks to its visual coverage and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by the considerable volume of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rates. Through simulation and practical experiments, we verify the effectiveness of our proposal.
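The flavour of a reaction-diffusion rate control can be sketched as a discrete update on a ring of camera nodes: each node's coding rate diffuses toward its neighbours while a local "reaction" term boosts nodes that currently observe a target. This illustrates the mechanism's spirit, not the paper's exact model; all parameters are invented.

```python
import numpy as np

# One reaction-diffusion step over a ring of nodes (np.roll wraps around):
# diffusion (D * Laplacian) spreads rate to neighbours, the reaction term
# boosts observing nodes, and decay pulls idle nodes back down.
def rd_step(rate, target, D=0.2, boost=1.0, decay=0.1, dt=0.1):
    lap = np.roll(rate, 1) + np.roll(rate, -1) - 2 * rate
    return rate + dt * (D * lap + boost * target - decay * rate)

rate = np.zeros(10)
target = np.zeros(10)
target[5] = 1.0                    # an object currently seen by node 5
for _ in range(50):
    rate = rd_step(rate, target)
# coding rate peaks at node 5 and tapers off toward distant nodes
```

The resulting spatial profile concentrates bandwidth where the target is while keeping neighbouring nodes warm, mirroring the biological pattern formation the authors draw on.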
NASA Astrophysics Data System (ADS)
Ćwiok, M.; Dominik, W.; Małek, K.; Mankiewicz, L.; Mrowca-Ciułacz, J.; Nawrocki, K.; Piotrowski, L. W.; Sitek, P.; Sokołowski, M.; Wrochna, G.; Żarnecki, A. F.
2007-06-01
The “Pi of the Sky” experiment is designed to search for prompt optical emission from GRB sources. 32 CCD cameras covering 2 steradians will monitor the sky continuously. The data will be analysed on-line in search of optical flashes. The prototype with 2 cameras, operating at Las Campanas (Chile) since 2004, has recognised several outbursts of flaring stars and has given limits for a few GRBs.
Repurposing video recordings for structure motion estimations
NASA Astrophysics Data System (ADS)
Khaloo, Ali; Lattanzi, David
2016-04-01
Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording it would become a valuable forensic analysis tool for engineers performing post-disaster structural evaluations. The difficulty is that almost all potential video cameras are not installed to monitor structure motions, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of a structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using the experimental records of the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.
NASA Astrophysics Data System (ADS)
Dekemper, E.; Fussen, D.; Vanhellemont, F.; Vanhamel, J.; Pieroux, D.; Berkenbosch, S.
2017-12-01
In an urban environment, nitrogen dioxide is emitted by a multitude of static and moving point sources (cars, industry, power plants, heating systems, …). Air quality models generally rely on a limited number of monitoring stations, which neither capture the whole pattern nor allow for full validation. So far, there has been a lack of instruments capable of measuring NO2 fields with the necessary spatio-temporal resolution above major point sources (power plants) or more extended ones (cities). We have developed a new type of passive remote sensing instrument aimed at the measurement of 2-D distributions of NO2 slant column densities (SCDs) with high spatial (meters) and temporal (minutes) resolution. The measurement principle has some similarities with the popular filter-based SO2 camera (used in volcanic and industrial sulfur emissions monitoring), as it relies on spectral images taken at wavelengths where the molecule's absorption cross section differs. But contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. A first prototype was successfully tested on the plume of a coal-fired power plant in Romania, revealing the dynamics of NO2 formation in the early plume. A lighter version of the NO2 camera is now being tested on other targets, such as oil refineries and urban air masses.
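The two-wavelength principle shared with the SO2 camera can be sketched as follows: comparing intensities at a strongly absorbed ("on") and a weakly absorbed ("off") wavelength yields a differential optical depth, which divided by the cross-section difference gives the slant column density. The cross-section values and intensities below are illustrative placeholders, not NO2 spectroscopy data.

```python
import numpy as np

SIGMA_ON, SIGMA_OFF = 5.0e-19, 1.0e-19   # cm^2/molecule, hypothetical

# Slant column density from on/off-band intensities, with i0_* the
# corresponding background (gas-free) intensities.
def slant_column(i_on, i_off, i0_on, i0_off):
    tau = np.log((i_off / i0_off) / (i_on / i0_on))   # differential optical depth
    return tau / (SIGMA_ON - SIGMA_OFF)               # molecules/cm^2

scd = slant_column(i_on=0.8, i_off=0.95, i0_on=1.0, i0_off=1.0)
```

Applied per pixel, this produces the 2-D SCD maps the instrument is designed for; the AOTF's role is to isolate the on/off spectral bands sharply enough to resolve NO2's structured absorption.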
Heumann, F.K.; Wilkinson, J.C.; Wooding, D.R.
1997-12-16
A remote appliance for supporting a tool for performing work at a work site on a substantially circular bore of a work piece and for providing video signals of the work site to a remote monitor comprises: a base plate having an inner face and an outer face; a plurality of rollers, wherein each roller is rotatably and adjustably attached to the inner face of the base plate and positioned to roll against the bore of the work piece when the base plate is positioned against the mouth of the bore such that the appliance may be rotated about the bore in a plane substantially parallel to the base plate; a tool holding means for supporting the tool, the tool holding means being adjustably attached to the outer face of the base plate such that the working end of the tool is positioned on the inner face side of the base plate; a camera for providing video signals of the work site to the remote monitor; and a camera holding means for supporting the camera on the inner face side of the base plate, the camera holding means being adjustably attached to the outer face of the base plate. In a preferred embodiment, roller guards are provided to protect the rollers from debris and a bore guard is provided to protect the bore from wear by the rollers and damage from debris. 5 figs.
WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research.
Nazir, Sajid; Newey, Scott; Irvine, R Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; Wal, René van der
2017-01-01
The widespread availability of relatively cheap, reliable and easy to use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named 'WiseEye', designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which the Passive Infrared triggering is confirmed through other modalities (e.g. radar, pixel change) to reduce the occurrence of false positive images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management.
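The confirmatory-sensing idea, accepting a PIR trigger only when a second modality agrees, reduces to a small predicate. The threshold and the exact decision rule here are assumptions for illustration, not WiseEye's actual implementation.

```python
def confirmed_trigger(pir, radar, pixel_change_frac, pixel_threshold=0.02):
    """Fire the camera only when the PIR trigger is confirmed by at least one
    other modality: a radar hit, or the fraction of changed pixels between
    consecutive frames exceeding a threshold. Values are illustrative."""
    if not pir:
        return False                       # no primary trigger, nothing to confirm
    return radar or pixel_change_frac >= pixel_threshold

# Vegetation moving in the wind may warm-trigger the PIR but neither the
# radar nor the frame-differencing check, so no image is saved.
spurious = confirmed_trigger(pir=True, radar=False, pixel_change_frac=0.001)
animal = confirmed_trigger(pir=True, radar=True, pixel_change_frac=0.10)
```

The same structure extends naturally to voting schemes (e.g. require two of three modalities) if false negatives matter more than storage.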
A view of the ET camera on STS-112
NASA Technical Reports Server (NTRS)
2002-01-01
KENNEDY SPACE CENTER, FLA. - A view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.
A view of the ET camera on STS-112
NASA Technical Reports Server (NTRS)
2002-01-01
KENNEDY SPACE CENTER, FLA. - A closeup view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.
STS-31 MS Sullivan and Pilot Bolden monitor SE 82-16 Ion Arc on OV-103 middeck
NASA Technical Reports Server (NTRS)
1990-01-01
STS-31 Mission Specialist (MS) Kathryn D. Sullivan monitors and advises ground controllers of the activity inside the Student Experiment (SE) 82-16, Ion arc - studies of the effects of microgravity and a magnetic field on an electric arc, mounted in front of the middeck lockers aboard Discovery, Orbiter Vehicle (OV) 103. Pilot Charles F. Bolden uses a video camera and an ARRIFLEX motion picture camera to record the activity inside the special chamber. A sign in front of the experiment reads 'SSIP 82-16 Greg's Experiment Happy Graduation from STS-31.' SSIP stands for Shuttle Student Involvement Program. Gregory S. Peterson who developed the experiment (Greg's Experiment) is a student at Utah State University and monitored the experiment's operation from JSC's Mission Control Center (MCC) during the flight. Decals displayed in the background on the orbiter galley represent the Hubble Space Telescope (HST), the United States (U.S.) Naval Reserve, Navy Oceanographers, U.S. Navy, and Univer
Exploring of PST-TBPM in Monitoring Dynamic Deformation of Steel Structure in Vibration
NASA Astrophysics Data System (ADS)
Chen, Mingzhi; Zhao, Yongqian; Hai, Hua; Yu, Chengxin; Zhang, Guojian
2018-01-01
In order to monitor the dynamic deformation of a steel structure in real time, digital photography is used in this paper. First, the grid method is used to correct the distortion of the digital cameras. Then the digital cameras are used to capture the initial and experimental images of the steel structure to obtain its relative deformation. PST-TBPM (photographing scale transformation-time baseline parallax method) is used to eliminate the parallax error and convert the pixel displacements of the deformation points into actual displacement values. In order to visualize the deformation trend of the steel structure, deformation curves are drawn from the displacement values of the deformation points. Results show that the average absolute accuracy and relative accuracy of PST-TBPM are 0.28 mm and 1.1‰, respectively. Digital photography as used in this study can meet the accuracy requirements of steel-structure deformation monitoring. It can also warn of safety problems in the steel structure and, through the on-site deformation curves, provide data support for managers' safety decisions.
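The scale-transformation step, turning pixel motion into millimetres while cancelling apparent camera motion against a stable reference point, can be sketched in 1-D. The function names and numbers are hypothetical; the actual PST-TBPM also treats the full parallax geometry.

```python
def scale_mm_per_px(known_length_mm, length_px):
    """Photographing scale, derived from a feature of known physical size."""
    return known_length_mm / length_px

def displacement_mm(target_px, target0_px, ref_px, ref0_px, scale):
    """1-D displacement of a deformation point: subtract the apparent motion
    of a stable reference point to cancel camera drift (a simplified take on
    the time-baseline parallax correction), then apply the scale."""
    drift = ref_px - ref0_px
    return ((target_px - target0_px) - drift) * scale

# Hypothetical numbers: a 500 mm member spans 1250 px, so 0.4 mm/px.
s = scale_mm_per_px(500.0, 1250.0)
# Target point moved 2.6 px while a fixed reference moved 2.1 px (drift).
d = displacement_mm(812.6, 810.0, 402.1, 400.0, s)
```

With these numbers the corrected motion is 0.5 px, i.e. 0.2 mm, which is the kind of sub-millimetre resolution the reported 0.28 mm accuracy implies.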
NASA Astrophysics Data System (ADS)
Liu, W.; Wang, H.; Liu, D.; Miu, Y.
2018-05-01
Precise geometric parameters are essential to ensure the positioning accuracy of space optical cameras. However, state-of-the-art on-orbit calibration methods inevitably suffer from long update cycles and poor timeliness. To this end, in this paper we exploit the optical auto-collimation principle and propose a real-time onboard calibration scheme for monitoring key geometric parameters. Specifically, in the proposed scheme, auto-collimation devices are first designed by installing collimated light sources, area-array CCDs, and prisms inside the satellite payload system. Using these devices, changes in the geometric parameters are elegantly converted into changes in the spot image positions. The variation of the geometric parameters can then be derived by extracting and processing the spot images. An experimental platform is set up to verify the feasibility of the proposed scheme and to analyze its precision. The experimental results demonstrate that it is feasible to apply the optical auto-collimation principle to real-time onboard monitoring.
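The conversion from spot-image position to geometric-parameter change rests on two pieces: centroiding the spot on the area-array CCD, and the auto-collimation property that a reflected beam deviates by twice the mirror tilt. A sketch under those assumptions (pixel pitch and focal length are invented, and the real scheme involves more parameters than a single tilt):

```python
import numpy as np

def spot_centroid(img):
    """Intensity-weighted centroid of a spot image on the area-array CCD."""
    total = img.sum()
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (x * img).sum() / total, (y * img).sum() / total

def tilt_from_shift(dx_px, pixel_pitch_mm, focal_mm):
    """In an auto-collimation setup the reflected beam deviates by twice the
    mirror tilt, so a spot shift dx maps to tilt = dx / (2 f), in radians."""
    return (dx_px * pixel_pitch_mm) / (2.0 * focal_mm)

# Synthetic Gaussian spot centred at (20.0, 12.0) on a 40x30 CCD.
y, x = np.mgrid[0:30, 0:40]
img = np.exp(-((x - 20.0) ** 2 + (y - 12.0) ** 2) / 4.0)
cx, cy = spot_centroid(img)

# A 5 px shift with 10 um pixels and a 100 mm collimator focal length.
tilt_rad = tilt_from_shift(5.0, 0.01, 100.0)
```

Sub-pixel centroiding is what makes the scheme sensitive: a fraction of a pixel of spot motion corresponds to microradian-level changes in the monitored angles.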
Monitoring Coating Thickness During Plasma Spraying
NASA Technical Reports Server (NTRS)
Miller, Robert A.
1990-01-01
High-resolution video measures thickness accurately without interfering with process. Camera views cylindrical part through filter during plasma spraying. Lamp backlights part, creating high-contrast silhouette on video monitor. Width analyzer counts number of lines in image of part after each pass of spray gun. Layer-by-layer measurements ensure adequate coat built up without danger of exceeding required thickness.
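The silhouette-width measurement lends itself to a few lines of image arithmetic: threshold the backlit frame, count dark pixels per video line, and convert the width increase to coating thickness. The scale, threshold and frame contents below are assumed, not from the tech brief.

```python
import numpy as np

def silhouette_width_px(frame, threshold=0.5):
    """Width of the backlit part on each video line: count pixels darker
    than the threshold in the high-contrast silhouette."""
    return (frame < threshold).sum(axis=1)

# Synthetic frames: a 10 px wide dark cylinder gains 1 px of coating per side.
before = np.ones((5, 40)); before[:, 15:25] = 0.0
after = np.ones((5, 40)); after[:, 14:26] = 0.0

growth_px = silhouette_width_px(after) - silhouette_width_px(before)
# Per-side thickness is half the width increase times an assumed 0.05 mm/px.
thickness_mm = growth_px / 2.0 * 0.05
```

Because only a silhouette is needed, the measurement is immune to the intense plasma glare once the filter removes it, which is what lets it run without interrupting the spray process.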
ERIC Educational Resources Information Center
Porter, Randall C.
1999-01-01
Discusses technology and equipment requirements for developing an effective distance-learning classroom. Areas covered include cabling, the control booth, microphones, acoustics, lighting, heating and air conditioning, cameras, video monitors, staffing, and power requirements. (GR)
NASA Technical Reports Server (NTRS)
1988-01-01
The Charters of Freedom Monitoring System will periodically assess the physical condition of the U.S. Constitution, Declaration of Independence and Bill of Rights. Although protected in helium-filled glass cases, the documents are subject to damage from light, vibration and humidity. The photometer is a CCD detector used as the electronic film for the system's scanning camera, which mechanically scans the document line by line and acquires a series of images, each representing a one-square-inch portion of the document. Perkin-Elmer Corporation's photometer is capable of detecting changes in contrast, shape or other indicators of degradation with 5 to 10 times the sensitivity of the human eye. A Vicom image processing computer receives the data from the photometer, stores it and manipulates it, allowing comparison of electronic images over time to detect changes.
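A crude stand-in for that comparison step, flagging scan tiles whose contrast drifts between sessions, might look like the following; the RMS-contrast metric and the 5% tolerance are assumptions for illustration, not the system's actual criteria.

```python
import numpy as np

def contrast(tile):
    """RMS contrast of one square-inch scan tile."""
    return tile.std()

def flag_changes(tiles_then, tiles_now, rel_tol=0.05):
    """Flag tiles whose contrast drifted by more than rel_tol between two
    scanning sessions -- a toy version of the photometer comparison."""
    flags = []
    for a, b in zip(tiles_then, tiles_now):
        c0, c1 = contrast(a), contrast(b)
        flags.append(abs(c1 - c0) > rel_tol * c0)
    return flags

# One tile fades (ink loses density), one is unchanged.
base = np.tile([0.0, 1.0], 8)
faded = 0.8 * base
flags = flag_changes([base, base], [faded, base])
```

The real system compares shape and other degradation indicators as well, but the tile-by-tile differencing over time is the common structure.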
Motion Imagery and Robotics Application (MIRA)
NASA Technical Reports Server (NTRS)
Martinez, Lindolfo; Rich, Thomas
2011-01-01
Objectives include: I. Prototype a camera service leveraging the CCSDS integrated protocol stack (MIRA/SM&C/AMS/DTN): a) CCSDS MIRA Service (new). b) Spacecraft Monitor and Control (SM&C). c) Asynchronous Messaging Service (AMS). d) Delay/Disruption Tolerant Networking (DTN). II. Additional MIRA objectives: a) Demo of camera control through ISS using the CCSDS protocol stack (Berlin, May 2011). b) Verify that the CCSDS standards stack can provide end-to-end space camera services across ground and space environments. c) Test interoperability of various CCSDS protocol standards. d) Identify overlaps in the design and implementations of the CCSDS protocol standards. e) Identify software incompatibilities in the CCSDS stack interfaces. f) Provide redlines to the SM&C, AMS, and DTN working groups. g) Enable the CCSDS MIRA service for potential use in ISS Kibo camera commanding. h) Assist in the long-term evolution of this entire group of CCSDS standards to TRL 6 or greater.
Adjustment of multi-CCD-chip-color-camera heads
NASA Astrophysics Data System (ADS)
Guyenot, Volker; Tittelbach, Guenther; Palme, Martin
1999-09-01
The principle of beam-splitter multi-chip cameras consists in splitting an image into multiple images of different spectral ranges and distributing these onto separate black-and-white CCD sensors. The resulting electrical signals from the chips are recombined to produce a high-quality color picture on the monitor. Because this principle guarantees higher resolution and sensitivity in comparison to conventional single-chip camera heads, the greater effort is acceptable. Furthermore, multi-chip cameras obtain the complete spectral information for each individual object point, while single-chip systems must rely on interpolation. In a joint project, Fraunhofer IOF and STRACON GmbH (in future, COBRA electronic GmbH) are developing methods for designing the optics and dichroic mirror system of such prism color beam splitter devices. Additionally, techniques and equipment for the alignment and assembly of color-beam-splitter multi-CCD devices, based on gluing with UV-curable adhesives, have been developed.
Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A
2017-07-25
Measurements of pressure-sensitive paint (PSP) have been performed using new, non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show the relevant imaging characteristics and the applicability of such imaging technology for PSP. Camera performance is benchmarked against standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.
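PSP intensity data ultimately feed a calibration such as the Stern-Volmer relation I_ref/I = A + B·(P/P_ref). The sketch below fits and inverts that relation on synthetic, noise-free data; the coefficients are invented, not taken from the paper's static-chamber results.

```python
import numpy as np

# Assumed Stern-Volmer coefficients for the synthetic paint.
A_TRUE, B_TRUE = 0.3, 0.7

# Calibration-chamber sweep: pressure ratios and the ideal camera response.
p_over_pref = np.linspace(0.5, 1.5, 11)
intensity_ratio = A_TRUE + B_TRUE * p_over_pref

# Linear least-squares fit recovers the calibration coefficients.
B_fit, A_fit = np.polyfit(p_over_pref, intensity_ratio, 1)

def pressure_from_ratio(i_ref_over_i, a=A_fit, b=B_fit):
    """Invert the Stern-Volmer calibration to get P/P_ref from a ratio image."""
    return (i_ref_over_i - a) / b
```

For a machine vision camera, the open questions the paper addresses (linearity, noise, bit depth) show up as scatter around this line, which is why benchmarking against a scientific-grade sensor matters.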
Lancioni, Giulio E; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N; O'Reilly, Mark F; Green, Vanessa A; Furniss, Fred
2014-06-01
This study assessed a new camera-based microswitch technology that did not require the use of color marks on the participants' faces. Two children with extensive multiple disabilities participated. The responses selected for them consisted of small lateral head movements and mouth closing or opening. The intervention was carried out according to a multiple probe design across responses. The technology involved a computer with a 2-GHz CPU, a USB video camera with a 16-mm lens, a USB cable connecting the camera and the computer, and a special software program written in ISO C++. The new technology was used satisfactorily with both children. Large increases in their responding were observed during the intervention periods (i.e., when the responses were followed by preferred stimulation). The new technology may be an important resource for persons with multiple disabilities and minimal motor behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Binder, Gary A.; /Caltech /SLAC
2010-08-25
In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam), to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function, from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.
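The underlying focus signal, that defocus broadens a star image, can be illustrated with a second-moment width estimate of a synthetic spot. This is only a stand-in for the paper's PSF-fitting algorithm, which fits a full out-of-focus image model rather than a single width.

```python
import numpy as np

def gaussian_spot(shape, cx, cy, sigma):
    """Synthetic star image: a circular Gaussian spot."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

def spot_sigma(img):
    """Second-moment (RMS) width of a star image. A defocused star grows
    wider, so this width serves as a crude per-star focus metric."""
    total = img.sum()
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    cx = (x * img).sum() / total
    cy = (y * img).sum() / total
    var = (((x - cx) ** 2 + (y - cy) ** 2) * img).sum() / total
    return np.sqrt(var / 2.0)

img = gaussian_spot((64, 64), 32.0, 32.0, 3.0)
width = spot_sigma(img)
```

Mapping such widths across the focal plane is what reveals tilt: a tilted camera shows a smooth gradient of best focus from one side of the field to the other.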
Ambitious Survey Spots Stellar Nurseries
NASA Astrophysics Data System (ADS)
2010-08-01
Astronomers scanning the skies as part of ESO's VISTA Magellanic Cloud survey have now obtained a spectacular picture of the Tarantula Nebula in our neighbouring galaxy, the Large Magellanic Cloud. This panoramic near-infrared view captures the nebula itself in great detail as well as the rich surrounding area of sky. The image was obtained at the start of a very ambitious survey of our neighbouring galaxies, the Magellanic Clouds, and their environment. The leader of the survey team, Maria-Rosa Cioni (University of Hertfordshire, UK) explains: "This view is of one of the most important regions of star formation in the local Universe - the spectacular 30 Doradus star-forming region, also called the Tarantula Nebula. At its core is a large cluster of stars called RMC 136, in which some of the most massive stars known are located." ESO's VISTA telescope [1] is a new survey telescope at the Paranal Observatory in Chile (eso0949). VISTA is equipped with a huge camera that detects light in the near-infrared part of the spectrum, revealing a wealth of detail about astronomical objects that gives us insight into the inner workings of astronomical phenomena. Near-infrared light has a longer wavelength than visible light and so we cannot see it directly for ourselves, but it can pass through much of the dust that would normally obscure our view. This makes it particularly useful for studying objects such as young stars that are still enshrouded in the gas and dust clouds from which they formed. Another powerful aspect of VISTA is the large area of the sky that its camera can capture in each shot. This image is the latest view from the VISTA Magellanic Cloud Survey (VMC). The project will scan a vast area - 184 square degrees of the sky (corresponding to almost one thousand times the apparent area of the full Moon) including our neighbouring galaxies the Large and Small Magellanic Clouds. 
The end result will be a detailed study of the star formation history and three-dimensional geometry of the Magellanic system. Chris Evans from the VMC team adds: "The VISTA images will allow us to extend our studies beyond the inner regions of the Tarantula into the multitude of smaller stellar nurseries nearby, which also harbour a rich population of young and massive stars. Armed with the new, exquisite infrared images, we will be able to probe the cocoons in which massive stars are still forming today, while also looking at their interaction with older stars in the wider region." The wide-field image shows a host of different objects. The bright area above the centre is the Tarantula Nebula itself, with the RMC 136 cluster of massive stars in its core. To the left is the NGC 2100 star cluster. To the right is the tiny remnant of the supernova SN1987A (eso1032). Below the centre are a series of star-forming regions including NGC 2080 - nicknamed the "Ghost Head Nebula" - and the NGC 2083 star cluster. The VISTA Magellanic Cloud Survey is one of six huge near-infrared surveys of the southern sky that will take up most of the first five years of operations of VISTA. Notes [1] VISTA ― the Visible and Infrared Survey Telescope for Astronomy ― is the newest telescope at ESO's Paranal Observatory in northern Chile. VISTA is a survey telescope working at near-infrared wavelengths and is the world's largest survey telescope. Its large mirror, wide field of view and very sensitive detectors will reveal a completely new view of the southern sky. The telescope is housed on the peak adjacent to the one hosting ESO's Very Large Telescope (VLT) and shares the same exceptional observing conditions. VISTA has a main mirror that is 4.1 m across. In photographic terms it can be thought of as a 67-megapixel digital camera with a 13 000 mm f/3.25 mirror lens. 
More information ESO, the European Southern Observatory, is the foremost intergovernmental astronomy organisation in Europe and the world's most productive astronomical observatory. It is supported by 14 countries: Austria, Belgium, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world's most advanced visible-light astronomical observatory and VISTA, the world's largest survey telescope. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. ESO is currently planning a 42-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become "the world's biggest eye on the sky".
Non-invasive diagnostics of ion beams in strong toroidal magnetic fields with standard CMOS cameras
NASA Astrophysics Data System (ADS)
Ates, Adem; Ates, Yakup; Niebuhr, Heiko; Ratzinger, Ulrich
2018-01-01
A superconducting Figure-8 stellarator-type magnetostatic storage ring (F8SR) is under investigation at the Institute for Applied Physics (IAP) at Goethe University Frankfurt. Besides numerical simulations on an optimized design for beam transport and injection, a scaled-down (0.6 T) experiment with two 30° toroidal magnets has been set up for further investigations. A great challenge is the development of a non-destructive, magnetically insensitive and flexible detector for local investigations of an ion beam propagating through the toroidal magnetostatic field. This paper introduces a new way of measuring the beam path by residual gas monitoring. It uses a single-board camera connected to a standard single-board computer by a camera serial interface, all placed inside the vacuum chamber. First experiments were done with one camera; in a next step, two cameras arranged at 90° to each other were installed. With the help of the two cameras, which are movable along the beam pipe, the theoretical predictions and previous experimental results were successfully confirmed. The transport of H+ and H2+ ion beams with energies of 7 keV and beam currents of about 1 mA was successfully investigated.
Feral Cattle in the White Rock Canyon Reserve at Los Alamos National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hathcock, Charles D.; Hansen, Leslie A.
2014-03-27
At the request of the Los Alamos Field Office (the Field Office), Los Alamos National Security (LANS) biologists placed remote-triggered wildlife cameras in and around the mouth of Ancho Canyon in the White Rock Canyon Reserve (the Reserve) to monitor use by feral cattle. The cameras were placed in October 2012 and retrieved in January 2013. Two cameras were placed upstream in Ancho Canyon away from the Rio Grande along the perennial flows from Ancho Springs, two cameras were placed at the north side of the mouth of Ancho Canyon along the Rio Grande, and two cameras were placed at the south side of the mouth of Ancho Canyon along the Rio Grande. The cameras recorded three different individual feral cows using this area, as well as a variety of local native wildlife. This report details our results and issues associated with feral cattle in the Reserve. Feral cattle pose significant risks to human safety, impact cultural and biological resources, and affect the environmental integrity of the Reserve. Regional stakeholders have communicated to the Field Office that they support feral cattle removal.
VizieR Online Data Catalog: Astrometric monitoring of ultracool dwarf binaries (Dupuy+, 2017)
NASA Astrophysics Data System (ADS)
Dupuy, T. J.; Liu, M. C.
2017-09-01
In Table 1 we list all 33 binaries in our Keck+CFHT astrometric monitoring sample, along with three other binaries that have published orbit and parallax measurements. We began obtaining resolved Keck AO astrometry in 2007-2008, and we combined our new astrometry with available data in the literature or public archives (e.g., HST and Gemini) to refine our orbital period estimates and thereby our prioritization for Keck observations. We present here new Keck/NIRC2 AO imaging and non-redundant aperture-masking observations, in addition to a re-analysis of our own previously published data and publicly available archival data for our sample binaries. Table 2 gives our measured astrometry and flux ratios for all Keck AO data used in our orbital analysis spanning 2003 Apr 15 to 2016 May 13. In total there are 339 distinct measurements (unique bandpass and epoch for a given target), where 302 of these are direct imaging and 37 are non-redundant aperture masking. Eight of the imaging measurements are from six unpublished archival data sets. See section 3.1.1 for further details. In addition to our Keck AO monitoring, we also obtained data for three T dwarf binaries over a three-year HST program using the Advanced Camera for Surveys (ACS) Wide Field Camera (WFC) in the F814W bandpass. See section 3.1.2 for further details. Many of our sample binaries have HST imaging data in the public archive. We have re-analyzed the available archival data coming from the WFPC2 Planetary Camera (WFPC2-PC1), ACS High Resolution Channel (ACS-HRC), and NICMOS Camera 1 (NICMOS-NIC1). See section 3.1.3 for further details. We present here an updated analysis of our data from the Hawaii Infrared Parallax Program that uses the CFHT facility infrared camera WIRCam. Our observing strategy and custom astrometry pipeline are described in detail in Dupuy & Liu (2012, J/ApJS/201/19). See section 3.2 for further explanations. (10 data files).
Use of a CCD camera for the thermographic study of a transient liquid phase bonding process in steel
NASA Astrophysics Data System (ADS)
Castro, Eduardo H.; Epelbaum, Carlos; Carnero, Angel; Arcondo, Bibiana
2001-03-01
The bonding of steel pieces and the development of novel soldering methods, appropriate to the extended variety of applications of steels nowadays, give the sensing of temperature an outstanding role in any metallurgical process. Transient liquid phase bonding (TLPB) processes have been successfully employed to join metals, among them steels. A thin layer of metal A, with a liquidus temperature TLA, is located between two pieces of metal B, with a liquidus temperature TLB higher than TLA. The joining zone is heated up to a temperature T(TLA
Observations of the Perseids 2012 using SPOSH cameras
NASA Astrophysics Data System (ADS)
Margonis, A.; Flohrer, J.; Christou, A.; Elgner, S.; Oberst, J.
2012-09-01
The Perseids are one of the most prominent annual meteor showers, occurring every summer when the stream of dust particles originating from the Halley-type comet 109P/Swift-Tuttle intersects the orbital path of the Earth. The dense core of this stream passes Earth's orbit on the 12th of August, producing the maximum number of meteors. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR) organize observing campaigns every summer to monitor Perseid activity. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system [0]. The SPOSH camera has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract and is designed to image faint, short-lived phenomena on dark planetary hemispheres. The camera features a highly sensitive back-illuminated 1024x1024 CCD chip and a high dynamic range of 14 bits. The custom-made fish-eye lens offers a 120°x120° field of view (168° over the diagonal). Figure 1: A meteor captured simultaneously by the SPOSH cameras during the 2011 observing campaign in Greece. The horizon, including surrounding mountains, can be seen in the image corners as a result of the camera's large FOV. The observations will be made on the Greek Peloponnese peninsula, monitoring the post-peak activity of the Perseids during a one-week period around the August new Moon (14th to 21st). Two SPOSH cameras will be deployed at two remote sites at high altitude for the triangulation of meteor trajectories captured at both stations simultaneously. The observations during this time interval will give us the opportunity to study the poorly observed post-maximum branch of the Perseid stream and compare the results with datasets from previous campaigns which covered different periods of this long-lived meteor shower. The acquired data will be processed using dedicated software for meteor data reduction developed at TUB and DLR. 
Assuming a successful campaign, statistics, trajectories and photometric properties of the processed double-station meteors will be presented at the conference. Furthermore, a first-order statistical analysis of the meteors processed during the 2011 and the new 2012 campaigns will be presented.
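The two-station triangulation mentioned above reduces, in its simplest geometric form, to finding the closest point between the two observation rays. A minimal sketch of that step (not the TUB/DLR reduction software; station positions in a common local Cartesian frame are assumed):

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two observation rays.

    p1, p2 -- station positions in a common Cartesian frame (e.g. km)
    d1, d2 -- line-of-sight direction vectors toward the meteor
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b          # zero only if the rays are parallel
    t1 = (b * e - c * d) / denom   # distance along ray 1
    t2 = (a * e - b * d) / denom   # distance along ray 2
    return (p1 + t1 * d1 + p2 + t2 * d2) / 2.0
```

Applied to every synchronized frame pair, this yields a point on the meteor trajectory per frame; a line fit through those points then gives the atmospheric trajectory.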
NASA Astrophysics Data System (ADS)
Dekemper, Emmanuel; Vanhamel, Jurgen; Van Opstal, Bert; Fussen, Didier
2016-12-01
The abundance of NO2 in the boundary layer relates to air quality and pollution source monitoring. Observing the spatiotemporal distribution of NO2 above well-delimited sources (flue gas stacks, volcanoes, ships) or more extended ones (cities) allows for applications such as monitoring emission fluxes or studying the dynamic chemistry of the plume and its transport. So far, most attempts to map the NO2 field from the ground have been made with visible-light scanning grating spectrometers. While these benefit from high retrieval accuracy, they achieve only a relatively low spatiotemporal resolution, which hampers the detection of dynamic features. We present a new type of passive remote sensing instrument aimed at measuring the 2-D distribution of NO2 slant column densities (SCDs) with a high spatiotemporal resolution. The measurement principle has strong similarities with the popular filter-based SO2 camera, as it relies on spectral images taken at wavelengths where the molecule's absorption cross section differs. Contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. The NO2 camera's capabilities are demonstrated by imaging the NO2 abundance in the plume of a coal-fired power plant. During this experiment, the 2-D distribution of the NO2 SCD was retrieved with a temporal resolution of 3 min and a spatial sampling of 50 cm (over a 250 × 250 m2 area). The detection limit was close to 5 × 1016 molecules cm-2, with a maximum detected SCD of 4 × 1017 molecules cm-2. Illustrating the added value of the NO2 camera measurements, the data reveal the dynamics of the NO to NO2 conversion in the early plume with unprecedented resolution: from its release in the air, and for 100 m upwards, the observed NO2 plume concentration increased at a rate of 0.75-1.25 g s-1.
In joint campaigns with SO2 cameras, the NO2 camera could also help in removing the bias introduced by the NO2 interference with the SO2 spectrum.
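The two-wavelength measurement principle described above can be sketched as a per-pixel Beer-Lambert retrieval: the ratio of the on-band and off-band transmittances gives a differential optical depth, which divided by the differential cross section yields the SCD. This is a simplified illustration, not the instrument's actual retrieval chain; the cross-section values are placeholders, and I0 denotes gas-free background images:

```python
import numpy as np

# Placeholder cross sections [cm^2/molecule]; real values depend on the
# AOTF passbands and must come from a reference spectroscopic database.
SIGMA_ON = 6.0e-19    # strongly absorbing wavelength
SIGMA_OFF = 1.0e-19   # weakly absorbing wavelength

def scd_map(i_on, i_off, i0_on, i0_off):
    """Per-pixel NO2 slant column density [molecules/cm^2].

    i_on/i_off   -- plume images at the two wavelengths
    i0_on/i0_off -- gas-free background images at the same wavelengths
    """
    # Differential optical depth between the two wavelengths
    tau = -np.log((i_on / i0_on) / (i_off / i0_off))
    return tau / (SIGMA_ON - SIGMA_OFF)
```

With a synthetic plume obeying Beer-Lambert absorption exactly, the retrieval recovers the input column.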
An Overview of the CBERS-2 Satellite and Comparison of the CBERS-2 CCD Data with the L5 TM Data
NASA Technical Reports Server (NTRS)
Chandler, Gyanesh
2007-01-01
The CBERS satellite carries a multi-sensor payload with different spatial resolutions and collection frequencies: the High Resolution CCD Camera (HRCCD), the Infrared Multispectral Scanner (IRMSS), and the Wide-Field Imager (WFI). The CCD and WFI cameras operate in the VNIR region, while the IRMSS operates in the SWIR and thermal regions. In addition to the imaging payload, the satellite carries a Data Collection System (DCS) and a Space Environment Monitor (SEM).
2018-05-01
performance nor effectiveness in protecting sea turtles has been documented. This study was the first step in evaluating TTC as a potential replacement for ... draghead turtle deflectors. The primary objective was to evaluate and document operational performance of this technology, not effectiveness of reducing ... incidental take. TTC operational performance was monitored using underwater camera systems over a short period of time whereas effectiveness for
Economical Video Monitoring of Traffic
NASA Technical Reports Server (NTRS)
Houser, B. C.; Paine, G.; Rubenstein, L. D.; Parham, O. Bruce, Jr.; Graves, W.; Bradley, C.
1986-01-01
Data compression allows video signals to be transmitted economically over telephone circuits. Telephone lines transmit television signals to a remote traffic-control center; the lines also carry command signals from the center to the TV camera and compressor at the highway site. A video system with television cameras positioned at critical points on highways allows traffic controllers to determine visually, almost immediately, the exact cause of a traffic-flow disruption, e.g., accidents, breakdowns, or spills. Controllers can then dispatch appropriate emergency services and alert motorists to minimize traffic backups.
NASA Astrophysics Data System (ADS)
Brown, T.; Borevitz, J. O.; Zimmermann, C.
2010-12-01
We have developed a camera system that can record hourly, gigapixel (multi-billion pixel) scale images of an ecosystem in a 360x90 degree panorama. The “Gigavision” camera system is solar-powered and can wirelessly stream data to a server. Quantitative data collection from multiyear timelapse gigapixel images is facilitated through an innovative web-based toolkit for recording time-series data on developmental stages (phenology) from any plant in the camera’s field of view. Gigapixel images enable time-series recording of entire landscapes with a resolution sufficient to record phenology from a majority of individuals in entire populations of plants. When coupled with next generation sequencing, quantitative population genomics can be performed in a landscape context, linking ecology and evolution in situ and in real time. The Gigavision camera system achieves gigapixel image resolution by recording rows and columns of overlapping megapixel images. These images are stitched together into a single gigapixel resolution image using commercially available panorama software. Hardware consists of a 5-18 megapixel resolution DSLR or Network IP camera mounted on a pair of heavy-duty servo motors that provide pan-tilt capabilities. The servos and camera are controlled with a low-power Windows PC. Servo movement, power switching, and system status monitoring are enabled with Phidgets-brand sensor boards. System temperature, humidity, power usage, and battery voltage are all monitored at 5 minute intervals. All sensor data is uploaded via cellular or 802.11 wireless to an interactive online interface for easy remote monitoring of system status. Systems with direct internet connections upload the full sized images directly to our automated stitching server where they are stitched and available online for viewing within an hour of capture.
Systems with cellular wireless upload an 80 megapixel “thumbnail” of each larger panorama, and full-sized images are manually retrieved at bi-weekly intervals. Our longer-term goal is to make gigapixel time-lapse datasets available online in an interactive interface that layers plant-level phenology data and genomic sequence data from individual plants with gigapixel resolution images, weather, and other abiotic sensor data. Co-visualization of all of these data types provides researchers with a powerful new tool for examining complex ecological interactions across scales from the individual to the ecosystem. We will present detailed phenostage data from more than 100 plants of multiple species from our Gigavision timelapse camera at our “Big Blowout East” field site in the Indiana Dunes State Park, IN. This camera has been recording three to four 700 million pixel images a day since February 28, 2010. The camera field of view covers an area of about 7 hectares, resulting in an average image resolution of about 1 pixel per centimeter over the entire site. We will also discuss some of the many technological challenges of developing and maintaining these types of hardware systems, collecting quantitative data from gigapixel resolution time-lapse data, and effectively managing terabyte-sized datasets of millions of images.
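The row-and-column tiling described above can be sized from the camera's field of view and the stitching overlap. A rough sketch; the 10x7.5 degree camera FOV and 25% overlap in the example are assumptions for illustration, not the Gigavision system's actual settings:

```python
import math

def tile_grid(pan_deg, tilt_deg, cam_hfov_deg, cam_vfov_deg, overlap=0.25):
    """Rows and columns of pan/tilt positions needed to cover a panorama.

    overlap -- fraction of each frame shared with its neighbour for stitching.
    """
    step_h = cam_hfov_deg * (1.0 - overlap)   # horizontal step between frames
    step_v = cam_vfov_deg * (1.0 - overlap)   # vertical step between frames
    cols = math.ceil((pan_deg - cam_hfov_deg) / step_h) + 1
    rows = math.ceil((tilt_deg - cam_vfov_deg) / step_v) + 1
    return rows, cols

# A 360x90 degree panorama with an (assumed) 10x7.5 degree camera FOV:
rows, cols = tile_grid(360, 90, 10, 7.5)   # -> (16, 48), i.e. 768 frames
```

Multiplying the frame count by the per-frame megapixel count (minus the overlapping margins) gives the approximate stitched panorama resolution.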
NASA Astrophysics Data System (ADS)
Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.
2017-08-01
This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system are presented, where Lyapunov stability is shown. Simulation and experimental results demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circle above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.
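The repetitive-control idea exploited above, learning a periodic feedforward over successive cycles of the HFV's circular motion, can be illustrated on a toy scalar plant. This is a generic sketch of repetitive/iterative learning control, not the paper's controller; the plant model and gains are arbitrary assumptions:

```python
import numpy as np

N = 100                      # samples per period of the circular motion
k_r, k_p = 0.8, 0.5          # repetitive and proportional gains (illustrative)
ref = np.sin(2 * np.pi * np.arange(N) / N)   # periodic reference in the image frame

u_mem = np.zeros(N)          # periodic feedforward learned over previous cycles
y = 0.0
cycle_rms = []
for cycle in range(20):
    err = np.zeros(N)
    for k in range(N):
        u = u_mem[k] + k_p * (ref[k] - y)    # feedforward + proportional feedback
        y = 0.6 * y + 0.4 * u                # toy first-order plant
        err[k] = ref[k] - y
    u_mem += k_r * err                       # update the periodic feedforward
    cycle_rms.append(np.sqrt(np.mean(err ** 2)))
```

Because the reference repeats every N samples, the error made in one cycle is used to correct the control in the next, and the per-cycle RMS error shrinks toward zero.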
[True color accuracy in digital forensic photography].
Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A
2016-01-01
Forensic photographs must not only be unaltered and authentic, capture context-relevant images, and meet certain minimum requirements for image sharpness and information density; color accuracy also plays an important role, for instance, in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person, but, as a discrete property of an image, color in digital photos is also influenced to a considerable extent by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that, of the tools for digital cameras tested in this study, true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool. Apart from aspects such as the simplicity and quickness of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be used for the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend that the use of a color management tool be considered for the acquisition of all images that demand high true color accuracy (in particular in the setting of injury documentation).
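Color-management tools of the kind recommended above typically work by fitting a correction transform from the camera's measured patch colors to the chart's reference values. A minimal linear sketch (real tools such as SpyderCheckr build more elaborate profiles; a plain 3x3 least-squares matrix on linear RGB is assumed here):

```python
import numpy as np

def fit_color_matrix(measured, reference):
    """Least-squares 3x3 matrix M such that measured @ M.T ~ reference.

    measured, reference -- (n_patches, 3) linear-RGB patch values.
    """
    x, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return x.T

def correct(rgb, m):
    """Apply the correction to pixels given as an (..., 3) array."""
    return rgb @ m.T
```

Fitting on the chart's patches and then applying the matrix to every pixel brings the whole image into the reference color space, independent of the camera used.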
Coggins, Lewis G; Bacheler, Nathan M; Gwinn, Daniel C
2014-01-01
Occupancy models using incidence data collected repeatedly at sites across the range of a population are increasingly employed to infer patterns and processes influencing population distribution and dynamics. While such work is common in terrestrial systems, fewer examples exist in marine applications. This disparity likely exists because the replicate samples required by these models to account for imperfect detection are often impractical to obtain when surveying aquatic organisms, particularly fishes. We employ simultaneous sampling using fish traps and novel underwater camera observations to generate the requisite replicate samples for occupancy models of red snapper, a reef fish species. Since the replicate samples are collected simultaneously by multiple sampling devices, many typical problems encountered when obtaining replicate observations are avoided. Our results suggest that augmenting traditional fish trap sampling with camera observations not only doubled the probability of detecting red snapper in reef habitats off the Southeast coast of the United States, but supplied the necessary observations to infer factors influencing population distribution and abundance while accounting for imperfect detection. We found that detection probabilities tended to be higher for camera traps than traditional fish traps. Furthermore, camera trap detections were influenced by the current direction and turbidity of the water, indicating that collecting data on these variables is important for future monitoring. These models indicate that the distribution and abundance of this species is more heavily influenced by latitude and depth than by micro-scale reef characteristics, lending credence to previous characterizations of red snapper as a reef habitat generalist.
This study demonstrates the utility of simultaneous sampling devices, including camera traps, in aquatic environments to inform occupancy models and account for imperfect detection when describing factors influencing fish population distribution and dynamics.
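The roughly doubled detection probability reported above is consistent with how independent sampling devices combine: the chance that at least one device detects the species is one minus the product of the per-device miss probabilities. A sketch with hypothetical per-deployment probabilities (the study's actual estimates are not reproduced here):

```python
def combined_detection(p_devices):
    """P(at least one device detects) for independent sampling devices."""
    miss = 1.0
    for p in p_devices:
        miss *= 1.0 - p          # probability that this device misses
    return 1.0 - miss

# Hypothetical values: trap alone 0.3, camera alone 0.55.
p_both = combined_detection([0.3, 0.55])   # 0.685, more than double the trap alone
```

Because the two devices sample the same site at the same time, each deployment yields the paired replicate observations that the occupancy model needs.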
Multisensor system for the protection of critical infrastructure of a seaport
NASA Astrophysics Data System (ADS)
Kastek, Mariusz; Dulski, Rafał; Zyczkowski, Marek; Szustakowski, Mieczysław; Trzaskawka, Piotr; Ciurapinski, Wiesław; Grelowska, Grazyna; Gloza, Ignacy; Milewski, Stanislaw; Listewnik, Karol
2012-06-01
There are many separate infrastructural objects within a harbor area that may be considered "critical", such as gas and oil terminals or anchored naval vessels. Those objects require special protection, including security systems capable of monitoring both surface and underwater areas, because an intrusion into the protected area may be attempted using small surface vehicles (boats, kayaks, rafts, floating devices with weapons and explosives) as well as underwater ones (manned or unmanned submarines, scuba divers). The paper will present the concept of a multisensor security system for harbor protection, capable of complex monitoring of selected critical objects within the protected area. The proposed system consists of a command centre and several different sensors deployed in key areas, providing effective protection from land and sea, with special attention focused on the monitoring of the underwater zone. The initial design of such a system will be presented, along with its configuration and initial tests of selected components. The protection of the surface area is based on a medium-range radar and LLTV and infrared cameras. The underwater zone will be monitored by a sonar and by acoustic and magnetic barriers, connected into an integrated monitoring system. Theoretical analyses concerning the detection of fast, small surface objects (such as RIB boats) by a camera system, together with real test results in various weather conditions, will also be presented.
Flux of Kilogram-sized Meteoroids from Lunar Impact Monitoring. Supplemental Movies
NASA Technical Reports Server (NTRS)
Suggs, Robert; Cooke, William; Suggs, Ron; McNamara, Heather; Swift, Wesley; Moser, Danielle; Diekmann, Anne
2008-01-01
These videos and audio accompany the slide presentation "Flux of Kilogram-sized Meteoroids from Lunar Impact Monitoring." The slide presentation reviews the routine lunar impact monitoring that has harvested over 110 impacts in 2 years of observations using telescopes and low-light-level video cameras. The night side of the lunar surface provides a large collecting area for detecting these impacts and allows estimation of the flux of meteoroids down to a limiting luminous energy.
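The flux estimate described above reduces, at its core, to a count normalized by collecting area and effective observing time. A trivial sketch; the area and duration passed in below are illustrative assumptions, not the presentation's actual values:

```python
def impact_flux(n_impacts, area_km2, hours):
    """Impacts per km^2 per hour of effective monitoring (simple ratio)."""
    return n_impacts / (area_km2 * hours)

# Illustrative only: 110 impacts over an assumed night-side collecting
# area of 3.8e6 km^2 and an assumed 250 hours of effective observation.
rate = impact_flux(110, 3.8e6, 250)
```

Converting this to a meteoroid mass flux additionally requires the limiting luminous energy and an assumed luminous efficiency, which the presentation addresses.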
Volcanic Cloud and Aerosol Monitor (VOLCAM) for Deep Space Gateway
NASA Astrophysics Data System (ADS)
Krotkov, N.; Bhartia, P. K.; Torres, O.; Li, C.; Sander, S.; Realmuto, V.; Carn, S.; Herman, J.
2018-02-01
We propose ultraviolet (UV) and thermal infrared (TIR) filter cameras for dual-purpose whole-Earth imaging, with complementary natural hazards applications and Earth system science goals.
Sackstein, M
2006-10-01
Over the last five years digital photography has become ubiquitous. For the family photo album, a 4 or 5 megapixel camera costing about 2000 NIS will produce satisfactory results for most people. However, for intra-oral photography the common wisdom holds that only professional photographic equipment is up to the task. Such equipment typically costs around 12,000 NIS and includes the camera body, an attachable macro lens and a ringflash. The following article challenges this conception. Although professional equipment does produce the most exemplary results, a highly effective database of clinical pictures can be compiled even with a "non-professional" digital camera. Since the year 2002, my clinical work has been routinely documented with digital cameras of the Nikon CoolPix series. The advantages are that these digicams are economical both in price and in size and allow easy transport and operation when compared to their expensive and bulky professional counterparts. The details of how to use a non-professional digicam to produce and maintain an effective clinical picture database, for documentation, monitoring, demonstration and professional fulfillment, are described below.
Introduction of A New Toolbox for Processing Digital Images From Multiple Camera Networks: FMIPROT
NASA Astrophysics Data System (ADS)
Melih Tanis, Cemal; Nadir Arslan, Ali
2017-04-01
Webcam networks intended for scientific monitoring of ecosystems provide digital images and other environmental data for various studies. Other types of camera networks can also be used for scientific purposes, e.g. traffic webcams for phenological studies, or camera networks for ski tracks and avalanche monitoring over mountains for hydrological studies. To efficiently harness the potential of these camera networks, easy-to-use software which can obtain and handle images from different networks having different protocols and standards is necessary. For the analysis of images from webcam networks, numerous software packages are freely available. These software packages have different strong features not only for analyzing but also for post-processing digital images. But specifically for ease of use, applicability and scalability, a different set of features could be added. Thus, a more customized approach would be of high value, not only for analyzing images of comprehensive camera networks, but also considering the possibility to create operational data extraction and processing with an easy-to-use toolbox. In this paper, we introduce a new toolbox, entitled Finnish Meteorological Institute Image PROcessing Tool (FMIPROT), in which such a customized approach is followed. FMIPROT currently has the following features: • straightforward installation, • no software dependencies that require extra installations, • communication with multiple camera networks, • automatic downloading and handling of images, • user-friendly and simple user interface, • data filtering, • visualizing results on customizable plots, • plugins, allowing users to add their own algorithms. Current image analyses in FMIPROT include "Color Fraction Extraction" and "Vegetation Indices". The color fraction extraction analysis calculates the fractions of red, green and blue colors in a region of interest, along with brightness and luminance parameters.
The vegetation indices analysis is a collection of indices used in vegetation phenology and includes "Green Fraction" (green chromatic coordinate), "Green-Red Vegetation Index" and "Green Excess Index". A "Snow cover fraction" analysis, which detects snow-covered pixels in the images and georeferences them on a geospatial plane to calculate the snow cover fraction, is currently being implemented. FMIPROT is being developed during the EU Life+ MONIMET project. Altogether we mounted 28 cameras at 14 different sites in Finland as the MONIMET camera network. In this paper, we will present details of FMIPROT and analysis results from the MONIMET camera network. We will also discuss future planned developments of FMIPROT.
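The indices named above have simple closed forms on the mean color values of a region of interest. A sketch of the standard phenology-camera formulations (not necessarily FMIPROT's exact implementation; the Green Excess Index is computed here on raw digital numbers):

```python
import numpy as np

def phenology_indices(roi):
    """Mean-colour vegetation indices for an ROI given as an (H, W, 3) RGB array."""
    r, g, b = (roi[..., i].astype(float).mean() for i in range(3))
    gcc = g / (r + g + b)        # green chromatic coordinate ("Green Fraction")
    grvi = (g - r) / (g + r)     # Green-Red Vegetation Index
    gei = 2.0 * g - (r + b)      # Green Excess Index
    return gcc, grvi, gei
```

Tracking these values over a season of hourly images yields the green-up and senescence curves used in phenological studies.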
Camera traps and activity signs to estimate wild boar density and derive abundance indices.
Massei, Giovanna; Coats, Julia; Lambert, Mark Simon; Pietravalle, Stephane; Gill, Robin; Cowan, Dave
2018-04-01
Populations of wild boar and feral pigs are increasing worldwide, in parallel with their significant environmental and economic impact. Reliable methods of monitoring trends and estimating abundance are needed to measure the effects of interventions on population size. The main aims of this study, carried out in five English woodlands, were: (i) to compare wild boar abundance indices obtained from camera trap surveys and from activity signs; and (ii) to assess the precision of density estimates in relation to different densities of camera traps. For each woodland, we calculated a passive activity index (PAI) based on camera trap surveys, rooting activity and wild boar trails on transects, and estimated absolute densities based on camera trap surveys. PAIs obtained using different methods showed similar patterns. We found significant between-year differences in abundance of wild boar using PAIs based on camera trap surveys and on trails on transects, but not on signs of rooting on transects. The density of wild boar from camera trap surveys varied between 0.7 and 7 animals/km2. Increasing the density of camera traps above nine per km2 did not increase the precision of the estimate of wild boar density. PAIs based on number of wild boar trails and on camera trap data appear to be more sensitive to changes in population size than PAIs based on signs of rooting. For wild boar densities similar to those recorded in this study, nine camera traps per km2 are sufficient to estimate the mean density of wild boar. © 2017 Crown copyright. Pest Management Science © 2017 Society of Chemical Industry.
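The diminishing returns from adding camera traps are what one would expect if each camera contributes a roughly independent density sample: the standard error of the mean falls only with the square root of the number of cameras. A toy illustration under that independence assumption (not the study's estimator; the between-camera SD is a made-up figure):

```python
import math

def density_se(sd_between_cameras, n_cameras):
    """Standard error of the mean density from n independent camera traps."""
    return sd_between_cameras / math.sqrt(n_cameras)

# With an assumed between-camera SD of 3 animals/km2:
# 1 camera -> SE 3.0, 9 cameras -> SE 1.0, 18 cameras -> SE ~0.71.
# Doubling the effort beyond ~9 cameras buys only a marginal gain.
```

This square-root behavior is why the study found no precision improvement above nine cameras per km2.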
NASA Astrophysics Data System (ADS)
Klaessens, John H.; van der Veen, Albert; Verdaasdonk, Rudolf M.
2017-03-01
Recently, low-cost smartphone-based thermal cameras are being considered for use in a clinical setting for monitoring physiological temperature responses such as body temperature change, local inflammation, perfusion changes or (burn) wound healing. These thermal cameras contain uncooled micro-bolometers with an internal calibration check and have a temperature resolution of 0.1 degree. For clinical applications a fast quality measurement before use is required (absolute temperature check), and quality control (stability, repeatability, absolute temperature, absolute temperature differences) should be performed regularly. Therefore, a calibrated temperature phantom has been developed based on thermistor heating at both ends of a black-coated metal strip to create a controllable temperature gradient from room temperature (26 °C) up to 100 °C. The absolute temperatures on the strip are determined with 5 software-controlled PT-1000 sensors using lookup tables. In this study 3 FLIR-ONE cameras and one high-end camera were checked with this temperature phantom. The results show relatively good agreement between both the low-cost and high-end cameras and the phantom temperature gradient, with temperature differences of 1 degree up to 6 degrees between the cameras and the phantom. The measurements were repeated to assess absolute temperature and temperature stability over the sensor area. Both low-cost and high-end thermal cameras measured relative temperature changes with high accuracy and absolute temperatures with constant deviations. Low-cost smartphone-based thermal cameras can be a good alternative to high-end thermal cameras for routine clinical measurements, appropriate to the research question, provided regular calibration checks are performed for quality control.
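The PT-1000 lookup tables mentioned above implement the standard platinum-RTD resistance-temperature relation, which for the 0-850 °C range can also be inverted in closed form. A sketch assuming the standard IEC 60751 coefficients (the phantom's own calibration tables may differ):

```python
import math

R0 = 1000.0     # PT-1000 resistance at 0 degC [ohm]
A = 3.9083e-3   # IEC 60751 Callendar-Van Dusen coefficients (0..850 degC)
B = -5.775e-7

def pt1000_temperature(r_ohm):
    """Temperature [degC] from PT-1000 resistance via R = R0*(1 + A*T + B*T^2)."""
    x = r_ohm / R0 - 1.0
    # Positive root of the quadratic B*T^2 + A*T - x = 0
    return (-A + math.sqrt(A * A + 4.0 * B * x)) / (2.0 * B)
```

For example, 1385.055 ohm corresponds to 100 °C under these coefficients, which spans the phantom's full gradient range.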
Design of Dual-Road Transportable Portal Monitoring System for Visible Light and Gamma-Ray Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karnowski, Thomas Paul; Cunningham, Mark F; Goddard Jr, James Samuel
2010-01-01
The use of radiation sensors as portal monitors is increasing due to heightened concerns over the smuggling of fissile material. Transportable systems that can detect significant quantities of fissile material that might be present in vehicular traffic are of particular interest, especially if they can be rapidly deployed to different locations. To serve this application, we have constructed a rapid-deployment portal monitor that uses visible-light and gamma-ray imaging to allow simultaneous monitoring of multiple lanes of traffic from the side of a roadway. The system operation uses machine vision methods on the visible-light images to detect vehicles as they enter and exit the field of view and to measure their position in each frame. The visible-light and gamma-ray cameras are synchronized, which allows the gamma-ray imager to harvest gamma-ray data specific to each vehicle, integrating its radiation signature for the entire time that it is in the field of view. Thus our system creates vehicle-specific radiation signatures and avoids source confusion problems that plague non-imaging approaches to the same problem. Our current prototype instrument was designed for measurement of up to five lanes of freeway traffic with a pair of instruments, one on either side of the roadway. Stereoscopic cameras are used with a third alignment camera for motion compensation and are mounted on a 50' deployable mast. In this paper we discuss the design considerations for the machine-vision system, the algorithms used for vehicle detection and position estimates, and the overall architecture of the system. We also discuss system calibration for rapid deployment. We conclude with notes on preliminary performance and deployment.
Design of dual-road transportable portal monitoring system for visible light and gamma-ray imaging
NASA Astrophysics Data System (ADS)
Karnowski, Thomas P.; Cunningham, Mark F.; Goddard, James S.; Cheriyadat, Anil M.; Hornback, Donald E.; Fabris, Lorenzo; Kerekes, Ryan A.; Ziock, Klaus-Peter; Bradley, E. Craig; Chesser, J.; Marchant, W.
2010-04-01
The use of radiation sensors as portal monitors is increasing due to heightened concerns over the smuggling of fissile material. Transportable systems that can detect significant quantities of fissile material that might be present in vehicular traffic are of particular interest, especially if they can be rapidly deployed to different locations. To serve this application, we have constructed a rapid-deployment portal monitor that uses visible-light and gamma-ray imaging to allow simultaneous monitoring of multiple lanes of traffic from the side of a roadway. The system operation uses machine vision methods on the visible-light images to detect vehicles as they enter and exit the field of view and to measure their position in each frame. The visible-light and gamma-ray cameras are synchronized which allows the gamma-ray imager to harvest gamma-ray data specific to each vehicle, integrating its radiation signature for the entire time that it is in the field of view. Thus our system creates vehicle-specific radiation signatures and avoids source confusion problems that plague non-imaging approaches to the same problem. Our current prototype instrument was designed for measurement of up to five lanes of freeway traffic with a pair of instruments, one on either side of the roadway. Stereoscopic cameras are used with a third "alignment" camera for motion compensation and are mounted on a 50' deployable mast. In this paper we discuss the design considerations for the machine-vision system, the algorithms used for vehicle detection and position estimates, and the overall architecture of the system. We also discuss system calibration for rapid deployment. We conclude with notes on preliminary performance and deployment.
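The vehicle-specific integration described above amounts to gating the synchronized gamma-ray frames by each machine-vision vehicle track and summing the counts per track. A schematic sketch with a hypothetical per-frame data layout (the real system attributes counts via image-plane position, which is abstracted away here):

```python
from collections import defaultdict

def integrate_signatures(frames):
    """Sum per-vehicle gamma counts over synchronized frames.

    frames -- iterable of {vehicle_id: counts_attributed_this_frame} dicts,
    one per synchronized visible/gamma frame pair (hypothetical layout).
    """
    totals = defaultdict(int)
    for frame in frames:
        for vid, counts in frame.items():
            totals[vid] += counts          # accumulate while vehicle is in FOV
    return dict(totals)
```

Because each vehicle's counts are accumulated over its entire transit of the field of view, a hot vehicle stands out even when it passes alongside benign traffic.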
Predators of Greater Sage-Grouse nests identified by video monitoring
Coates, P.S.; Connelly, J.W.; Delehanty, D.J.
2008-01-01
Nest predation is the primary cause of nest failure for Greater Sage-Grouse (Centrocercus urophasianus), but the identity of their nest predators is often uncertain. Confirming the identity of these predators may be useful in enhancing management strategies designed to increase nest success. From 2002 to 2005, we monitored 87 Greater Sage-Grouse nests (camera, N = 55; no camera, N = 32) in northeastern Nevada and south-central Idaho and identified predators at 17 nests, with Common Ravens (Corvus corax) preying on eggs at 10 nests and American badgers (Taxidea taxus) at seven. Rodents were frequently observed at grouse nests, but did not prey on grouse eggs. Because sign left by ravens and badgers was often indistinguishable following nest predation, identifying nest predators based on egg removal, the presence of egg shells, or other sign was not possible. Most predation occurred when females were on nests. Active nest defense by grouse was rare and always unsuccessful. Continuous video monitoring of Sage-Grouse nests permitted unambiguous identification of nest predators. Additional monitoring studies could help improve our understanding of the causes of Sage-Grouse nest failure in the face of land-use changes in the Intermountain West. © 2008 Association of Field Ornithologists.
Jachowski, David S.; Katzner, Todd; Rodrigue, Jane L.; Ford, W. Mark
2015-01-01
Conservation of animal migratory movements is among the most important issues in wildlife management. To address this need for landscape-scale monitoring of raptor populations, we developed a novel, baited photographic observation network termed the “Appalachian Eagle Monitoring Program” (AEMP). During winter months of 2008–2012, we partnered with professional and citizen scientists in 11 states in the United States to collect approximately 2.5 million images. To our knowledge, this represents the largest such camera-trap effort to date. Analyses of data collected in 2011 and 2012 revealed complex, often species-specific, spatial and temporal patterns in winter raptor movement behavior as well as spatial and temporal resource partitioning between raptor species. Although programmatic advances in data analysis and involvement are needed, the continued growth of the program has the potential to provide a long-term, cost-effective, range-wide monitoring tool for avian and terrestrial scavengers during the winter season. Perhaps most importantly, by relying heavily on citizen scientists, AEMP has the potential to improve long-term interest and support for raptor conservation and serve as a model for raptor conservation programs in other portions of the world.
DSCOVR Public Release Statement V02
Atmospheric Science Data Center
2017-07-06
... where it performs its primary objective of monitoring the solar wind as well as observing the Earth from sunrise to sunset with two Earth Science sensors: the Earth Polychromatic Imaging Camera (EPIC) and ...
Foam Experiment Hardware are Flown on Microgravity Rocket MAXUS 4
NASA Astrophysics Data System (ADS)
Lockowandt, C.; Löth, K.; Jansson, O.; Holm, P.; Lundin, M.; Schneider, H.; Larsson, B.
2002-01-01
The Foam module was developed by the Swedish Space Corporation and was used to perform foam experiments on the sounding rocket MAXUS 4, launched from Esrange on 29 April 2001. The development and launch of the module were financed by ESA. Four different foam experiments were performed: two on aqueous foams by Dr Michele Adler from LPMDI, University of Marne la Vallée, Paris, and two on non-aqueous foams by Dr Bengt Kronberg from YKI, Institute for Surface Chemistry, Stockholm. The foam was generated in four separate foam systems and monitored in microgravity with CCD cameras. The purpose of the experiment was to generate and study foam in microgravity: without gravity there is no drainage, so reactions in the foam can be studied in its absence. Four solutions with various stabilities were investigated. The aqueous solutions contained water, SDS (sodium dodecyl sulphate) and dodecanol. The organic solutions contained ethylene glycol, a cationic surfactant (cetyl trimethyl ammonium bromide, CTAB) and decanol. Carbon dioxide was used to generate the aqueous foam and nitrogen the organic foam. The experiment system comprised four completely independent systems, each with an injection unit, experiment chamber and gas system. The main part of each system is the experiment chamber, where the foam is generated and monitored. The chamber's inner dimensions are 50x50x50 mm, with front and back walls made of glass. The front window is used for monitoring the foam and the back window for back illumination. The front glass has etched crosses on the inside as reference points. At the bottom of the cell is a glass frit and at the top a gas inlet/outlet. The foam was generated by injecting the experiment liquid into the glass frit at the bottom of the experiment chamber while gas was blown through the frit, generating a small amount of foam. This procedure was performed at 10 bar.
The pressure in the experiment chamber was then lowered to approximately 0.1 bar to expand the foam into a dry foam that filled the chamber. The foam was regenerated during flight by re-pressurising the cell and repeating the foam-generation procedure. The module had four individual experiment chambers for the four different solutions, each controlled individually with its own experiment parameters and procedures. The gas system comprises on/off valves and adjustable valves to control the pressure and the gas and liquid flows during foam generation. It can be divided into four sections, each serving one experiment chamber. The sections are partly connected in two pairs with a common inlet and outlet. Each pair is supplied by a 1 l gas bottle filled to a pressure of 40 bar, with a pressure regulator lowering the pressure from 40 bar to 10 bar. Two sections are connected to the same outlet. The gas outlets from the experiment chambers are connected to two symmetrically placed outlets on the outer structure, fitted with diffusers so as not to disturb the g-levels. The foam in each experiment chamber was monitored with one tomography camera and one overview camera (8 CCD cameras in total). The tomography camera is mounted on a translation table, which makes it possible to move it in the depth direction of the experiment chamber. The video signals from the 8 CCD cameras were stored onboard with two DV recorders. Two video signals were also transmitted to ground for real-time evaluation and operation of the experiment; which camera signal was transmitted could be selected by telecommand. With the tomography system it was possible to take sequences of images at different depths in the foam; these sequences are used to construct a 3-D model of the foam after flight. The overview camera has a fixed position and a field of view that covers the whole experiment chamber.
This camera is used for monitoring the generation of foam and its overall behaviour. The experiment was performed successfully, with foam generation in all four experiment chambers. Foam was also regenerated during flight by telecommand. The experiment data are under evaluation.
Brown, David M; Juarez, Juan C; Brown, Andrea M
2013-12-01
A laser differential image-motion monitor (DIMM) system was designed and constructed as part of a turbulence characterization suite during the DARPA free-space optical experimental network experiment (FOENEX) program. The link measurement system measures the atmospheric coherence length (r0), atmospheric scintillation, and power in the bucket for the 1550 nm band. DIMM measurements are made with two separate apertures coupled to a single InGaAs camera. The angle of arrival (AoA) of the wavefront at each aperture can be calculated from the focal-spot movements imaged by the camera. Because a single camera measures both focal spots simultaneously, the variance of the differential AoA allows a straightforward computation of r0, as in traditional DIMM systems. Standard measurements of scintillation and power in the bucket are made with the same apertures by redirecting a fraction of the incoming signals to InGaAs detectors integrated with logarithmic amplifiers for high sensitivity and high dynamic range. By leveraging two small apertures, the instrument achieves a small size and weight, suitable for mounting on actively tracking laser communication terminals to characterize link performance.
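In traditional DIMM practice, the r0 computation inverts the variance of the differential longitudinal image motion. A minimal sketch using the commonly quoted Sarazin-Roddier coefficients (an assumption here; the FOENEX processing chain may differ in detail):

```python
import math

# Hedged sketch: invert the standard DIMM relation
#   var_l = 2 * lam^2 * r0^(-5/3) * (0.179*D^(-1/3) - 0.0968*b^(-1/3))
# for the Fried parameter r0 (longitudinal differential motion).
def fried_parameter(var_l, wavelength, aperture_d, baseline_b):
    """var_l: variance of differential longitudinal AoA [rad^2];
    wavelength, aperture diameter D, and baseline b in metres."""
    k = 2.0 * wavelength**2 * (0.179 * aperture_d**(-1/3)
                               - 0.0968 * baseline_b**(-1/3))
    return (k / var_l) ** (3/5)
```

Note that r0 scales as the variance to the -3/5 power, so stronger differential motion means a shorter coherence length.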
Oversampling in virtual visual sensors as a means to recover higher modes of vibration
NASA Astrophysics Data System (ADS)
Shariati, Ali; Schumacher, Thomas
2015-03-01
Vibration-based structural health monitoring (SHM) techniques require modal information from the monitored structure in order to estimate the location and severity of damage. Natural frequencies also provide useful information to calibrate finite element models. Several types of physical sensors can measure the response over a range of frequencies; for most of them, however, accessibility, a limited number of measurement points, wiring, and high system cost represent major challenges. Recent optical sensing approaches offer advantages such as easy access to visible areas, distributed sensing capabilities, and comparatively inexpensive data recording, with no wiring issues. In this research we propose a novel methodology to measure natural frequencies of structures using digital video cameras, based on virtual visual sensors (VVS). In our initial study, which used commercially available inexpensive digital video cameras, we found that for multiple-degree-of-freedom systems it is difficult to detect all of the natural frequencies simultaneously due to low quantization resolution. In this study we show how oversampling, enabled by high-end high-frame-rate video cameras, allows recovery of all three natural frequencies of a three-story lab-scale structure.
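The VVS idea can be illustrated with a short sketch (synthetic signal, hypothetical function name): treat a pixel's intensity trace in the video as a sensor record and read the natural frequencies from the peaks of its spectrum.

```python
import numpy as np

# Sketch of a virtual visual sensor: a pixel (or small-region) intensity
# time series from the video stands in for a physical vibration sensor,
# and its spectrum reveals the structure's natural frequencies.
def dominant_frequencies(signal, fs, n_peaks=3):
    """Return the n_peaks strongest nonzero-frequency components of
    `signal` sampled at fs Hz, sorted in ascending frequency."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))  # remove DC first
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    order = np.argsort(spec)[::-1]          # bins sorted by magnitude
    return sorted(freqs[order[:n_peaks]])
```

A higher frame rate both raises the measurable frequency range and, via oversampling, averages down quantization noise, which is why the low-resolution limitation noted above eases with high-frame-rate cameras.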
Brassine, Eléanor; Parker, Daniel
2015-01-01
Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species. PMID:26698574
Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R
2018-05-21
Three dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth camera 3D surface imaging system in imaging people with obesity. The concurrent validity of the depth camera based system was investigated by comparing measures of mid-trunk volume to a gold standard. The repeatability and acceptability of the depth camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth camera system and the gold standard but excellent correlation between volume estimates (r² = 0.997), with little evidence of proportional bias. The depth camera system was highly repeatable: low typical error (0.192 L), high intraclass correlation coefficient (>0.999) and low technical error of measurement (0.64%). Depth camera based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting. Copyright © 2018 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
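The repeatability statistics quoted above follow standard definitions for test-retest pairs; a sketch under those assumed definitions (the data below are synthetic, not the study's measurements):

```python
import math

# Standard test-retest repeatability statistics (assumed definitions):
# typical error = SD of the trial-to-trial differences / sqrt(2);
# technical error of measurement (TEM) as a percent of the grand mean.
def typical_error(trial1, trial2):
    diffs = [a - b for a, b in zip(trial1, trial2)]
    mean_d = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1))
    return sd / math.sqrt(2)

def tem_percent(trial1, trial2):
    diffs = [a - b for a, b in zip(trial1, trial2)]
    tem = math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))
    grand_mean = (sum(trial1) + sum(trial2)) / (2 * len(trial1))
    return 100.0 * tem / grand_mean
```

Both statistics express trial-to-trial noise; dividing by sqrt(2) converts the spread of pairwise differences into the error attributable to a single measurement.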
Code of Federal Regulations, 2012 CFR
2012-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS... as video cameras, digital scanning sonar, and upweller systems; monitoring of sediment quality...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS... as video cameras, digital scanning sonar, and upweller systems; monitoring of sediment quality...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS... as video cameras, digital scanning sonar, and upweller systems; monitoring of sediment quality...
Simultaneous Monitoring of Ballistocardiogram and Photoplethysmogram Using Camera
Shao, Dangdang; Tsow, Francis; Liu, Chenbin; Yang, Yuting; Tao, Nongjian
2017-01-01
We present a noncontact method to measure Ballistocardiogram (BCG) and Photoplethysmogram (PPG) simultaneously using a single camera. The method tracks the motion of facial features to determine displacement BCG, and extracts the corresponding velocity and acceleration BCGs by taking first and second temporal derivatives from the displacement BCG, respectively. The measured BCG waveforms are consistent with those reported in literature and also with those recorded with an accelerometer-based reference method. The method also tracks PPG based on the reflected light from the same facial region, which makes it possible to track both BCG and PPG with the same optics. We verify the robustness and reproducibility of the noncontact method with a small pilot study with 23 subjects. The presented method is the first demonstration of simultaneous BCG and PPG monitoring without wearing any extra equipment or marker by the subject. PMID:27362754
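The derivative step described above can be sketched as follows, assuming a uniformly sampled displacement trace (the facial-feature tracking pipeline itself is not reproduced here, and the data in the test are synthetic):

```python
import numpy as np

# Sketch of the BCG derivative step: given a displacement-BCG trace
# (tracked facial-feature position vs. time), the velocity and
# acceleration BCGs are its first and second temporal derivatives.
def velocity_acceleration_bcg(displacement, fs):
    """displacement: 1-D array of tracked position; fs: frame rate [Hz]."""
    dt = 1.0 / fs
    velocity = np.gradient(displacement, dt)       # first derivative
    acceleration = np.gradient(velocity, dt)       # second derivative
    return velocity, acceleration
```

`np.gradient` uses central differences in the interior, which keeps the derivative estimates aligned in time with the original samples.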
Tracking subpixel targets in domestic environments
NASA Astrophysics Data System (ADS)
Govinda, V.; Ralph, J. F.; Spencer, J. W.; Goulermas, J. Y.; Smith, D. H.
2006-05-01
In recent years, closed circuit cameras have become a common feature of urban life. There are environments however where the movement of people needs to be monitored but high resolution imaging is not necessarily desirable: rooms where privacy is required and the occupants are not comfortable with the perceived intrusion. Examples might include domiciliary care environments, prisons and other secure facilities, and even large open plan offices. This paper discusses algorithms that allow activity within this type of sensitive environment to be monitored using data from low resolution cameras (ones where all objects of interest are sub-pixel and cannot be resolved) and other non-intrusive sensors. The algorithms are based on techniques originally developed for wide area reconnaissance and surveillance applications. Of particular importance is determining the minimum spatial resolution that is required to provide a specific level of coverage and reliability.
Versatile Mobile and Stationary Low-Cost Approaches for Hydrological Measurements
NASA Astrophysics Data System (ADS)
Kröhnert, M.; Eltner, A.
2018-05-01
In the last decades, an increase in the number of extreme precipitation events has been observed, which leads to increasing risks of flash floods and landslides. Conventional gauging stations are indispensable for monitoring and prediction, but they are expensive to construct, manage, and maintain. Thus, the density of observation networks is rather low, leading to insufficient spatio-temporal resolution to capture hydrological extreme events that occur with short response times, especially in small-scale catchments. Smaller creeks and rivers require permanent observation as well, to allow for a better understanding of the underlying processes and to enhance forecasting reliability. Today's smartphones with inbuilt cameras, positioning sensors and powerful processing units may serve as wide-spread measurement devices for event-based water gauging during floods. With the aid of volunteered geographic information (VGI), the hydrological network of water gauges can be highly densified in its spatial and temporal domain, even for currently unobserved catchments. Furthermore, stationary low-cost solutions based on Raspberry Pi imaging systems are versatile for permanent monitoring of hydrological parameters. Both complementary systems, i.e. smartphone and Raspberry Pi camera, share the same methodology to extract water levels automatically, which is explained in the paper in detail. The annotation of 3D reference data by 2D image measurements is addressed depending on camera setup and the river section to be monitored. Accuracies for water stage measurements are in the range of several millimetres up to a few centimetres.
Calibration of the Auger Fluorescence Telescopes
NASA Astrophysics Data System (ADS)
Klages, H.; Pierre Auger Observatory Collaboration
Thirty fluorescence telescopes in four stations will overlook the detector array of the southern hemisphere experiment of the Pierre Auger project. The main aim of these telescopes is tracking of EHE air showers, measurement of the longitudinal shower development (Xmax) and determination of the absolute energy of EHE events. A telescope camera contains 440 PMTs each covering a 1.5 x 1.5 degree pixel of the sky. The response of every pixel is converted into the number of charged particles at the observed part of the shower. This reconstruction includes the shower/observer geometry and the details of the atmospheric photon production and transport. The remaining experimental task is to convert the ADC counts of the camera pixel electronics into the light flux entering the Schmidt aperture. Three types of calibration and control are necessary : a) Monitoring of time dependent variations has to be performed for all parts of the optics and for all pixels frequently. Common illumination for all pixels of a camera allows the detection of individual deviations. Properties of windows, filters and mirrors have to be measured separately. b) Differences in pixel-to-pixel efficiency are mainly due to PMT gain and to differences in effective area (camera shadow, mirror size limits). Homogeneous and isotropic illumination will enable cross calibration. c) An absolute calibration has to be performed once in a while using trusted light monitors. The calibration methods used for the Pierre Auger FD telescopes in Argentina are discussed.
THE VMC SURVEY. XIX. CLASSICAL CEPHEIDS IN THE SMALL MAGELLANIC CLOUD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ripepi, V.; Marconi, M.; Moretti, M. I.
2016-06-01
The “VISTA near-infrared YJKs survey of the Magellanic Clouds System” (VMC) is collecting deep Ks-band time-series photometry of pulsating variable stars hosted by the two Magellanic Clouds and their connecting Bridge. In this paper, we present Y, J, Ks light curves for a sample of 4172 Small Magellanic Cloud (SMC) Classical Cepheids (CCs). These data, complemented with literature V values, allowed us to construct a variety of period–luminosity (PL), period–luminosity–color (PLC), and period–Wesenheit (PW) relationships, which are valid for Fundamental (F), First Overtone (FO), and Second Overtone (SO) pulsators. The relations involving the V, J, Ks bands are in agreement with their counterparts in the literature. As for the Y band, to our knowledge, we present the first CC PL, PW, and PLC relations ever derived using this filter. We also present the first near-infrared PL, PW, and PLC relations for SO pulsators to date. We used PW(V, Ks) to estimate the relative SMC–LMC distance and, in turn, the absolute distance to the SMC. For the former quantity, we find a value of Δμ = 0.55 ± 0.04 mag, which is in rather good agreement with other evaluations based on CCs, but significantly larger than the results obtained from older population II distance indicators. This discrepancy might be due to the different geometric distributions of young and old tracers in both Clouds. As for the absolute distance to the SMC, our best estimates are μSMC = 19.01 ± 0.05 mag and μSMC = 19.04 ± 0.06 mag, based on two distance measurements to the LMC which rely on accurate CC and eclipsing Cepheid binary data, respectively.
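The two absolute moduli are consistent with simply adding the relative modulus to an adopted LMC distance modulus:

```latex
\mu_{\mathrm{SMC}} = \mu_{\mathrm{LMC}} + \Delta\mu
```

For example, with the widely used eclipsing-binary LMC modulus of about 18.49 mag (a value assumed here for illustration, not stated in the abstract), 18.49 + 0.55 = 19.04 mag, matching the second estimate; the uncertainties of the two terms add in quadrature.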
e-phenology: monitoring leaf phenology and tracking climate changes in the tropics
NASA Astrophysics Data System (ADS)
Morellato, Patrícia; Alberton, Bruna; Almeida, Jurandy; Alex, Jefersson; Mariano, Greice; Torres, Ricardo
2014-05-01
The e-phenology project is a multidisciplinary effort combining research in computer science and phenology. Its goal is to attack theoretical and practical problems involving the use of new technologies for remote phenological observation, aiming to detect local environmental changes. It is geared towards three objectives: (a) the use of new technologies of environmental monitoring based on remote phenology monitoring systems; (b) creation of a protocol for a Brazilian long-term phenology monitoring program and for integration across disciplines, advancing our knowledge of seasonal responses to climate change within the tropics; and (c) providing models, methods and algorithms to support the management, integration and analysis of data from remote phenology systems. The research team is composed of computer scientists and phenology researchers. Our first results include: Phenology towers - We set up the first phenology tower in our core cerrado-savanna 1 study site at Itirapina, São Paulo, Brazil. The tower received a complete climatic station and a digital camera, set up to take a daily sequence of images (five images per hour, from 6:00 to 18:00 h). We set up similar phenology towers with climatic stations and cameras at five more sites: cerrado-savanna 2 (Pé de Gigante, SP), cerrado grassland 3 (Itirapina, SP), rupestrian fields 4 (Serra do Cipó, MG), seasonal forest 5 (Angatuba, SP) and Atlantic rainforest 6 (Santa Virginia, SP). Phenology database - We finished modeling and validating a phenology database that stores ground phenology and near-remote phenology data, and we are carrying out the implementation with data ingestion. Remote phenology and image processing - We performed the first analyses of the phenology of cerrado sites 1 to 4 derived from digital images. Analyses were conducted by extracting color information (the RGB red, green and blue color channels) from selected parts of the image, named regions of interest (ROIs), using the green color channel. We analyzed a daily sequence of images (6:00 to 18:00 h). Our results are innovative and indicate great variation in the color-change response of tropical trees. We validated the camera phenology against our on-the-ground direct observations at the core cerrado site 1. We are developing image-processing software to automatically process the digital images and generate time series for further analyses. New techniques and image features, such as machine learning and visual rhythms, have been used to extract seasonal features from the data and for data processing. Machine learning was successfully applied to identify similar species within the image. Visual rhythms emerge as a new analytic tool for phenological interpretation. Next research steps include analysis of longer data series, correlation with local climatic data, analysis and comparison of patterns among different vegetation sites, preparation of a comprehensive protocol for digital camera phenology, and development of new technologies to assess vegetation changes using digital cameras. Support: FAPESP-Microsoft Research, CNPq, CAPES.
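A common way to turn ROI color information into a greenness time series in camera phenology is the green chromatic coordinate, Gcc = G / (R + G + B). A minimal sketch of that index (the project's exact index and software are not specified here, so this is an assumed, standard formulation):

```python
# Sketch of the green chromatic coordinate, Gcc = G / (R + G + B),
# computed over the pixels of one region of interest (ROI).
def green_chromatic_coordinate(roi_pixels):
    """roi_pixels: iterable of (R, G, B) tuples from one ROI."""
    r = sum(p[0] for p in roi_pixels)
    g = sum(p[1] for p in roi_pixels)
    b = sum(p[2] for p in roi_pixels)
    return g / (r + g + b)
```

Normalizing by total brightness makes the index relatively insensitive to illumination changes between images, which matters for day-to-day comparisons.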
García-Tejero, Iván Francisco; Ortega-Arévalo, Carlos José; Iglesias-Contreras, Manuel; Moreno, José Manuel; Souza, Luciene; Tavira, Simón Cuadros; Durán-Zuazo, Víctor Hugo
2018-03-31
Different tools are being implemented to improve water management in irrigated agricultural areas of semiarid environments. Thermography has progressively been introduced as a promising technique for irrigation scheduling and for assessing crop water status, especially when deficit irrigation is implemented. However, an important limitation is the cost of current thermal cameras, which severely restricts their practical use by farmers and technicians. This work evaluates the potential and robustness of a thermal imaging camera that connects to a smartphone (Flir One), recently developed by Flir Systems Inc., as a first step in assessing crop water status. The trial was conducted on mature almond (Prunus dulcis Mill.) trees subjected to different irrigation treatments. Thermal information obtained with the Flir One camera was compared with that obtained with a conventional high-resolution thermal camera (Flir SC660) and subsequently related to other plant physiological parameters (leaf water potential, Ψleaf, and stomatal conductance, gs). The smartphone-connected thermal imaging camera provided useful information for estimating crop water status in almond trees, making it a promising tool to accelerate the monitoring process and thereby enhance water-stress management of almond orchards.
Using Engineering Cameras on Mars Landers and Rovers to Retrieve Atmospheric Dust Loading
NASA Astrophysics Data System (ADS)
Wolfe, C. A.; Lemmon, M. T.
2014-12-01
Dust in the Martian atmosphere influences energy deposition, dynamics, and the viability of solar powered exploration vehicles. The Viking, Pathfinder, Spirit, Opportunity, Phoenix, and Curiosity landers and rovers each included the ability to image the Sun with a science camera that included a neutral density filter. Direct images of the Sun provide the ability to measure extinction by dust and ice in the atmosphere. These observations have been used to characterize dust storms, to provide ground truth sites for orbiter-based global measurements of dust loading, and to help monitor solar panel performance. In the cost-constrained environment of Mars exploration, future missions may omit such cameras, as the solar-powered InSight mission has. We seek to provide a robust capability of determining atmospheric opacity from sky images taken with cameras that have not been designed for solar imaging, such as lander and rover engineering cameras. Operational use requires the ability to retrieve optical depth on a timescale useful to mission planning, and with an accuracy and precision sufficient to support both mission planning and validating orbital measurements. We will present a simulation-based assessment of imaging strategies and their error budgets, as well as a validation based on archival engineering camera data.
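The direct-Sun retrieval that solar-filter cameras enable rests on the Beer-Lambert law: the measured solar signal I relates to the top-of-atmosphere signal I0 by I = I0 exp(-τ m), with airmass m ≈ 1/cos(solar zenith angle), so τ = ln(I0/I)/m. A sketch under that simple plane-parallel assumption (illustrative values, not mission data):

```python
import math

# Beer-Lambert optical-depth retrieval from a direct solar measurement,
# assuming a plane-parallel atmosphere with airmass m = 1/cos(zenith).
def optical_depth(i_measured, i_top_of_atmosphere, zenith_angle_deg):
    airmass = 1.0 / math.cos(math.radians(zenith_angle_deg))
    return math.log(i_top_of_atmosphere / i_measured) / airmass
```

The harder problem the abstract addresses, retrieving τ from diffuse sky images with non-solar cameras, requires modeling scattered light rather than this direct transmission law, which is why a simulation-based error budget is needed.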
From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth
2015-08-05
This animation shows images of the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope and the Earth, one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA's Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).
In-camera video-stream processing for bandwidth reduction in web inspection
NASA Astrophysics Data System (ADS)
Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.
1996-02-01
Automated machine vision systems are now widely used for industrial inspection tasks where video-stream data is taken in by the camera and then sent out to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data stream bandwidth reduction algorithms; the output of the camera contains only the information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx FPGA. The FPGA is directly connected to the video data-stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to provide a FIFO interface for buffering defect-burst data and to allow off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.
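The abstract does not list the FPGA algorithms themselves; as a rough software analogue of this kind of bandwidth reduction, the sketch below emits only above-threshold pixel events instead of full frames (the `defect_events` helper and its threshold are hypothetical, not taken from the paper):

```python
import numpy as np

def defect_events(frame: np.ndarray, threshold: int = 200):
    """Emit only (row, col, value) triples for pixels exceeding a
    brightness threshold -- a software analogue of defect-only
    in-camera bandwidth reduction (threshold is illustrative)."""
    rows, cols = np.nonzero(frame > threshold)
    return [(int(r), int(c), int(frame[r, c])) for r, c in zip(rows, cols)]

# A 100x100 "web" frame with two bright defect pixels:
frame = np.full((100, 100), 50, dtype=np.uint8)
frame[10, 20] = 255
frame[70, 80] = 230
events = defect_events(frame)
# Only 2 events leave the "camera" instead of 10,000 pixel values.
```

The real system does this on the video stream in hardware, but the data-volume argument is the same: only defect bursts cross the low-bandwidth output bus.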
2002-09-26
KENNEDY SPACE CENTER, FLA. - A view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.
2002-09-26
KENNEDY SPACE CENTER, FLA. - A closeup view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.
Video-based beam position monitoring at CHESS
NASA Astrophysics Data System (ADS)
Revesz, Peter; Pauling, Alan; Krawczyk, Thomas; Kelly, Kevin J.
2012-10-01
CHESS has pioneered the development of X-ray Video Beam Position Monitors (VBPMs). Unlike traditional photoelectron beam position monitors that rely on photoelectrons generated by the fringe edges of the X-ray beam, with VBPMs we collect information from the whole cross-section of the X-ray beam. VBPMs can also give real-time shape/size information. We have developed three types of VBPMs: (1) VBPMs based on helium luminescence from the intense white X-ray beam. In this case the CCD camera is viewing the luminescence from the side. (2) VBPMs based on luminescence of a thin (~50 micron) CVD diamond sheet as the white beam passes through it. The CCD camera is placed outside the beam line vacuum and views the diamond fluorescence through a viewport. (3) Scatter-based VBPMs. In this case the white X-ray beam passes through a thin graphite filter or Be window. The scattered X-rays create an image of the beam's footprint on an X-ray sensitive fluorescent screen using a slit placed outside the beam line vacuum. For all VBPMs we use relatively inexpensive 1.3-megapixel CCD cameras connected via USB to a Windows host for image acquisition and analysis. The VBPM host computers are networked and provide live images of the beam and streams of data about the beam position, profile and intensity to CHESS's signal logging system and to the CHESS operator. The operational use of VBPMs showed great advantage over the traditional BPMs by providing direct visual input for the CHESS operator. The VBPM precision in most cases is on the order of ~0.1 micron. On the downside, the data acquisition frequency (50-1000 ms) is inferior to the photoelectron-based BPMs. In the future, with the use of more expensive fast cameras, we will be able to create VBPMs working at the few-hundred-Hz scale.
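The abstract does not give the image-analysis details; one plausible way a VBPM could derive a beam position from the whole cross-section is an intensity-weighted centroid, sketched here with a synthetic Gaussian beam (the function name and beam parameters are illustrative assumptions, not the CHESS implementation):

```python
import numpy as np

def beam_centroid(img: np.ndarray):
    """Intensity-weighted centroid (x, y) of a beam image -- one way to
    reduce the whole beam cross-section to a single position reading."""
    total = img.sum()
    ys, xs = np.indices(img.shape)  # ys varies along rows, xs along columns
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic Gaussian beam centered at (x=40, y=25) on a 64x64 sensor:
ys, xs = np.indices((64, 64))
img = np.exp(-((xs - 40.0) ** 2 + (ys - 25.0) ** 2) / (2 * 5.0 ** 2))
cx, cy = beam_centroid(img)
# cx and cy recover the beam center to well under a pixel
```

Sub-pixel centroiding of this kind is consistent with the ~0.1 micron precision quoted above, since many pixels contribute to each position estimate.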
Who are the important predators of sea turtle nests at Wreck Rock beach?
Booth, David T.
2017-01-01
Excessive sea turtle nest predation is a problem for conservation management of sea turtle populations. This study assessed predation on nests of the endangered loggerhead sea turtle (Caretta caretta) at Wreck Rock beach adjacent to Deepwater National Park in Southeast Queensland, Australia after a control program for feral foxes was instigated. The presence of predators on the nesting dune was evaluated by tracking plots (2 × 1 m) every 100 m along the dune front. There were 21 (2014–2015) and 41 (2015–2016) plots established along the dune, and these were monitored for predator tracks daily over three consecutive months in both nesting seasons. Predator activities at nests were also recorded by the presence of tracks on top of nests until hatchlings emerged. In addition, camera traps were set to record the predator activity around selected nests. The tracks of the fox (Vulpes vulpes) and goanna (Varanus spp) were found on tracking plots. Tracking plots, nest tracks and camera traps indicated goanna abundance varied strongly between years. Goannas were widely distributed along the beach and had a Passive Activity Index (PAI) (0.31 in 2014–2015 and 0.16 in 2015–2016) approximately seven times higher than that of foxes (PAI 0.04 in 2014–2015 and 0.02 in 2015–2016). Five hundred and twenty goanna nest visitation events were recorded by tracks but no fox tracks were found at turtle nests. Camera trap data indicated that yellow-spotted goannas (Varanus panoptes) appeared at loggerhead turtle nests more frequently than lace monitors (V. varius) did, and further that lace monitors only predated nests previously opened by yellow-spotted goannas. No foxes were recorded at nests with camera traps. This study suggests that large male yellow-spotted goannas are the major predator of sea turtle nests at the Wreck Rock beach nesting aggregation and that goanna activity varies between years. PMID:28674666
Evaluation of Acquisition Strategies for Image-Based Construction Site Monitoring
NASA Astrophysics Data System (ADS)
Tuttas, S.; Braun, A.; Borrmann, A.; Stilla, U.
2016-06-01
Construction site monitoring is an essential task for keeping track of the ongoing construction work and providing up-to-date information for a Building Information Model (BIM). The BIM contains the as-planned states (geometry, schedule, costs, ...) of a construction project. For updating, the as-built state has to be acquired repeatedly and compared to the as-planned state. In the approach presented here, a 3D representation of the as-built state is calculated from photogrammetric images using multi-view stereo reconstruction. On construction sites one has to cope with several difficulties like security aspects, limited accessibility, occlusions or construction activity. Different acquisition strategies and techniques, namely (i) terrestrial acquisition with a hand-held camera, (ii) aerial acquisition using an Unmanned Aerial Vehicle (UAV) and (iii) acquisition using a fixed stereo camera pair at the boom of the crane, are tested on three test sites. They are assessed considering the special needs of the monitoring tasks and the limitations on construction sites. The three scenarios are evaluated based on the degree of automation, the required effort for acquisition, the necessary equipment and its maintenance, disturbance of the construction works, and on the accuracy and completeness of the resulting point clouds. Based on the experiences during the test cases the following conclusions can be drawn: Terrestrial acquisition has the lowest requirements on the device setup but lacks automation and coverage. The crane camera shows the lowest flexibility but the highest grade of automation. The UAV approach can provide the best coverage by combining nadir and oblique views, but can be limited by obstacles and security aspects. The accuracy of the point clouds is evaluated based on plane fitting of selected building parts. The RMS errors of the fitted parts range from 1 to a few cm for the UAV and the hand-held scenario.
First results show that the crane camera approach has the potential to reach the same accuracy level.
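The plane-fitting accuracy check described above can be sketched as a least-squares fit with the RMS of vertical residuals (a simplification: the authors do not specify their fitting procedure, and an orthogonal-distance fit would differ slightly for steep planes):

```python
import numpy as np

def plane_fit_rms(points: np.ndarray) -> float:
    """Fit z = a*x + b*y + c to an (N, 3) point cloud by least squares
    and return the RMS of the vertical residuals, as a proxy for the
    point-cloud accuracy of a planar building part."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return float(np.sqrt(np.mean(residuals ** 2)))

# Synthetic wall patch with 2 cm of measurement noise:
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(500, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 3.0 + rng.normal(0, 0.02, 500)
rms = plane_fit_rms(np.column_stack([xy, z]))
# rms recovers roughly the injected 2 cm noise level
```

Applied to real reconstructions, the same statistic yields the 1 cm to few-cm figures quoted for the UAV and hand-held scenarios.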
Chaudoin, Ambre L.; Feuerbacher, Olin; Bonar, Scott A.; Barrett, Paul J.
2015-01-01
The monitoring of threatened and endangered fishes in remote environments continues to challenge fisheries biologists. The endangered Devils Hole Pupfish Cyprinodon diabolis, which is confined to a single warm spring in Death Valley National Park, California–Nevada, has recently experienced record declines, spurring renewed conservation and recovery efforts. In February–December 2010, we investigated the timing and frequency of spawning in the species' native habitat by using three survey methods: underwater videography, above-water videography, and in-person surveys. Videography methods incorporated fixed-position, solar-powered cameras to record continuous footage of a shallow rock shelf that Devils Hole Pupfish use for spawning. In-person surveys were conducted from a platform placed above the water's surface. The underwater camera recorded more overall spawning throughout the year (mean ± SE = 0.35 ± 0.06 events/sample) than the above-water camera (0.11 ± 0.03 events/sample). Underwater videography also recorded more peak-season spawning (March: 0.83 ± 0.18 events/sample; April: 2.39 ± 0.47 events/sample) than above-water videography (March: 0.21 ± 0.10 events/sample; April: 0.9 ± 0.32 events/sample). Although the overall number of spawning events per sample did not differ significantly between underwater videography and in-person surveys, underwater videography provided a larger data set with much less variability than data from in-person surveys. Fixed videography was more cost efficient than in-person surveys ($1.31 versus $605 per collected data-hour), and underwater videography provided more usable data than above-water videography. Furthermore, video data collection was possible even under adverse conditions, such as the extreme temperatures of the region, and could be maintained successfully with few study site visits.
Our results suggest that self-contained underwater cameras can be efficient tools for monitoring remote and sensitive aquatic ecosystems.
NASA Astrophysics Data System (ADS)
Harrild, M.; Webley, P. W.; Dehn, J.
2016-12-01
An effective early warning system to detect volcanic activity is an invaluable tool, but often very expensive. Detecting and monitoring precursory events, thermal signatures, and ongoing eruptions in near real-time is essential, but conventional methods are often logistically challenging, expensive, and difficult to maintain. Our investigation explores the use of 'off-the-shelf' webcams and low-light cameras, operating in the visible to near-infrared portions of the electromagnetic spectrum, to detect and monitor volcanic incandescent activity. Large databases of webcam imagery already exist at institutions around the world, but are often extremely underutilised, and we aim to change this. We focus on the early detection of thermal signatures at volcanoes, using automated scripts to analyse individual images for changes in pixel brightness, allowing us to detect relative changes in thermally incandescent activity. Primarily, our work focuses on freely available streams of webcam images from around the world, which we can download and analyse in near real-time. When changes in activity are detected, an alert is sent to the users informing them of the changes in activity and the need for further investigation. Although relatively rudimentary, this technique provides constant monitoring for volcanoes in remote locations and developing nations, where it is not financially viable to deploy expensive equipment. We also purchased several of our own cameras, which were extensively tested in controlled laboratory settings with a black body source to determine their individual spectral response. Our aim is to deploy these cameras at active volcanoes knowing exactly how they will respond to varying levels of incandescence. They are ideal for field deployments as they are cheap ($0-1,000), consume little power, are easily replaced, and can provide telemetered near real-time data. Data from Shiveluch volcano, Russia and our spectral response lab experiments are presented here.
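A minimal version of the brightness-change alerting described above might look like the following (the `rise` threshold, frame sizes, and function name are illustrative assumptions, not the authors' operational values):

```python
import numpy as np

def brightness_alert(prev: np.ndarray, curr: np.ndarray,
                     rise: float = 10.0) -> bool:
    """Flag a possible incandescence event when the mean pixel
    brightness jumps by more than `rise` grey levels between
    successive webcam frames (threshold is an illustrative choice)."""
    return float(curr.mean() - prev.mean()) > rise

night = np.full((240, 320), 12, dtype=np.uint8)  # quiet night-time frame
glow = night.copy()
glow[80:160, 100:260] = 255                       # new incandescent region
# brightness_alert(night, glow) -> True; identical frames -> False
```

An operational script would loop over a downloaded image stream, apply a check like this per frame pair, and email users when the flag trips, which matches the alerting workflow sketched in the abstract.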
STS-39 MS Veach monitors AFP-675 panel on OV-103's aft flight deck
1991-05-06
STS039-09-036 (28 April-6 May 1991) --- Astronaut Charles L. (Lacy) Veach monitors experiment data on the aft flight deck of the Earth-orbiting Discovery. The photograph was taken with a 35mm camera. Veach and six other NASA astronauts spent over eight days in space busily collecting data for this mission, dedicated to the Department of Defense.
NASA Technical Reports Server (NTRS)
Wales, R. O. (Editor)
1981-01-01
The overall mission and spacecraft systems, testing, and operations are summarized. The mechanical subsystems are reviewed, encompassing mechanical design requirements; separation and deployment mechanisms; design and performance evaluation; and the television camera reflector monitor. Thermal control and contamination are discussed in terms of thermal control subsystems, design validation, subsystems performance, the advanced flight experiment, and the quartz-crystal microbalance contamination monitor.
Optimizing Orbital Debris Monitoring with Optical Telescopes
2010-09-01
poses an increasing risk to manned space missions and operational satellites; however, the majority of debris large enough to cause catastrophic...cameras hosted on GEO-based satellites for monitoring GEO. Performance analysis indicates significant potential contributions of these systems as a...concerns over the long-term viability of the space environment and the resulting economic impacts. The 2007 China anti-satellite test and the 2009
21 CFR 886.5820 - Closed-circuit television reading system.
Code of Federal Regulations, 2011 CFR
2011-04-01
... of a lens, video camera, and video monitor that is intended for use by a patient who has subnormal vision to magnify reading material. (b) Classification. Class I (general controls). The device is exempt...
BLM Unmanned Aircraft Systems (UAS) Resource Management Operations
NASA Astrophysics Data System (ADS)
Hatfield, M. C.; Breen, A. L.; Thurau, R.
2016-12-01
The Department of the Interior Bureau of Land Management is funding research at the University of Alaska Fairbanks to study Unmanned Aircraft Systems (UAS) Resource Management Operations. In August 2015, the team conducted flight research at UAF's Toolik Field Station (TFS). The purpose was to determine the most efficient use of small UAS to collect low-altitude airborne digital stereo images, process the stereo imagery into close-range photogrammetry products, and integrate derived imagery products into the BLM's National Assessment, Inventory and Monitoring (AIM) Strategy. The AIM Strategy assists managers in answering questions of land resources at all organizational levels and develop management policy at regional and national levels. In Alaska, the BLM began to implement its AIM strategy in the National Petroleum Reserve-Alaska (NPR-A) in 2012. The primary goals of AIM-monitoring at the NPR-A are to implement an ecological baseline to monitor ecological trends, and to develop a monitoring network to understand the efficacy of management decisions. The long-term AIM strategy also complements other ongoing NPR-A monitoring processes, collects multi-use and multi-temporal data, and supports understanding of ecosystem management strategies in order to implement defensible natural resource management policy. The campaign measured vegetation types found in the NPR-A, using UAF's TFS location as a convenient proxy. The vehicle selected was the ACUASI Ptarmigan, a small hexacopter (based on DJI S800 airframe and 3DR autopilot) capable of carrying a 1.5 kg payload for 15 min for close-range environmental monitoring missions. The payload was a stereo camera system consisting of Sony NEX7's with various lens configurations (16/20/24/35 mm). A total of 77 flights were conducted over a 4 ½ day period, with 1.5 TB of data collected. Mission variables included camera height, UAS speed, transect overlaps, and camera lenses/settings. 
Invaluable knowledge was gained as to the limitations and opportunities of field deployment of UAS relative to local conditions and vegetation type. Future efforts will focus on refining data analysis techniques and further optimizing UAS/sensor combinations and flight profiles.
A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications
Fu, Bo; Pitter, Mark C.; Russell, Noah A.
2011-01-01
Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled, however this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly-addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high-speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852
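The ROI-update step above, recomputing target positions from the low-speed full frame without interrupting high-speed acquisition, can be sketched as follows (the brightest-pixel tracker and window size are hypothetical simplifications of whatever the camera firmware actually does):

```python
import numpy as np

def update_roi(full_frame: np.ndarray, half: int = 8):
    """Recentre the high-speed ROI on the brightest target found in a
    low-speed full frame (illustrative tracker; returns the new window
    as (row, col, height, width))."""
    r, c = np.unravel_index(np.argmax(full_frame), full_frame.shape)
    top = max(0, r - half)
    left = max(0, c - half)
    return top, left, 2 * half, 2 * half

frame = np.zeros((128, 128))
frame[90, 40] = 1.0  # the target (e.g. a moving blood cell) is now here
roi = update_roi(frame)
# roi == (82, 32, 16, 16): the fast readout window follows the target
```

In the real system the full frame is acquired with leftover bandwidth, so a tracker of this kind lets the small ROI follow a moving target, e.g. the Daphnia heartbeat imaged at over 1500 fps.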
Television image compression and small animal remote monitoring
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Jackson, Robert W.
1990-01-01
It was shown that a subject can reliably discriminate a difference in video image quality (using a specific commercial product) for image compression levels ranging from 384 kbits per second to 1536 kbits per second. However, their discriminations are significantly influenced by whether or not the TV camera is stable or moving and whether or not the animals are quiescent or active, which is correlated with illumination level (daylight versus night illumination, respectively). The highest video rate used here was 1.54 megabits per second, which is about 18 percent of the so-called normal TV resolution of 8.4 MHz. Since this video rate was judged to be acceptable by 27 of the 34 subjects (79 percent), for monitoring the general health and status of small animals within their illuminated (lights on) cages (regardless of whether the camera was stable or moved), it suggests that an immediate Space Station Freedom-to-ground bandwidth reduction of about 80 percent can be tolerated without a significant loss in general monitoring capability. Another general conclusion is that the present methodology appears to be effective in quantifying visual judgments of video image quality.
Fisheye camera around view monitoring system
NASA Astrophysics Data System (ADS)
Feng, Cong; Ma, Xinjun; Li, Yuanyuan; Wu, Chenchen
2018-04-01
The 360-degree around view monitoring system is a key technology of the advanced driver assistance system, used to help the driver cover blind areas, and has high application value. In this paper, we study the transformation relationships between multiple coordinate systems to generate a panoramic image in a unified car coordinate system. Firstly, the panoramic image is divided into four regions. Using the parameters obtained by calibration, the pixels of the four fisheye images corresponding to the four sub-regions are mapped to the constructed panoramic image. On the basis of the 2D around view monitoring system, a 3D version is realized by reconstructing the projection surface. We then compare the 2D and 3D around view schemes in the unified coordinate system; the 3D scheme overcomes shortcomings of the traditional 2D scheme, such as a small field of view and prominent deformation of ground objects. Finally, the images collected by the fisheye cameras installed around the car body can be stitched into a 360-degree panoramic image.
The HST/WFC3 Quicklook Project: A User Interface to Hubble Space Telescope Wide Field Camera 3 Data
NASA Astrophysics Data System (ADS)
Bourque, Matthew; Bajaj, Varun; Bowers, Ariel; Dulude, Michael; Durbin, Meredith; Gosmeyer, Catherine; Gunning, Heather; Khandrika, Harish; Martlin, Catherine; Sunnquist, Ben; Viana, Alex
2017-06-01
The Hubble Space Telescope's Wide Field Camera 3 (WFC3) instrument, comprised of two detectors, UVIS (Ultraviolet-Visible) and IR (Infrared), has been acquiring ~ 50-100 images daily since its installation in 2009. The WFC3 Quicklook project provides a means for instrument analysts to store, calibrate, monitor, and interact with these data through the various Quicklook systems: (1) a ~ 175 TB filesystem, which stores the entire WFC3 archive on disk, (2) a MySQL database, which stores image header data, (3) a Python-based automation platform, which currently executes 22 unique calibration/monitoring scripts, (4) a Python-based code library, which provides system functionality such as logging, downloading tools, database connection objects, and filesystem management, and (5) a Python/Flask-based web interface to the Quicklook system. The Quicklook project has enabled large-scale WFC3 analyses and calibrations, such as the monitoring of the health and stability of the WFC3 instrument, the measurement of ~ 20 million WFC3/UVIS Point Spread Functions (PSFs), the creation of WFC3/IR persistence calibration products, and many others.
CCTV Coverage Index Based on Surveillance Resolution and Its Evaluation Using 3D Spatial Analysis
Choi, Kyoungah; Lee, Impyeong
2015-01-01
We propose a novel approach to evaluating how effectively a closed circuit television (CCTV) system can monitor a targeted area. With 3D models of the target area and the camera parameters of the CCTV system, the approach produces surveillance coverage index, which is newly defined in this study as a quantitative measure for surveillance performance. This index indicates the proportion of the space being monitored with a sufficient resolution to the entire space of the target area. It is determined by computing surveillance resolution at every position and orientation, which indicates how closely a specific object can be monitored with a CCTV system. We present full mathematical derivation for the resolution, which depends on the location and orientation of the object as well as the geometric model of a camera. With the proposed approach, we quantitatively evaluated the surveillance coverage of a CCTV system in an underground parking area. Our evaluation process provided various quantitative-analysis results, compelling us to examine the design of the CCTV system prior to its installation and understand the surveillance capability of an existing CCTV system. PMID:26389909
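A toy version of the coverage index can be written under a simple pinhole model in which resolution falls off as focal length over distance (the paper's full derivation also accounts for object orientation; all names and numbers here are illustrative):

```python
import numpy as np

def coverage_index(distances_m: np.ndarray, focal_px: float = 1000.0,
                   required_px_per_m: float = 25.0) -> float:
    """Fraction of sampled positions whose surveillance resolution
    (approximated as focal_px / distance, in pixels per metre, for a
    fronto-parallel object under a pinhole camera) meets the required
    threshold -- a 1D simplification of the proposed coverage index."""
    resolution = focal_px / distances_m
    return float(np.mean(resolution >= required_px_per_m))

# Sample positions 5 m to 100 m from the camera, 1 m apart:
d = np.linspace(5, 100, 96)
idx = coverage_index(d)
# only positions within 40 m reach 25 px/m, so idx = 36/96 = 0.375
```

The published approach evaluates the same kind of ratio over a 3D model of the target area, at every position and orientation, rather than along a single ray.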
NASA Astrophysics Data System (ADS)
Krimmer, J.; Ley, J.-L.; Abellan, C.; Cachemiche, J.-P.; Caponetto, L.; Chen, X.; Dahoumane, M.; Dauvergne, D.; Freud, N.; Joly, B.; Lambert, D.; Lestand, L.; Létang, J. M.; Magne, M.; Mathez, H.; Maxim, V.; Montarou, G.; Morel, C.; Pinto, M.; Ray, C.; Reithinger, V.; Testa, E.; Zoccarato, Y.
2015-07-01
A Compton camera is being developed for the purpose of ion-range monitoring during hadrontherapy via the detection of prompt-gamma rays. The system consists of a scintillating fiber beam tagging hodoscope, a stack of double sided silicon strip detectors (90×90×2 mm³, 2×64 strips) as scatter detectors, as well as bismuth germanate (BGO) scintillation detectors (38×35×30 mm³, 100 blocks) as absorbers. The individual components will be described, together with the status of their characterization.
Solar-Powered Airplane with Cameras and WLAN
NASA Technical Reports Server (NTRS)
Higgins, Robert G.; Dunagan, Steve E.; Sullivan, Don; Slye, Robert; Brass, James; Leung, Joe G.; Gallmeyer, Bruce; Aoyagi, Michio; Wei, Mei Y.; Herwitz, Stanley R.;
2004-01-01
An experimental airborne remote sensing system includes a remotely controlled, lightweight, solar-powered airplane (see figure) that carries two digital-output electronic cameras and communicates with a nearby ground control and monitoring station via a wireless local-area network (WLAN). The speed of the airplane -- typically <50 km/h -- is low enough to enable loitering over farm fields, disaster scenes, or other areas of interest to collect high-resolution digital imagery that could be delivered to end users (e.g., farm managers or disaster-relief coordinators) in nearly real time.
Remote media vision-based computer input device
NASA Astrophysics Data System (ADS)
Arabnia, Hamid R.; Chen, Ching-Yi
1991-11-01
In this paper, we introduce a vision-based computer input device which has been built at the University of Georgia. The user of this system gives commands to the computer without touching any physical device. The system receives input through a CCD camera; it is PC-based and is built on top of the DOS operating system. The major components of the input device are: a monitor, an image capturing board, a CCD camera, and some software (developed by us). These are interfaced with a standard PC running under the DOS operating system.
Highly Portable Airborne Multispectral Imaging System
NASA Technical Reports Server (NTRS)
Lehnemann, Robert; Mcnamee, Todd
2001-01-01
A portable instrumentation system is described that includes an airborne and a ground-based subsystem. It can acquire multispectral image data over swaths of terrain ranging in width from about 1 to 1.5 km. The system was developed especially for use in coastal environments and is well suited for performing remote sensing and general environmental monitoring. It includes a small, unpiloted, remotely controlled airplane that carries a forward-looking camera for navigation, three downward-looking monochrome video cameras for imaging terrain in three spectral bands, a video transmitter, and a Global Positioning System (GPS) receiver.
2009-08-01
improved road access and overhead power. The site contains a WISS shelter, five (5) 40’ connex containers, UMTE pedestal, shelter, and a weather ...monitoring station (Figure 3-1). 3.1.8.3 Camera I site consists of a roughly 1-acre site with semi-improved road access and overhead power. The site...characteristics such as microclimate, soil temperature, and moisture regimes, which in turn influence the type of vegetation that will be found there
Documenting Western Burrowing Owl Reproduction and Activity Patterns Using Motion-Activated Cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Derek B.; Greger, Paul D.
We used motion-activated cameras to monitor the reproduction and patterns of activity of the Burrowing Owl (Athene cunicularia) above ground at 45 burrows in south-central Nevada during the breeding seasons of 1999, 2000, 2001, and 2005. The 37 broods, encompassing 180 young, raised over the four years represented an average of 4.9 young per successful breeding pair. Young and adult owls were detected at the burrow entrance at all times of the day and night, but adults were detected more frequently during afternoon/early evening than were young. Motion-activated cameras require less effort to implement than other techniques. Limitations include photographing only a small percentage of owl activity at the burrow; not detecting the actual number of eggs, young, or number fledged; and not being able to track individual owls over time. Further work is also necessary to compare the accuracy of productivity estimates generated from motion-activated cameras with other techniques.
NASA Astrophysics Data System (ADS)
Santos, C. Almeida; Costa, C. Oliveira; Batista, J.
2016-05-01
The paper describes a kinematic model-based solution to estimate simultaneously the calibration parameters of the vision system and the full motion (6-DOF) of large civil engineering structures, namely the decks of long suspension bridges, from a sequence of stereo images captured by digital cameras. Using an arbitrary number of images and assuming smooth structure motion, an Iterated Extended Kalman Filter is used to recursively estimate the projection matrices of the cameras and the structure's full motion (displacement and rotation) over time, supporting structural health monitoring objectives. Results of the performance evaluation, obtained by numerical simulation and in real experiments, are reported. The real experiments were carried out in indoor and outdoor environments using a reduced-scale structure model to impose controlled motions. In both cases, the results obtained with a minimum setup comprising only two cameras and four non-coplanar tracking points showed highly accurate on-line camera calibration and full-motion estimation.
Kinect2 - respiratory movement detection study.
Rihana, Sandy; Younes, Elie; Visvikis, Dimitris; Fayad, Hadi
2016-08-01
Radiotherapy is one of the main cancer treatments. It consists of irradiating tumor cells to destroy them while sparing healthy tissue. The treatment is planned based on Computed Tomography (CT) and is delivered in fractions over several days. One of the main challenges is repositioning the patient identically every day so that the tumor volume is irradiated while healthy tissues are spared. Many patient positioning techniques are available, but they are either invasive or inaccurate: they rely on tattooed markers on the patient's skin aligned with a laser system calibrated in the treatment room, or on additional X-ray irradiation. Current systems such as Vision RT use two Time-of-Flight cameras. Time-of-Flight cameras have the advantage of a very fast acquisition rate, which allows real-time monitoring of patient movement and repositioning. The purpose of this work is to test the Microsoft Kinect2 camera for potential use in patient positioning and respiration triggering. This type of Time-of-Flight camera is non-invasive and inexpensive, which facilitates its transfer to clinical practice.
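The respiratory-monitoring part of such a system can be illustrated with a minimal sketch: averaging the depth values of a chest region of interest over a stack of depth frames yields a one-dimensional breathing trace. The function name and the ROI convention are assumptions for illustration, not the authors' processing pipeline.

```python
import numpy as np

def respiratory_trace(depth_frames, roi):
    """Extract a 1-D respiratory signal from a sequence of depth frames
    (e.g. from a time-of-flight camera such as the Kinect2) by averaging
    depth over a chest region of interest.
    roi = (row_min, row_max, col_min, col_max), an assumed convention."""
    r0, r1, c0, c1 = roi
    # Mean ROI depth per frame: the chest rises and falls with breathing
    trace = np.array([f[r0:r1, c0:c1].mean() for f in depth_frames])
    return trace - trace.mean()  # zero-mean breathing signal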
NASA Astrophysics Data System (ADS)
Lee, Young Sub; Kim, Jin Su; Deuk Cho, Kyung; Kang, Joo Hyun; Moo Lim, Sang
2015-07-01
We performed imaging and therapy using I-131 trastuzumab and a pinhole collimator attached to a conventional gamma camera for human use in a mouse model. The conventional clinical gamma camera with a 2-mm-radius pinhole collimator was used for monitoring the animal model after administration of I-131 trastuzumab. The highest and lowest radiation-receiving organs were osteogenic cells (0.349 mSv/MBq) and skin (0.137 mSv/MBq), respectively. The mean coefficients of variation (%CV) of the effective dose equivalent and effective dose were 0.091 and 0.093 mSv/MBq, respectively. We showed the feasibility of a pinhole-attached conventional gamma camera for human use for the assessment of dosimetry. Mouse dosimetry and prediction of human dosimetry could provide data on the safety and efficacy of newly developed therapeutic schemes.
Commercially available high-speed system for recording and monitoring vocal fold vibrations.
Sekimoto, Sotaro; Tsunoda, Koichi; Kaga, Kimitaka; Makiyama, Kiyoshi; Tsunoda, Atsunobu; Kondo, Kenji; Yamasoba, Tatsuya
2009-12-01
We have developed a special purpose adaptor making it possible to use a commercially available high-speed camera to observe vocal fold vibrations during phonation. The camera can capture dynamic digital images at speeds of 600 or 1200 frames per second. The adaptor is equipped with a universal-type attachment and can be used with most endoscopes sold by various manufacturers. Satisfactory images can be obtained with a rigid laryngoscope even with the standard light source. The total weight of the adaptor and camera (including battery) is only 1010 g. The new system comprising the high-speed camera and the new adaptor can be purchased for about $3000 (US), while the least expensive stroboscope costs about 10 times that price, and a high-performance high-speed imaging system may cost 100 times as much. Therefore the system is both cost-effective and useful in the outpatient clinic or casualty setting, on house calls, and for the purpose of student or patient education.
[Virtual reality in ophthalmological education].
Wagner, C; Schill, M; Hennen, M; Männer, R; Jendritza, B; Knorz, M C; Bender, H J
2001-04-01
We present a computer-based medical training workstation for the simulation of intraocular eye surgery. The surgeon manipulates two original instruments inside a mechanical model of the eye. The instrument positions are tracked by CCD cameras and monitored by a PC, which renders the scenery using a computer-graphic model of the eye and the instruments. The simulator incorporates a model of the operation table, a mechanical eye, three CCD cameras for position tracking, a stereo display, and a computer. The three cameras are mounted under the operation table, from where they can observe the interior of the mechanical eye. Using small markers, the cameras recognize the instruments and the eye. Their positions and orientations in space are determined by stereoscopic back projection. The simulation runs at more than 20 frames per second and provides a realistic impression of the surgery. It includes the cold light source, which can be moved inside the eye, and the shadow of the instruments on the retina, which is important for navigational purposes.
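Stereoscopic back projection of this kind can be sketched as linear triangulation: given the projection matrices of two calibrated cameras and the marker's image coordinates in each view, the 3-D position follows from a homogeneous least-squares solve. This is a standard DLT-style formulation offered as an illustration of the technique the abstract names, not the simulator's actual code.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3-D point from its projections in two calibrated cameras.
    P1, P2: 3x4 projection matrices; uv1, uv2: (u, v) image coordinates.
    Linear (DLT) triangulation: each view gives two constraints on the
    homogeneous point X, solved by SVD."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]           # null-space vector minimizes ||A X||
    return X[:3] / X[3]  # de-homogenize
```

With three cameras, as in the simulator, a third view simply contributes two more rows to the constraint matrix, improving robustness when one marker is occluded.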
Using remote underwater video to estimate freshwater fish species richness.
Ebner, B C; Morgan, D L
2013-05-01
Species richness records from replicated deployments of baited remote underwater video stations (BRUVS) and unbaited remote underwater video stations (UBRUVS) in shallow (<1 m) and deep (>1 m) water were compared with those obtained from using fyke nets, gillnets and beach seines. Maximum species richness (14 species) was achieved through a combination of conventional netting and camera-based techniques. Chanos chanos was the only species not recorded on camera, whereas Lutjanus argentimaculatus, Selenotoca multifasciata and Gerres filamentosus were recorded on camera in all three waterholes but were not detected by netting. BRUVSs and UBRUVSs provided versatile techniques that were effective at a range of depths and microhabitats. It is concluded that cameras warrant application in aquatic areas of high conservation value with high visibility. Non-extractive video methods are particularly desirable where threatened species are a focus of monitoring or might be encountered as by-catch in net meshes.
Radiation imaging with a new scintillator and a CMOS camera
NASA Astrophysics Data System (ADS)
Kurosawa, S.; Shoji, Y.; Pejchal, J.; Yokota, Y.; Yoshikawa, A.
2014-07-01
A new imaging system consisting of a high-sensitivity complementary metal-oxide semiconductor (CMOS) sensor, a microscope and a new scintillator, Ce-doped Gd3(Al,Ga)5O12 (Ce:GAGG) grown by the Czochralski process, has been developed. The noise, dark current and sensitivity of the CMOS camera (ORCA-Flash4.0, Hamamatsu) were characterized and compared to those of a conventional CMOS, whose sensitivity is at the same level as that of a charge-coupled device (CCD) camera. Without the scintillator, this system had a good position resolution of 2.1 ± 0.4 μm, and we succeeded in obtaining alpha-ray images using a 1-mm-thick Ce:GAGG crystal. This system can be applied, for example, as a high-energy X-ray beam profile monitor.